| Column | Stats | Min | Max |
|---|---|---|---|
| sha | stringlengths | 40 | 40 |
| text | stringlengths | 1 | 13.4M |
| id | stringlengths | 2 | 117 |
| tags | sequencelengths | 1 | 7.91k |
| created_at | stringlengths | 25 | 25 |
| metadata | stringlengths | 2 | 875k |
| last_modified | stringlengths | 25 | 25 |
| arxiv | sequencelengths | 0 | 25 |
| languages | sequencelengths | 0 | 7.91k |
| tags_str | stringlengths | 17 | 159k |
| text_str | stringlengths | 1 | 447k |
| text_lists | sequencelengths | 0 | 352 |
| processed_texts | sequencelengths | 1 | 353 |
894d51ef8e444360826fef970442b4b6e882ff64 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-66b_eval
* Dataset: jeffdshen/neqa2_8shot
* Config: jeffdshen--neqa2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
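As context, here is a minimal sketch of inspecting such a prediction dump, assuming the repository loads with the standard `datasets` API; the prediction column names are not documented in this card:

```python
from datasets import load_dataset

# Hypothetical inspection sketch: load the AutoTrain prediction dump from the Hub.
# The column names depend on AutoTrain's prediction format and are not
# documented in this card.
preds = load_dataset(
    "autoevaluate/autoeval-eval-jeffdshen__neqa2_8shot-jeffdshen__neqa2_8shot-959823-1853063406",
    split="train",
)
print(preds)     # dataset summary: features and number of rows
print(preds[0])  # first prediction record
```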
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__neqa2_8shot-jeffdshen__neqa2_8shot-959823-1853063406 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T20:09:53+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/neqa2_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-66b_eval", "metrics": [], "dataset_name": "jeffdshen/neqa2_8shot", "dataset_config": "jeffdshen--neqa2_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-24T03:31:40+00:00 | [] | [] |
1acb7b8cd33ab32069f18e4b3bda902ee86cd7b1 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-125m_eval
* Dataset: jeffdshen/redefine_math2_8shot
* Config: jeffdshen--redefine_math2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
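The `col_mapping` entry in the job metadata below maps the source dataset's columns onto the fields the evaluator expects (`text`, `classes`, `target`). A hypothetical sketch of applying that mapping yourself; the renaming step is an illustration, not AutoTrain's actual code:

```python
from datasets import load_dataset

# col_mapping from the eval_info metadata below:
#   {"text": "prompt", "classes": "classes", "target": "answer_index"}
# The config name ("jeffdshen--redefine_math2_8shot") may need to be passed
# explicitly depending on how the dataset repository is laid out.
src = load_dataset("jeffdshen/redefine_math2_8shot", split="train")
src = src.rename_columns({"prompt": "text", "answer_index": "target"})
print(src.column_names)  # now includes "text", "classes", "target"
```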
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math2_8shot-jeffdshen__redefine_mat-af4c71-1853163407 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T20:10:08+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math2_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-125m_eval", "metrics": [], "dataset_name": "jeffdshen/redefine_math2_8shot", "dataset_config": "jeffdshen--redefine_math2_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T20:13:41+00:00 | [] | [] |
c5c85b748f0add69a515584101f75d31a23c3eec | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-350m_eval
* Dataset: jeffdshen/redefine_math2_8shot
* Config: jeffdshen--redefine_math2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math2_8shot-jeffdshen__redefine_mat-af4c71-1853163408 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T20:11:42+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math2_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-350m_eval", "metrics": [], "dataset_name": "jeffdshen/redefine_math2_8shot", "dataset_config": "jeffdshen--redefine_math2_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T20:17:23+00:00 | [] | [] |
d1b0e19328570ff6d6b66feb6f1f1d49cc2586a6 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-1.3b_eval
* Dataset: jeffdshen/redefine_math2_8shot
* Config: jeffdshen--redefine_math2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math2_8shot-jeffdshen__redefine_mat-af4c71-1853163409 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T20:13:12+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math2_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-1.3b_eval", "metrics": [], "dataset_name": "jeffdshen/redefine_math2_8shot", "dataset_config": "jeffdshen--redefine_math2_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T20:23:27+00:00 | [] | [] |
29878dfab55f73640bd769dda9097009ba88cac7 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-6.7b_eval
* Dataset: jeffdshen/redefine_math2_8shot
* Config: jeffdshen--redefine_math2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math2_8shot-jeffdshen__redefine_mat-af4c71-1853163411 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T20:20:01+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math2_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-6.7b_eval", "metrics": [], "dataset_name": "jeffdshen/redefine_math2_8shot", "dataset_config": "jeffdshen--redefine_math2_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T20:54:20+00:00 | [] | [] |
3d4e995498c994515671fe0ffa35466db46aa819 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-2.7b_eval
* Dataset: jeffdshen/redefine_math2_8shot
* Config: jeffdshen--redefine_math2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math2_8shot-jeffdshen__redefine_mat-af4c71-1853163410 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T20:20:03+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math2_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-2.7b_eval", "metrics": [], "dataset_name": "jeffdshen/redefine_math2_8shot", "dataset_config": "jeffdshen--redefine_math2_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T20:36:03+00:00 | [] | [] |
9774214c388611978defa2b05f2cbb6eafc83ef6 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-13b_eval
* Dataset: jeffdshen/redefine_math2_8shot
* Config: jeffdshen--redefine_math2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math2_8shot-jeffdshen__redefine_mat-af4c71-1853163412 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T20:23:20+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math2_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-13b_eval", "metrics": [], "dataset_name": "jeffdshen/redefine_math2_8shot", "dataset_config": "jeffdshen--redefine_math2_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T21:27:37+00:00 | [] | [] |
2d685476ba41df49df84ce83869ec97f2c48a09d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-30b_eval
* Dataset: jeffdshen/redefine_math2_8shot
* Config: jeffdshen--redefine_math2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math2_8shot-jeffdshen__redefine_mat-af4c71-1853163413 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T20:23:45+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math2_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-30b_eval", "metrics": [], "dataset_name": "jeffdshen/redefine_math2_8shot", "dataset_config": "jeffdshen--redefine_math2_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T23:15:46+00:00 | [] | [] |
003cddb5c422851a1ed82a771e069487afd0dbe5 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-66b_eval
* Dataset: jeffdshen/redefine_math2_8shot
* Config: jeffdshen--redefine_math2_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math2_8shot-jeffdshen__redefine_mat-af4c71-1853163414 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T20:25:09+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math2_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-66b_eval", "metrics": [], "dataset_name": "jeffdshen/redefine_math2_8shot", "dataset_config": "jeffdshen--redefine_math2_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-24T02:25:10+00:00 | [] | [] |
80806d78f92ead5ac7d7b71e0aad69d63da69144 | # Portuguese Legal Sentences
A collection of legal sentences from the Portuguese Supreme Court of Justice.
The dataset is intended for masked language modeling (MLM) and TSDAE training.
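Since the card names TSDAE as a target use, here is a minimal training sketch using sentence-transformers; the column name `text`, the base model, loadability via the `datasets` API, and all hyperparameters are assumptions:

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, losses
from sentence_transformers.datasets import DenoisingAutoEncoderDataset

# Hypothetical TSDAE sketch; the "text" column and base model are assumptions.
rows = load_dataset("rufimelo/PortugueseLegalSentences-v0", split="train")
sentences = [r["text"] for r in rows]

train_data = DenoisingAutoEncoderDataset(sentences)  # adds noise via token deletion
loader = DataLoader(train_data, batch_size=8, shuffle=True)

model = SentenceTransformer("neuralmind/bert-base-portuguese-cased")
loss = losses.DenoisingAutoEncoderLoss(model, tie_encoder_decoder=True)
model.fit(train_objectives=[(loader, loss)], epochs=1, show_progress_bar=True)
```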
### Contributions
[@rufimelo99](https://github.com/rufimelo99)
| rufimelo/PortugueseLegalSentences-v0 | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:pt",
"license:apache-2.0",
"region:us"
] | 2022-10-23T20:27:33+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["pt"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "source_datasets": ["original"]} | 2022-10-23T23:55:55+00:00 | [] | [
"pt"
] |
71a7df4dec587db7ca75e77e17820f934b9239ee | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-125m_eval
* Dataset: jeffdshen/redefine_math0_8shot
* Config: jeffdshen--redefine_math0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
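For context, zero-shot classification jobs like this one are typically scored by asking the causal LM which class continuation it finds most likely. The sketch below illustrates that idea and is an assumption, not AutoTrain's actual implementation; it uses `facebook/opt-125m` as an illustrative stand-in for the eval model, and it scores the full sequence rather than only the continuation, which is a simplification:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative scoring sketch; not AutoTrain's actual code.
tok = AutoTokenizer.from_pretrained("facebook/opt-125m")
lm = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
lm.eval()

def predict(prompt: str, classes: list[str]) -> int:
    """Return the index of the class whose continuation the LM scores highest."""
    totals = []
    for c in classes:
        ids = tok(prompt + c, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = lm(ids, labels=ids).loss        # mean negative log-likelihood per token
        totals.append(-loss.item() * ids.size(1))  # approximate total log-probability
    return max(range(len(classes)), key=totals.__getitem__)
```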
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math0_8shot-jeffdshen__redefine_mat-1c694b-1853263415 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T20:29:13+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math0_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-125m_eval", "metrics": [], "dataset_name": "jeffdshen/redefine_math0_8shot", "dataset_config": "jeffdshen--redefine_math0_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T20:32:52+00:00 | [] | [] |
8708ce52df013e02ce64fa1d724dd9658fbe0337 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-1.3b_eval
* Dataset: jeffdshen/redefine_math0_8shot
* Config: jeffdshen--redefine_math0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math0_8shot-jeffdshen__redefine_mat-1c694b-1853263417 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T20:39:03+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math0_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-1.3b_eval", "metrics": [], "dataset_name": "jeffdshen/redefine_math0_8shot", "dataset_config": "jeffdshen--redefine_math0_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T20:55:09+00:00 | [] | [] |
c79968e3486c761ac1dc22e70ef3543566a865d8 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-350m_eval
* Dataset: jeffdshen/redefine_math0_8shot
* Config: jeffdshen--redefine_math0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math0_8shot-jeffdshen__redefine_mat-1c694b-1853263416 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T20:39:05+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math0_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-350m_eval", "metrics": [], "dataset_name": "jeffdshen/redefine_math0_8shot", "dataset_config": "jeffdshen--redefine_math0_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T20:45:45+00:00 | [] | [] |
21d6d506cd6554ed5d501ecf3ff9057e3cee19ef | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-2.7b_eval
* Dataset: jeffdshen/redefine_math0_8shot
* Config: jeffdshen--redefine_math0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math0_8shot-jeffdshen__redefine_mat-1c694b-1853263418 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T20:43:01+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math0_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-2.7b_eval", "metrics": [], "dataset_name": "jeffdshen/redefine_math0_8shot", "dataset_config": "jeffdshen--redefine_math0_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T21:09:30+00:00 | [] | [] |
45863e98e30abf429c3674f303b30e6b12a96c49 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-13b_eval
* Dataset: jeffdshen/redefine_math0_8shot
* Config: jeffdshen--redefine_math0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math0_8shot-jeffdshen__redefine_mat-1c694b-1853263420 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T20:51:57+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math0_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-13b_eval", "metrics": [], "dataset_name": "jeffdshen/redefine_math0_8shot", "dataset_config": "jeffdshen--redefine_math0_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T22:26:46+00:00 | [] | [] |
9afc868b3ca6999fce836cdddbf46b9a034dcb9a | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-6.7b_eval
* Dataset: jeffdshen/redefine_math0_8shot
* Config: jeffdshen--redefine_math0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math0_8shot-jeffdshen__redefine_mat-1c694b-1853263419 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T20:51:58+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math0_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-6.7b_eval", "metrics": [], "dataset_name": "jeffdshen/redefine_math0_8shot", "dataset_config": "jeffdshen--redefine_math0_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-23T21:49:05+00:00 | [] | [] |
5b2acfeeae4274be62c8f9a05acea1b1b33b63b8 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-30b_eval
* Dataset: jeffdshen/redefine_math0_8shot
* Config: jeffdshen--redefine_math0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math0_8shot-jeffdshen__redefine_mat-1c694b-1853263421 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T21:00:50+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math0_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-30b_eval", "metrics": [], "dataset_name": "jeffdshen/redefine_math0_8shot", "dataset_config": "jeffdshen--redefine_math0_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-24T01:54:44+00:00 | [] | [] |
f87ed8be2923f9a467f70386ba48da3cab41992f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-66b_eval
* Dataset: jeffdshen/redefine_math0_8shot
* Config: jeffdshen--redefine_math0_8shot
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jeffdshen](https://huggingface.co/jeffdshen) for evaluating this model. | autoevaluate/autoeval-eval-jeffdshen__redefine_math0_8shot-jeffdshen__redefine_mat-1c694b-1853263422 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-23T21:01:16+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["jeffdshen/redefine_math0_8shot"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-66b_eval", "metrics": [], "dataset_name": "jeffdshen/redefine_math0_8shot", "dataset_config": "jeffdshen--redefine_math0_8shot", "dataset_split": "train", "col_mapping": {"text": "prompt", "classes": "classes", "target": "answer_index"}}} | 2022-10-24T05:32:10+00:00 | [] | [] |
afaaca07fb88eeecf10689a1b9c35b2a143dd599 | # Dataset Card for "malicious_urls"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | joshtobin/malicious_urls | [
"region:us"
] | 2022-10-23T22:02:35+00:00 | {"dataset_info": {"features": [{"name": "url_len", "dtype": "int64"}, {"name": "abnormal_url", "dtype": "int64"}, {"name": "https", "dtype": "int64"}, {"name": "digits", "dtype": "int64"}, {"name": "letters", "dtype": "int64"}, {"name": "shortening_service", "dtype": "int64"}, {"name": "ip_address", "dtype": "int64"}, {"name": "@", "dtype": "int64"}, {"name": "?", "dtype": "int64"}, {"name": "-", "dtype": "int64"}, {"name": "=", "dtype": "int64"}, {"name": ".", "dtype": "int64"}, {"name": "#", "dtype": "int64"}, {"name": "%", "dtype": "int64"}, {"name": "+", "dtype": "int64"}, {"name": "$", "dtype": "int64"}, {"name": "!", "dtype": "int64"}, {"name": "*", "dtype": "int64"}, {"name": ",", "dtype": "int64"}, {"name": "//", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 32000, "num_examples": 200}], "download_size": 9837, "dataset_size": 32000}} | 2022-10-23T22:28:01+00:00 | [] | [] |
4b2859096f19a75f613a7a63183a9fadaa48ba3f |
# Dataset Card for Pokémon BLIP captions with English and Chinese.
Dataset used to train a Pokémon text-to-image model; it adds a Chinese column to [Pokémon BLIP captions](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions).
BLIP-generated captions for Pokémon images from the Few Shot Pokémon dataset introduced in "Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis" (FastGAN). The original images were obtained from FastGAN-pytorch and captioned with the pre-trained BLIP model.
Each row contains `image`, `en_text` (caption in English), and `zh_text` (caption in Chinese) keys. `image` is a varying-size PIL JPEG, and each text field is the accompanying caption. Only a train split is provided.
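A minimal loading sketch based on the fields described above; only `image`, `en_text`, and `zh_text` come from this card, and everything else is illustrative:

```python
from datasets import load_dataset

ds = load_dataset("svjack/pokemon-blip-captions-en-zh", split="train")
row = ds[0]
print(row["en_text"])            # English caption
print(row["zh_text"])            # Chinese caption
row["image"].save("sample.jpg")  # image is a PIL object per the card
```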
The Chinese captions were translated with [DeepL](https://www.deepl.com/translator). | svjack/pokemon-blip-captions-en-zh | [
"task_categories:text-to-image",
"annotations_creators:machine-generated",
"language_creators:other",
"multilinguality:multilingual",
"size_categories:n<1K",
"source_datasets:huggan/few-shot-pokemon",
"language:en",
"language:zh",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-10-24T00:59:52+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["other"], "language": ["en", "zh"], "license": "cc-by-nc-sa-4.0", "multilinguality": ["multilingual"], "size_categories": ["n<1K"], "source_datasets": ["huggan/few-shot-pokemon"], "task_categories": ["text-to-image"], "task_ids": [], "pretty_name": "Pok\u00e9mon BLIP captions", "tags": []} | 2022-10-31T06:23:03+00:00 | [] | [
"en",
"zh"
] |
08ef5a71e9a1381eb205610dda214a5b01e3e55a | # Dataset Card for "speechocean762_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jbpark0614/speechocean762_train | [
"region:us"
] | 2022-10-24T07:57:13+00:00 | {"dataset_info": {"features": [{"name": "index", "dtype": "int64"}, {"name": "speaker_id_str", "dtype": "int64"}, {"name": "speaker_id", "dtype": "int64"}, {"name": "question_id", "dtype": "int64"}, {"name": "total_score", "dtype": "int64"}, {"name": "accuracy", "dtype": "int64"}, {"name": "completeness", "dtype": "float64"}, {"name": "fluency", "dtype": "int64"}, {"name": "prosodic", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "audio", "dtype": "audio"}, {"name": "path", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 290407029.0, "num_examples": 2500}], "download_size": 316008757, "dataset_size": 290407029.0}} | 2022-10-24T07:58:04+00:00 | [] | [] |
7d9d2774a2abed6351ffaddbee0fdb34d7196457 |
# Dataset Card for InfantBooks
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [https://www.mpi-inf.mpg.de/children-texts-for-commonsense](https://www.mpi-inf.mpg.de/children-texts-for-commonsense)
- **Paper:** Do Children Texts Hold The Key To Commonsense Knowledge?
### Dataset Summary
A dataset of infants/children's books.
### Languages
All the books are in English.
## Dataset Structure
### Data Instances
malis-friend_BookDash-FKB.txt,"Then a taxi driver, hooting around the yard with his wire car. Mali enjoys playing by himself..."
### Data Fields
- title: The title of the book
- content: The content of the book
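A minimal loading sketch, assuming the repository id of this card and the two fields above:
```python
from datasets import load_dataset

# Each record is one children's book with its title and full text.
books = load_dataset("Aunsiels/InfantBooks", split="train")

print(books[0]["title"])
print(books[0]["content"][:200])  # first 200 characters of the book
```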
## Dataset Creation
### Curation Rationale
The goal of the dataset is to study infant books, which are supposed to be easier to understand than normal texts. In particular, the original goal was to study if these texts contain more commonsense knowledge.
### Source Data
#### Initial Data Collection and Normalization
We automatically collected kids' books on the web.
#### Who are the source language producers?
Native speakers.
### Citation Information
```
Romero, J., & Razniewski, S. (2022).
Do Children Texts Hold The Key To Commonsense Knowledge?
In Proceedings of the 2022 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning.
```
| Aunsiels/InfantBooks | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:gpl",
"research paper",
"kids",
"children",
"books",
"region:us"
] | 2022-10-24T07:57:35+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["gpl"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "InfantBooks", "tags": ["research paper", "kids", "children", "books"]} | 2022-10-24T10:20:01+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-gpl #research paper #kids #children #books #region-us
|
# Dataset Card for InfantBooks
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Dataset Creation
- Curation Rationale
- Source Data
- Additional Information
- Citation Information
## Dataset Description
- Homepage: URL
- Paper: Do Children Texts Hold The Key To Commonsense Knowledge?
### Dataset Summary
A dataset of infants/children's books.
### Languages
All the books are in English.
## Dataset Structure
### Data Instances
malis-friend_BookDash-URL,"Then a taxi driver, hooting around the yard with his wire car. Mali enjoys playing by himself..."
### Data Fields
- title: The title of the book
- content: The content of the book
## Dataset Creation
### Curation Rationale
The goal of the dataset is to study infant books, which are supposed to be easier to understand than normal texts. In particular, the original goal was to study if these texts contain more commonsense knowledge.
### Source Data
#### Initial Data Collection and Normalization
We automatically collected kids' books on the web.
#### Who are the source language producers?
Native speakers.
| [
"# Dataset Card for InfantBooks",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n- Dataset Creation\n - Curation Rationale\n - Source Data\n- Additional Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: URL\n- Paper: Do Children Texts Hold The Key To Commonsense Knowledge?",
"### Dataset Summary\n\nA dataset of infants/children's books.",
"### Languages\n\nAll the books are in English;",
"## Dataset Structure",
"### Data Instances\n\nmalis-friend_BookDash-URL,\"Then a taxi driver, hooting around the yard with his wire car. Mali enjoys playing by himself...\"",
"### Data Fields\n\n- title: The title of the book\n- content: The content of the book",
"## Dataset Creation",
"### Curation Rationale\n\nThe goal of the dataset is to study infant books, which are supposed to be easier to understand than normal texts. In particular, the original goal was to study if these texts contain more commonsense knowledge.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nWe automatically collected kids' books on the web.",
"#### Who are the source language producers?\n\nNative speakers."
] | [
"TAGS\n#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-gpl #research paper #kids #children #books #region-us \n",
"# Dataset Card for InfantBooks",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n- Dataset Creation\n - Curation Rationale\n - Source Data\n- Additional Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: URL\n- Paper: Do Children Texts Hold The Key To Commonsense Knowledge?",
"### Dataset Summary\n\nA dataset of infants/children's books.",
"### Languages\n\nAll the books are in English;",
"## Dataset Structure",
"### Data Instances\n\nmalis-friend_BookDash-URL,\"Then a taxi driver, hooting around the yard with his wire car. Mali enjoys playing by himself...\"",
"### Data Fields\n\n- title: The title of the book\n- content: The content of the book",
"## Dataset Creation",
"### Curation Rationale\n\nThe goal of the dataset is to study infant books, which are supposed to be easier to understand than normal texts. In particular, the original goal was to study if these texts contain more commonsense knowledge.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nWe automatically collected kids' books on the web.",
"#### Who are the source language producers?\n\nNative speakers."
] |
d317974c2e9cf1b847048c49f36760808b2337f6 | # Dataset Card for "speechocean762_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jbpark0614/speechocean762_test | [
"region:us"
] | 2022-10-24T07:58:05+00:00 | {"dataset_info": {"features": [{"name": "index", "dtype": "int64"}, {"name": "speaker_id_str", "dtype": "int64"}, {"name": "speaker_id", "dtype": "int64"}, {"name": "question_id", "dtype": "int64"}, {"name": "total_score", "dtype": "int64"}, {"name": "accuracy", "dtype": "int64"}, {"name": "completeness", "dtype": "float64"}, {"name": "fluency", "dtype": "int64"}, {"name": "prosodic", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "audio", "dtype": "audio"}, {"name": "path", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 288402967.0, "num_examples": 2500}], "download_size": 295709940, "dataset_size": 288402967.0}} | 2022-10-24T07:58:50+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "speechocean762_test"
More Information needed | [
"# Dataset Card for \"speechocean762_test\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"speechocean762_test\"\n\nMore Information needed"
] |
8d49c25cba65077c093016cbed51e087f88af77c | # Dataset Card for "speechocean762"
The datasets introduced in
- Zhang, Junbo, et al. "speechocean762: An open-source non-native english speech corpus for pronunciation assessment." arXiv preprint arXiv:2104.01378 (2021).
- Currently, phonetic-level evaluation is omitted (only total sentence-level scores are used).
- The original full data link: https://github.com/jimbozhang/speechocean762
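A minimal loading sketch, assuming the field names listed in this repository's metadata (text, audio, total_score, ...):
```python
from datasets import load_dataset

ocean = load_dataset("jbpark0614/speechocean762")

sample = ocean["train"][0]
print(sample["text"], sample["total_score"])  # prompt text and sentence-level score

audio = sample["audio"]  # decoded on access
print(audio["sampling_rate"], len(audio["array"]))
```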
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jbpark0614/speechocean762 | [
"region:us"
] | 2022-10-24T08:12:33+00:00 | {"dataset_info": {"features": [{"name": "index", "dtype": "int64"}, {"name": "speaker_id_str", "dtype": "int64"}, {"name": "speaker_id", "dtype": "int64"}, {"name": "question_id", "dtype": "int64"}, {"name": "total_score", "dtype": "int64"}, {"name": "accuracy", "dtype": "int64"}, {"name": "completeness", "dtype": "float64"}, {"name": "fluency", "dtype": "int64"}, {"name": "prosodic", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "audio", "dtype": "audio"}, {"name": "path", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 288402967.0, "num_examples": 2500}, {"name": "train", "num_bytes": 290407029.0, "num_examples": 2500}], "download_size": 0, "dataset_size": 578809996.0}} | 2022-10-24T08:43:54+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "speechocean762"
The datasets introduced in
- Zhang, Junbo, et al. "speechocean762: An open-source non-native english speech corpus for pronunciation assessment." arXiv preprint arXiv:2104.01378 (2021).
- Currently, phonetic-level evaluation is omitted (only total sentence-level scores are used).
- The original full data link: URL
More Information needed | [
"# Dataset Card for \"speechocean762\"\n\nThe datasets introduced in\n- Zhang, Junbo, et al. \"speechocean762: An open-source non-native english speech corpus for pronunciation assessment.\" arXiv preprint arXiv:2104.01378 (2021).\n- Currently, phonetic-level evaluation is omitted (total sentence-level scores are just used.)\n- The original full data link: URL \n\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"speechocean762\"\n\nThe datasets introduced in\n- Zhang, Junbo, et al. \"speechocean762: An open-source non-native english speech corpus for pronunciation assessment.\" arXiv preprint arXiv:2104.01378 (2021).\n- Currently, phonetic-level evaluation is omitted (total sentence-level scores are just used.)\n- The original full data link: URL \n\n\nMore Information needed"
] |
c03ad050756db3748209f1a51ba4b8afc8dcefcb |
# Dataset Card for Parafraseja
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Point of Contact:** [[email protected]]([email protected])
### Dataset Summary
Parafraseja is a dataset of 21,984 pairs of sentences with a label that indicates if they are paraphrases or not. The original sentences were collected from [TE-ca](https://huggingface.co/datasets/projecte-aina/teca) and [STS-ca](https://huggingface.co/datasets/projecte-aina/sts-ca). For each sentence, an annotator wrote a sentence that was a paraphrase and another that was not. The guidelines of this annotation are available.
This work is licensed under a [Creative Commons Attribution Non-commercial No-Derivatives 4.0 International License](https://creativecommons.org/licenses/by-nc-nd/4.0/).
### Supported Tasks and Leaderboards
This dataset is mainly intended to train models for paraphrase detection.
### Languages
The dataset is in Catalan (`ca-ES`).
## Dataset Structure
The dataset consists of pairs of sentences labelled with "Parafrasis" or "No Parafrasis" in a jsonl format.
### Data Instances
<pre>
{
"id": "te1_14977_1",
"source": "teca",
"original": "La 2a part consta de 23 cap\u00edtols, cadascun dels quals descriu un ocell diferent.",
"new": "La segona part consisteix en vint-i-tres cap\u00edtols, cada un dels quals descriu un ocell diferent.",
"label": "Parafrasis"
}
</pre>
### Data Fields
- original: original sentence
- new: new sentence, which could be a paraphrase or a non-paraphrase
- label: relation between original and new
### Data Splits
* dev.json: 2,000 examples
* test.json: 4,000 examples
* train.json: 15,984 examples
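A minimal loading sketch; it assumes the repository id of this card and that the three JSON files above are exposed as train/validation/test splits:
```python
from datasets import load_dataset

paraf = load_dataset("projecte-aina/Parafraseja")

pair = paraf["train"][0]
print(pair["original"])
print(pair["new"])
print(pair["label"])  # "Parafrasis" or "No Parafrasis"
```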
## Dataset Creation
### Curation Rationale
We created this corpus to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
The original sentences of this dataset came from the [STS-ca](https://huggingface.co/datasets/projecte-aina/sts-ca) and the [TE-ca](https://huggingface.co/datasets/projecte-aina/teca).
#### Initial Data Collection and Normalization
11,543 of the original sentences came from TE-ca, and 10,441 came from STS-ca.
#### Who are the source language producers?
TE-ca and STS-ca come from the [Catalan Textual Corpus](https://zenodo.org/record/4519349#.Y1Zs__uxXJF), which consists of several corpora gathered from web crawling and public corpora, and [Vilaweb](https://www.vilaweb.cat), a Catalan newswire.
### Annotations
The dataset is annotated with the label "Parafrasis" or "No Parafrasis" for each pair of sentences.
#### Annotation process
The annotation process was done by a single annotator and reviewed by another.
#### Who are the annotators?
The annotators were Catalan native speakers, with a background on linguistics.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
We are aware that this data might contain biases. We have not applied any steps to reduce their impact.
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected])
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
This work is licensed under a [Creative Commons Attribution Non-commercial No-Derivatives 4.0 International License](https://creativecommons.org/licenses/by-nc-nd/4.0/).
### Contributions
[N/A]
| projecte-aina/Parafraseja | [
"task_categories:text-classification",
"task_ids:multi-input-text-classification",
"annotations_creators:CLiC-UB",
"language_creators:found",
"multilinguality:monolingual",
"language:ca",
"license:cc-by-nc-nd-4.0",
"region:us"
] | 2022-10-24T08:54:42+00:00 | {"annotations_creators": ["CLiC-UB"], "language_creators": ["found"], "language": ["ca"], "license": ["cc-by-nc-nd-4.0"], "multilinguality": ["monolingual"], "task_categories": ["text-classification"], "task_ids": ["multi-input-text-classification"], "pretty_name": "Parafraseja"} | 2023-11-25T06:09:20+00:00 | [] | [
"ca"
] | TAGS
#task_categories-text-classification #task_ids-multi-input-text-classification #annotations_creators-CLiC-UB #language_creators-found #multilinguality-monolingual #language-Catalan #license-cc-by-nc-nd-4.0 #region-us
|
# Dataset Card for Parafraseja
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Point of Contact: URL@URL
### Dataset Summary
Parafraseja is a dataset of 21,984 pairs of sentences with a label that indicates if they are paraphrases or not. The original sentences were collected from TE-ca and STS-ca. For each sentence, an annotator wrote a sentence that was a paraphrase and another that was not. The guidelines of this annotation are available.
This work is licensed under a Creative Commons Attribution Non-commercial No-Derivatives 4.0 International License.
### Supported Tasks and Leaderboards
This dataset is mainly intended to train models for paraphrase detection.
### Languages
The dataset is in Catalan ('ca-ES').
## Dataset Structure
The dataset consists of pairs of sentences labelled with "Parafrasis" or "No Parafrasis" in a jsonl format.
### Data Instances
<pre>
{
"id": "te1_14977_1",
"source": "teca",
"original": "La 2a part consta de 23 cap\u00edtols, cadascun dels quals descriu un ocell diferent.",
"new": "La segona part consisteix en vint-i-tres cap\u00edtols, cada un dels quals descriu un ocell diferent.",
"label": "Parafrasis"
}
</pre>
### Data Fields
- original: original sentence
- new: new sentence, which could be a paraphrase or a non-paraphrase
- label: relation between original and new
### Data Splits
* URL: 2,000 examples
* URL: 4,000 examples
* URL: 15,984 examples
## Dataset Creation
### Curation Rationale
We created this corpus to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
The original sentences of this dataset came from the STS-ca and the TE-ca.
#### Initial Data Collection and Normalization
11,543 of the original sentences came from TE-ca, and 10,441 came from STS-ca.
#### Who are the source language producers?
TE-ca and STS-ca come from the Catalan Textual Corpus, which consists of several corpora gathered from web crawling and public corpora, and Vilaweb, a Catalan newswire.
### Annotations
The dataset is annotated with the label "Parafrasis" or "No Parafrasis" for each pair of sentences.
#### Annotation process
The annotation process was done by a single annotator and reviewed by another.
#### Who are the annotators?
The annotators were Catalan native speakers, with a background on linguistics.
### Personal and Sensitive Information
No personal or sensitive information included.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
We are aware that this data might contain biases. We have not applied any steps to reduce their impact.
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)
This work was funded by the Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya within the framework of Projecte AINA.
### Licensing Information
This work is licensed under a Creative Commons Attribution Non-commercial No-Derivatives 4.0 International License.
### Contributions
[N/A]
| [
"# Dataset Card for Parafraseja",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Point of Contact: URL@URL",
"### Dataset Summary\n\nParafraseja is a dataset of 21,984 pairs of sentences with a label that indicates if they are paraphrases or not. The original sentences were collected from TE-ca and STS-ca. For each sentence, an annotator wrote a sentence that was a paraphrase and another that was not. The guidelines of this annotation are available. \n\nThis work is licensed under a Creative Commons Attribution Non-commercial No-Derivatives 4.0 International License.",
"### Supported Tasks and Leaderboards\n\nThis dataset is mainly intended to train models for paraphrase detection.",
"### Languages\n\n\nThe dataset is in Catalan ('ca-ES').",
"## Dataset Structure\n\nThe dataset consists of pairs of sentences labelled with \"Parafrasis\" or \"No Parafrasis\" in a jsonl format.",
"### Data Instances\n\n<pre>\n {\n \"id\": \"te1_14977_1\", \n \"source\": \"teca\", \n \"original\": \"La 2a part consta de 23 cap\\u00edtols, cadascun dels quals descriu un ocell diferent.\", \n \"new\": \"La segona part consisteix en vint-i-tres cap\\u00edtols, cada un dels quals descriu un ocell diferent.\", \n \"label\": \"Parafrasis\"\n }\n</pre>",
"### Data Fields\n- original: original sentence\n- new: new sentence, which could be a paraphrase or a non-paraphrase\n- label: relation between original and new",
"### Data Splits\n\n* URL: 2,000 examples\n* URL: 4,000 examples\n* URL: 15,984 examples",
"## Dataset Creation",
"### Curation Rationale\n\nWe created this corpus to contribute to the development of language models in Catalan, a low-resource language.",
"### Source Data\n\nThe original sentences of this dataset came from the STS-ca and the TE-ca.",
"#### Initial Data Collection and Normalization\n\n11,543 of the original sentences came from TE-ca, and 10,441 came from STS-ca.",
"#### Who are the source language producers?\n\nTE-ca and STS-ca come from the Catalan Textual Corpus, which consists of several corpora gathered from web crawling and public corpora, and Vilaweb, a Catalan newswire.",
"### Annotations\n\nThe dataset is annotated with the label \"Parafrasis\" or \"No Parafrasis\" for each pair of sentences.",
"#### Annotation process\n\nThe annotation process was done by a single annotator and reviewed by another.",
"#### Who are the annotators?\n\nThe annotators were Catalan native speakers, with a background on linguistics.",
"### Personal and Sensitive Information\n\nNo personal or sensitive information included.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nWe hope this corpus contributes to the development of language models in Catalan, a low-resource language.",
"### Discussion of Biases\n\nWe are aware that this data might contain biases. We have not applied any steps to reduce their impact.",
"### Other Known Limitations\n\n[N/A]",
"## Additional Information",
"### Dataset Curators\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)\n\nThis work was funded by the Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya within the framework of Projecte AINA.",
"### Licensing Information\n\nThis work is licensed under a Creative Commons Attribution Non-commercial No-Derivatives 4.0 International License.",
"### Contributions\n\n[N/A]"
] | [
"TAGS\n#task_categories-text-classification #task_ids-multi-input-text-classification #annotations_creators-CLiC-UB #language_creators-found #multilinguality-monolingual #language-Catalan #license-cc-by-nc-nd-4.0 #region-us \n",
"# Dataset Card for Parafraseja",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Point of Contact: URL@URL",
"### Dataset Summary\n\nParafraseja is a dataset of 21,984 pairs of sentences with a label that indicates if they are paraphrases or not. The original sentences were collected from TE-ca and STS-ca. For each sentence, an annotator wrote a sentence that was a paraphrase and another that was not. The guidelines of this annotation are available. \n\nThis work is licensed under a Creative Commons Attribution Non-commercial No-Derivatives 4.0 International License.",
"### Supported Tasks and Leaderboards\n\nThis dataset is mainly intended to train models for paraphrase detection.",
"### Languages\n\n\nThe dataset is in Catalan ('ca-ES').",
"## Dataset Structure\n\nThe dataset consists of pairs of sentences labelled with \"Parafrasis\" or \"No Parafrasis\" in a jsonl format.",
"### Data Instances\n\n<pre>\n {\n \"id\": \"te1_14977_1\", \n \"source\": \"teca\", \n \"original\": \"La 2a part consta de 23 cap\\u00edtols, cadascun dels quals descriu un ocell diferent.\", \n \"new\": \"La segona part consisteix en vint-i-tres cap\\u00edtols, cada un dels quals descriu un ocell diferent.\", \n \"label\": \"Parafrasis\"\n }\n</pre>",
"### Data Fields\n- original: original sentence\n- new: new sentence, which could be a paraphrase or a non-paraphrase\n- label: relation between original and new",
"### Data Splits\n\n* URL: 2,000 examples\n* URL: 4,000 examples\n* URL: 15,984 examples",
"## Dataset Creation",
"### Curation Rationale\n\nWe created this corpus to contribute to the development of language models in Catalan, a low-resource language.",
"### Source Data\n\nThe original sentences of this dataset came from the STS-ca and the TE-ca.",
"#### Initial Data Collection and Normalization\n\n11,543 of the original sentences came from TE-ca, and 10,441 came from STS-ca.",
"#### Who are the source language producers?\n\nTE-ca and STS-ca come from the Catalan Textual Corpus, which consists of several corpora gathered from web crawling and public corpora, and Vilaweb, a Catalan newswire.",
"### Annotations\n\nThe dataset is annotated with the label \"Parafrasis\" or \"No Parafrasis\" for each pair of sentences.",
"#### Annotation process\n\nThe annotation process was done by a single annotator and reviewed by another.",
"#### Who are the annotators?\n\nThe annotators were Catalan native speakers, with a background on linguistics.",
"### Personal and Sensitive Information\n\nNo personal or sensitive information included.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nWe hope this corpus contributes to the development of language models in Catalan, a low-resource language.",
"### Discussion of Biases\n\nWe are aware that this data might contain biases. We have not applied any steps to reduce their impact.",
"### Other Known Limitations\n\n[N/A]",
"## Additional Information",
"### Dataset Curators\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL)\n\nThis work was funded by the Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya within the framework of Projecte AINA.",
"### Licensing Information\n\nThis work is licensed under a Creative Commons Attribution Non-commercial No-Derivatives 4.0 International License.",
"### Contributions\n\n[N/A]"
] |
6af8474d307a30b92b0cc8d550dbf98f4f5d3c85 | # AutoTrain Dataset for project: dragino-7-7-max_495m
## Dataset Description
This dataset has been automatically processed by AutoTrain for project dragino-7-7-max_495m.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"feat_rssi": -91,
"feat_snr": 7.5,
"target": 125.0
},
{
"feat_rssi": -96,
"feat_snr": 5.0,
"target": 125.0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"feat_rssi": "Value(dtype='int64', id=None)",
"feat_snr": "Value(dtype='float64', id=None)",
"target": "Value(dtype='float32', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 853 |
| valid | 286 |
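A minimal loading sketch, assuming the split names from the table above ("train" and "valid") and the AutoTrain repository id of this card:
```python
from datasets import load_dataset

ds = load_dataset("pcoloc/autotrain-data-dragino-7-7-max_495m")

row = ds["train"][0]
x = [row["feat_rssi"], row["feat_snr"]]  # LoRa signal measurements
y = row["target"]                        # regression target
print(x, y)
```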
| pcoloc/autotrain-data-dragino-7-7-max_495m | [
"region:us"
] | 2022-10-24T09:08:48+00:00 | {} | 2022-10-24T09:10:04+00:00 | [] | [] | TAGS
#region-us
| AutoTrain Dataset for project: dragino-7-7-max\_495m
====================================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project dragino-7-7-max\_495m.
### Languages
The BCP-47 code for the dataset's language is unk.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
43a4ac0c18bdd53bd8acc72323296b48339dc121 |
All eight of the datasets in ESB can be downloaded and prepared in just a single line of code through the Hugging Face Datasets library:
```python
from datasets import load_dataset
librispeech = load_dataset("esb/datasets", "librispeech", split="train")
```
- `"esb/datasets"`: the repository namespace. This is fixed for all ESB datasets.
- `"librispeech"`: the dataset name. This can be changed to any one of the eight datasets in ESB to download that dataset.
- `split="train"`: the split. Set this to one of train/validation/test to generate a specific split. Omit the `split` argument to generate all splits for a dataset.
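The split behaviour described above can be seen directly (a short sketch; omitting `split` yields a `DatasetDict` keyed by split name):
```python
from datasets import load_dataset

# A single split -> a Dataset object.
train = load_dataset("esb/datasets", "librispeech", split="train")

# No split argument -> a DatasetDict containing all splits of the dataset.
librispeech = load_dataset("esb/datasets", "librispeech")
print(librispeech.keys())
```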
The datasets are fully prepared, such that the audio and transcription files can be used directly in training/evaluation scripts.
## Dataset Information
A data point can be accessed by indexing the dataset object loaded through `load_dataset`:
```python
print(librispeech[0])
```
A typical data point comprises the path to the audio file and its transcription. Also included is information of the dataset from which the sample derives and a unique identifier name:
```python
{
'dataset': 'librispeech',
'audio': {'path': '/home/sanchit-gandhi/.cache/huggingface/datasets/downloads/extracted/d2da1969fe9e7d06661b5dc370cf2e3c119a14c35950045bcb76243b264e4f01/374-180298-0000.flac',
'array': array([ 7.01904297e-04, 7.32421875e-04, 7.32421875e-04, ...,
-2.74658203e-04, -1.83105469e-04, -3.05175781e-05]),
'sampling_rate': 16000},
'text': 'chapter sixteen i might have told you of the beginning of this liaison in a few lines but i wanted you to see every step by which we came i to agree to whatever marguerite wished',
'id': '374-180298-0000'
}
```
### Data Fields
- `dataset`: name of the ESB dataset from which the sample is taken.
- `audio`: a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.
- `text`: the transcription of the audio file.
- `id`: unique id of the data sample.
### Data Preparation
#### Audio
The audio for all ESB datasets is segmented into sample lengths suitable for training ASR systems. The Hugging Face datasets library decodes audio files on the fly, reading the segments and converting them to Python arrays. Consequently, no further preparation of the audio is required to be used in training/evaluation scripts.
Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, i.e. `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]`.
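A sketch of the recommended access pattern, together with on-the-fly resampling via the standard `Audio` feature should a different rate be needed:
```python
from datasets import Audio, load_dataset

librispeech = load_dataset("esb/datasets", "librispeech", split="validation.clean")

# Preferred: index the sample first, then access "audio" (decodes one file).
audio = librispeech[0]["audio"]
print(audio["sampling_rate"])  # 16000

# Optional: resample lazily by casting the column to a different rate.
librispeech = librispeech.cast_column("audio", Audio(sampling_rate=8_000))
```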
#### Transcriptions
The transcriptions corresponding to each audio file are provided in their 'error corrected' format. No transcription pre-processing is applied to the text, only necessary 'error correction' steps such as removing junk tokens (_<unk>_) or converting symbolic punctuation to spelled out form (_<comma>_ to _,_). As such, no further preparation of the transcriptions is required to be used in training/evaluation scripts.
Transcriptions are provided for training and validation splits. The transcriptions are **not** provided for the test splits. ESB requires you to generate predictions for the test sets and upload them to https://huggingface.co/spaces/esb/leaderboard for scoring.
### Access
All eight of the datasets in ESB are accessible, and licensing information is freely available. Three of the ESB datasets have specific terms of usage that must be agreed to before using the data. To do so, fill in the access forms on the specific datasets' pages:
* Common Voice: https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0
* GigaSpeech: https://huggingface.co/datasets/speechcolab/gigaspeech
* SPGISpeech: https://huggingface.co/datasets/kensho/spgispeech
### Diagnostic Dataset
ESB contains a small, 8h diagnostic dataset of in-domain validation data with newly annotated transcriptions. The audio data is sampled from each of the ESB validation sets, giving a range of different domains and speaking styles. The transcriptions are annotated according to a consistent style guide with two formats: normalised and un-normalised. The dataset is structured in the same way as the ESB dataset, by grouping audio-transcription samples according to the dataset from which they were taken. We encourage participants to use this dataset when evaluating their systems to quickly assess performance on a range of different speech recognition conditions. For more information, visit: [esb/diagnostic-dataset](https://huggingface.co/datasets/esb/diagnostic-dataset).
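A hedged loading sketch for the diagnostic set (the config name below is an assumption; check the diagnostic dataset card for the exact configs and for the normalised vs. un-normalised transcription formats):
```python
from datasets import load_dataset

# Assumed config name -- the diagnostic set groups samples per source dataset.
diagnostic = load_dataset("esb/diagnostic-dataset", "librispeech")
```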
## Summary of ESB Datasets
| Dataset | Domain | Speaking Style | Train (h) | Dev (h) | Test (h) | Transcriptions | License |
|--------------|-----------------------------|-----------------------|-----------|---------|----------|--------------------|-----------------|
| LibriSpeech | Audiobook | Narrated | 960 | 11 | 11 | Normalised | CC-BY-4.0 |
| Common Voice | Wikipedia | Narrated | 1409 | 27 | 27 | Punctuated & Cased | CC0-1.0 |
| Voxpopuli | European Parliament | Oratory | 523 | 5 | 5 | Punctuated | CC0 |
| TED-LIUM | TED talks | Oratory | 454 | 2 | 3 | Normalised | CC-BY-NC-ND 3.0 |
| GigaSpeech | Audiobook, podcast, YouTube | Narrated, spontaneous | 2500 | 12 | 40 | Punctuated | apache-2.0 |
| SPGISpeech | Financial meetings | Oratory, spontaneous | 4900 | 100 | 100 | Punctuated & Cased | User Agreement |
| Earnings-22 | Financial meetings | Oratory, spontaneous | 105 | 5 | 5 | Punctuated & Cased | CC-BY-SA-4.0 |
| AMI | Meetings | Spontaneous | 78 | 9 | 9 | Punctuated & Cased | CC-BY-4.0 |
## LibriSpeech
The LibriSpeech corpus is a standard large-scale corpus for assessing ASR systems. It consists of approximately 1,000 hours of narrated audiobooks from the [LibriVox](https://librivox.org) project. It is licensed under CC-BY-4.0.
Example Usage:
```python
librispeech = load_dataset("esb/datasets", "librispeech")
```
Train/validation splits:
- `train` (combination of `train.clean.100`, `train.clean.360` and `train.other.500`)
- `validation.clean`
- `validation.other`
Test splits:
- `test.clean`
- `test.other`
Also available are subsets of the train split, which can be accessed by setting the `subconfig` argument:
```python
librispeech = load_dataset("esb/datasets", "librispeech", subconfig="clean.100")
```
- `clean.100`: 100 hours of training data from the 'clean' subset
- `clean.360`: 360 hours of training data from the 'clean' subset
- `other.500`: 500 hours of training data from the 'other' subset
## Common Voice
Common Voice is a series of crowd-sourced open-licensed speech datasets where speakers record text from Wikipedia in various languages. The speakers are of various nationalities and native languages, with different accents and recording conditions. We use the English subset of version 9.0 (27-4-2022), with approximately 1,400 hours of audio-transcription data. It is licensed under CC0-1.0.
Example usage:
```python
common_voice = load_dataset("esb/datasets", "common_voice", use_auth_token=True)
```
Training/validation splits:
- `train`
- `validation`
Test splits:
- `test`
## VoxPopuli
VoxPopuli is a large-scale multilingual speech corpus consisting of political data sourced from 2009-2020 European Parliament event recordings. The English subset contains approximately 550 hours of speech largely from non-native English speakers. It is licensed under CC0.
Example usage:
```python
voxpopuli = load_dataset("esb/datasets", "voxpopuli")
```
Training/validation splits:
- `train`
- `validation`
Test splits:
- `test`
## TED-LIUM
TED-LIUM consists of English-language TED Talk conference videos covering a range of different cultural, political, and academic topics. It contains approximately 450 hours of transcribed speech data. It is licensed under CC-BY-NC-ND 3.0.
Example usage:
```python
tedlium = load_dataset("esb/datasets", "tedlium")
```
Training/validation splits:
- `train`
- `validation`
Test splits:
- `test`
## GigaSpeech
GigaSpeech is a multi-domain English speech recognition corpus created from audiobooks, podcasts and YouTube. We provide the large train set (2,500 hours) and the standard validation and test splits. It is licensed under apache-2.0.
Example usage:
```python
gigaspeech = load_dataset("esb/datasets", "gigaspeech", use_auth_token=True)
```
Training/validation splits:
- `train` (`l` subset of training data (2,500 h))
- `validation`
Test splits:
- `test`
Also available are subsets of the train split, which can be accessed by setting the `subconfig` argument:
```python
gigaspeech = load_dataset("esb/datasets", "gigaspeech", subconfig="xs", use_auth_token=True)
```
- `xs`: extra-small subset of training data (10 h)
- `s`: small subset of training data (250 h)
- `m`: medium subset of training data (1,000 h)
- `xl`: extra-large subset of training data (10,000 h)
## SPGISpeech
SPGISpeech consists of company earnings calls that have been manually transcribed by S&P Global, Inc. according to a professional style guide. We provide the large train set (5,000 hours) and the standard validation and test splits. It is licensed under a Kensho user agreement.
Loading the dataset requires authorization.
Example usage:
```python
spgispeech = load_dataset("esb/datasets", "spgispeech", use_auth_token=True)
```
Training/validation splits:
- `train` (`l` subset of training data (~5,000 h))
- `validation`
Test splits:
- `test`
Also available are subsets of the train split, which can be accessed by setting the `subconfig` argument:
```python
spgispeech = load_dataset("esb/datasets", "spgispeech", subconfig="s", use_auth_token=True)
```
- `s`: small subset of training data (~200 h)
- `m`: medium subset of training data (~1,000 h)
## Earnings-22
Earnings-22 is a 119-hour corpus of English-language earnings calls collected from global companies, with speakers of many different nationalities and accents. It is licensed under CC-BY-SA-4.0.
Example usage:
```python
earnings22 = load_dataset("esb/datasets", "earnings22")
```
Training/validation splits:
- `train`
- `validation`
Test splits:
- `test`
## AMI
The AMI Meeting Corpus consists of 100 hours of meeting recordings from multiple recording devices synced to a common timeline. It is licensed under CC-BY-4.0.
Example usage:
```python
ami = load_dataset("esb/datasets", "ami")
```
Training/validation splits:
- `train`
- `validation`
Test splits:
- `test` | esb/datasets | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:1M<n<10M",
"source_datasets:original",
"source_datasets:extended|librispeech_asr",
"source_datasets:extended|common_voice",
"language:en",
"license:cc-by-4.0",
"license:apache-2.0",
"license:cc0-1.0",
"license:cc-by-nc-3.0",
"license:other",
"asr",
"benchmark",
"speech",
"esb",
"region:us"
] | 2022-10-24T09:53:50+00:00 | {"annotations_creators": ["expert-generated", "crowdsourced", "machine-generated"], "language_creators": ["crowdsourced", "expert-generated"], "language": ["en"], "license": ["cc-by-4.0", "apache-2.0", "cc0-1.0", "cc-by-nc-3.0", "other"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M", "1M<n<10M"], "source_datasets": ["original", "extended|librispeech_asr", "extended|common_voice"], "task_categories": ["automatic-speech-recognition"], "pretty_name": "datasets", "tags": ["asr", "benchmark", "speech", "esb"], "extra_gated_prompt": "Three of the ESB datasets have specific terms of usage that must be agreed to before using the data. \nTo do so, fill in the access forms on the specific datasets' pages:\n * Common Voice: https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0\n * GigaSpeech: https://huggingface.co/datasets/speechcolab/gigaspeech\n * SPGISpeech: https://huggingface.co/datasets/kensho/spgispeech", "extra_gated_fields": {"I hereby confirm that I have registered on the original Common Voice page and agree to not attempt to determine the identity of speakers in the Common Voice dataset": "checkbox", "I hereby confirm that I have accepted the terms of usages on GigaSpeech page": "checkbox", "I hereby confirm that I have accepted the terms of usages on SPGISpeech page": "checkbox"}} | 2023-01-16T17:51:39+00:00 | [] | [
"en"
] | TAGS
#task_categories-automatic-speech-recognition #annotations_creators-expert-generated #annotations_creators-crowdsourced #annotations_creators-machine-generated #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #size_categories-1M<n<10M #source_datasets-original #source_datasets-extended|librispeech_asr #source_datasets-extended|common_voice #language-English #license-cc-by-4.0 #license-apache-2.0 #license-cc0-1.0 #license-cc-by-nc-3.0 #license-other #asr #benchmark #speech #esb #region-us
| All eight of the datasets in ESB can be downloaded and prepared in just a single line of code through the Hugging Face Datasets library:
* '"esb/datasets"': the repository namespace. This is fixed for all ESB datasets.
* '"librispeech"': the dataset name. This can be changed to any one of the eight datasets in ESB to download that dataset.
* 'split="train"': the split. Set this to one of train/validation/test to generate a specific split. Omit the 'split' argument to generate all splits for a dataset.
The datasets are fully prepared, such that the audio and transcription files can be used directly in training/evaluation scripts.
Dataset Information
-------------------
A data point can be accessed by indexing the dataset object loaded through 'load\_dataset':
A typical data point comprises the path to the audio file and its transcription. Also included is information of the dataset from which the sample derives and a unique identifier name:
### Data Fields
* 'dataset': name of the ESB dataset from which the sample is taken.
* 'audio': a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.
* 'text': the transcription of the audio file.
* 'id': unique id of the data sample.
### Data Preparation
#### Audio
The audio for all ESB datasets is segmented into sample lengths suitable for training ASR systems. The Hugging Face datasets library decodes audio files on the fly, reading the segments and converting them to Python arrays. Consequently, no further preparation of the audio is required to be used in training/evaluation scripts.
Note that when accessing the audio column: 'dataset[0]["audio"]' the audio file is automatically decoded and resampled to 'dataset.features["audio"].sampling\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '"audio"' column, i.e. 'dataset[0]["audio"]' should always be preferred over 'dataset["audio"][0]'.
#### Transcriptions
The transcriptions corresponding to each audio file are provided in their 'error corrected' format. No transcription pre-processing is applied to the text, only necessary 'error correction' steps such as removing junk tokens (*<unk>*) or converting symbolic punctuation to spelled out form (*<comma>* to *,*). As such, no further preparation of the transcriptions is required to be used in training/evaluation scripts.
Transcriptions are provided for training and validation splits. The transcriptions are not provided for the test splits. ESB requires you to generate predictions for the test sets and upload them to URL for scoring.
### Access
All eight of the datasets in ESB are accessible, and licensing information is freely available. Three of the ESB datasets have specific terms of usage that must be agreed to before using the data. To do so, fill in the access forms on the specific datasets' pages:
* Common Voice: URL
* GigaSpeech: URL
* SPGISpeech: URL
### Diagnostic Dataset
ESB contains a small, 8h diagnostic dataset of in-domain validation data with newly annotated transcriptions. The audio data is sampled from each of the ESB validation sets, giving a range of different domains and speaking styles. The transcriptions are annotated according to a consistent style guide with two formats: normalised and un-normalised. The dataset is structured in the same way as the ESB dataset, by grouping audio-transcription samples according to the dataset from which they were taken. We encourage participants to use this dataset when evaluating their systems to quickly assess performance on a range of different speech recognition conditions. For more information, visit: esb/diagnostic-dataset.
Summary of ESB Datasets
-----------------------
LibriSpeech
-----------
The LibriSpeech corpus is a standard large-scale corpus for assessing ASR systems. It consists of approximately 1,000 hours of narrated audiobooks from the LibriVox project. It is licensed under CC-BY-4.0.
Example Usage:
Train/validation splits:
* 'train' (combination of 'URL.100', 'URL.360' and 'URL.500')
* 'URL'
* 'URL'
Test splits:
* 'URL'
* 'URL'
Also available are subsets of the train split, which can be accessed by setting the 'subconfig' argument:
* 'clean.100': 100 hours of training data from the 'clean' subset
* 'clean.360': 360 hours of training data from the 'clean' subset
* 'other.500': 500 hours of training data from the 'other' subset
Common Voice
------------
Common Voice is a series of crowd-sourced open-licensed speech datasets where speakers record text from Wikipedia in various languages. The speakers are of various nationalities and native languages, with different accents and recording conditions. We use the English subset of version 9.0 (27-4-2022), with approximately 1,400 hours of audio-transcription data. It is licensed under CC0-1.0.
Example usage:
Training/validation splits:
* 'train'
* 'validation'
Test splits:
* 'test'
VoxPopuli
---------
VoxPopuli is a large-scale multilingual speech corpus consisting of political data sourced from 2009-2020 European Parliament event recordings. The English subset contains approximately 550 hours of speech largely from non-native English speakers. It is licensed under CC0.
Example usage:
Training/validation splits:
* 'train'
* 'validation'
Test splits:
* 'test'
TED-LIUM
--------
TED-LIUM consists of English-language TED Talk conference videos covering a range of different cultural, political, and academic topics. It contains approximately 450 hours of transcribed speech data. It is licensed under CC-BY-NC-ND 3.0.
Example usage:
Training/validation splits:
* 'train'
* 'validation'
Test splits:
* 'test'
GigaSpeech
----------
GigaSpeech is a multi-domain English speech recognition corpus created from audiobooks, podcasts and YouTube. We provide the large train set (2,500 hours) and the standard validation and test splits. It is licensed under apache-2.0.
Example usage:
Training/validation splits:
* 'train' ('l' subset of training data (2,500 h))
* 'validation'
Test splits:
* 'test'
Also available are subsets of the train split, which can be accessed by setting the 'subconfig' argument:
* 'xs': extra-small subset of training data (10 h)
* 's': small subset of training data (250 h)
* 'm': medium subset of training data (1,000 h)
* 'xl': extra-large subset of training data (10,000 h)
SPGISpeech
----------
SPGISpeech consists of company earnings calls that have been manually transcribed by S&P Global, Inc. according to a professional style guide. We provide the large train set (5,000 hours) and the standard validation and test splits. It is licensed under a Kensho user agreement.
Loading the dataset requires authorization.
Example usage:
Training/validation splits:
* 'train' ('l' subset of training data (~5,000 h))
* 'validation'
Test splits:
* 'test'
Also available are subsets of the train split, which can be accessed by setting the 'subconfig' argument:
* 's': small subset of training data (~200 h)
* 'm': medium subset of training data (~1,000 h)
Earnings-22
-----------
Earnings-22 is a 119-hour corpus of English-language earnings calls collected from global companies, with speakers of many different nationalities and accents. It is licensed under CC-BY-SA-4.0.
Example usage:
Training/validation splits:
* 'train'
* 'validation'
Test splits:
* 'test'
AMI
---
The AMI Meeting Corpus consists of 100 hours of meeting recordings from multiple recording devices synced to a common timeline. It is licensed under CC-BY-4.0.
Example usage:
Training/validation splits:
* 'train'
* 'validation'
Test splits:
* 'test'
| [
"### Data Fields\n\n\n* 'dataset': name of the ESB dataset from which the sample is taken.\n* 'audio': a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.\n* 'text': the transcription of the audio file.\n* 'id': unique id of the data sample.",
"### Data Preparation",
"#### Audio\n\n\nThe audio for all ESB datasets is segmented into sample lengths suitable for training ASR systems. The Hugging Face datasets library decodes audio files on the fly, reading the segments and converting them to a Python arrays. Consequently, no further preparation of the audio is required to be used in training/evaluation scripts.\n\n\nNote that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling\\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, i.e. 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.",
"#### Transcriptions\n\n\nThe transcriptions corresponding to each audio file are provided in their 'error corrected' format. No transcription pre-processing is applied to the text, only necessary 'error correction' steps such as removing junk tokens (*<unk>*) or converting symbolic punctuation to spelled out form (*<comma>* to *,*). As such, no further preparation of the transcriptions is required to be used in training/evaluation scripts.\n\n\nTranscriptions are provided for training and validation splits. The transcriptions are not provided for the test splits. ESB requires you to generate predictions for the test sets and upload them to URL for scoring.",
"### Access\n\n\nAll eight of the datasets in ESB are accessible and licensing is freely available. Three of the ESB datasets have specific terms of usage that must be agreed to before using the data. To do so, fill in the access forms on the specific datasets' pages:\n\n\n* Common Voice: URL\n* GigaSpeech: URL\n* SPGISpeech: URL",
"### Diagnostic Dataset\n\n\nESB contains a small, 8h diagnostic dataset of in-domain validation data with newly annotated transcriptions. The audio data is sampled from each of the ESB validation sets, giving a range of different domains and speaking styles. The transcriptions are annotated according to a consistent style guide with two formats: normalised and un-normalised. The dataset is structured in the same way as the ESB dataset, by grouping audio-transcription samples according to the dataset from which they were taken. We encourage participants to use this dataset when evaluating their systems to quickly assess performance on a range of different speech recognition conditions. For more information, visit: esb/diagnostic-dataset.\n\n\nSummary of ESB Datasets\n-----------------------\n\n\n\nLibriSpeech\n-----------\n\n\nThe LibriSpeech corpus is a standard large-scale corpus for assessing ASR systems. It consists of approximately 1,000 hours of narrated audiobooks from the LibriVox project. It is licensed under CC-BY-4.0.\n\n\nExample Usage:\n\n\nTrain/validation splits:\n\n\n* 'train' (combination of 'URL.100', 'URL.360' and 'URL.500')\n* 'URL'\n* 'URL'\n\n\nTest splits:\n\n\n* 'URL'\n* 'URL'\n\n\nAlso available are subsets of the train split, which can be accessed by setting the 'subconfig' argument:\n\n\n* 'clean.100': 100 hours of training data from the 'clean' subset\n* 'clean.360': 360 hours of training data from the 'clean' subset\n* 'other.500': 500 hours of training data from the 'other' subset\n\n\nCommon Voice\n------------\n\n\nCommon Voice is a series of crowd-sourced open-licensed speech datasets where speakers record text from Wikipedia in various languages. The speakers are of various nationalities and native languages, with different accents and recording conditions. We use the English subset of version 9.0 (27-4-2022), with approximately 1,400 hours of audio-transcription data. It is licensed under CC0-1.0.\n\n\nExample usage:\n\n\nTraining/validation splits:\n\n\n* 'train'\n* 'validation'\n\n\nTest splits:\n\n\n* 'test'\n\n\nVoxPopuli\n---------\n\n\nVoxPopuli is a large-scale multilingual speech corpus consisting of political data sourced from 2009-2020 European Parliament event recordings. The English subset contains approximately 550 hours of speech largely from non-native English speakers. It is licensed under CC0.\n\n\nExample usage:\n\n\nTraining/validation splits:\n\n\n* 'train'\n* 'validation'\n\n\nTest splits:\n\n\n* 'test'\n\n\nTED-LIUM\n--------\n\n\nTED-LIUM consists of English-language TED Talk conference videos covering a range of different cultural, political, and academic topics. It contains approximately 450 hours of transcribed speech data. It is licensed under CC-BY-NC-ND 3.0.\n\n\nExample usage:\n\n\nTraining/validation splits:\n\n\n* 'train'\n* 'validation'\n\n\nTest splits:\n\n\n* 'test'\n\n\nGigaSpeech\n----------\n\n\nGigaSpeech is a multi-domain English speech recognition corpus created from audiobooks, podcasts and YouTube. We provide the large train set (2,500 hours) and the standard validation and test splits. 
It is licensed under apache-2.0.\n\n\nExample usage:\n\n\nTraining/validation splits:\n\n\n* 'train' ('l' subset of training data (2,500 h))\n* 'validation'\n\n\nTest splits:\n\n\n* 'test'\n\n\nAlso available are subsets of the train split, which can be accessed by setting the 'subconfig' argument:\n\n\n* 'xs': extra-small subset of training data (10 h)\n* 's': small subset of training data (250 h)\n* 'm': medium subset of training data (1,000 h)\n* 'xl': extra-large subset of training data (10,000 h)\n\n\nSPGISpeech\n----------\n\n\nSPGISpeech consists of company earnings calls that have been manually transcribed by S&P Global, Inc according to a professional style guide. We provide the large train set (5,000 hours) and the standard validation and test splits. It is licensed under a Kensho user agreement.\n\n\nLoading the dataset requires authorization.\n\n\nExample usage:\n\n\nTraining/validation splits:\n\n\n* 'train' ('l' subset of training data (~5,000 h))\n* 'validation'\n\n\nTest splits:\n\n\n* 'test'\n\n\nAlso available are subsets of the train split, which can be accessed by setting the 'subconfig' argument:\n\n\n* 's': small subset of training data (~200 h)\n* 'm': medium subset of training data (~1,000 h)\n\n\nEarnings-22\n-----------\n\n\nEarnings-22 is a 119-hour corpus of English-language earnings calls collected from global companies, with speakers of many different nationalities and accents. It is licensed under CC-BY-SA-4.0.\n\n\nExample usage:\n\n\nTraining/validation splits:\n\n\n* 'train'\n* 'validation'\n\n\nTest splits:\n\n\n* 'test'\n\n\nAMI\n---\n\n\nThe AMI Meeting Corpus consists of 100 hours of meeting recordings from multiple recording devices synced to a common timeline. It is licensed under CC-BY-4.0.\n\n\nExample usage:\n\n\nTraining/validation splits:\n\n\n* 'train'\n* 'validation'\n\n\nTest splits:\n\n\n* 'test'"
] | [
"TAGS\n#task_categories-automatic-speech-recognition #annotations_creators-expert-generated #annotations_creators-crowdsourced #annotations_creators-machine-generated #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #size_categories-1M<n<10M #source_datasets-original #source_datasets-extended|librispeech_asr #source_datasets-extended|common_voice #language-English #license-cc-by-4.0 #license-apache-2.0 #license-cc0-1.0 #license-cc-by-nc-3.0 #license-other #asr #benchmark #speech #esb #region-us \n",
"### Data Fields\n\n\n* 'dataset': name of the ESB dataset from which the sample is taken.\n* 'audio': a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.\n* 'text': the transcription of the audio file.\n* 'id': unique id of the data sample.",
"### Data Preparation",
"#### Audio\n\n\nThe audio for all ESB datasets is segmented into sample lengths suitable for training ASR systems. The Hugging Face datasets library decodes audio files on the fly, reading the segments and converting them to a Python arrays. Consequently, no further preparation of the audio is required to be used in training/evaluation scripts.\n\n\nNote that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling\\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, i.e. 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.",
"#### Transcriptions\n\n\nThe transcriptions corresponding to each audio file are provided in their 'error corrected' format. No transcription pre-processing is applied to the text, only necessary 'error correction' steps such as removing junk tokens (*<unk>*) or converting symbolic punctuation to spelled out form (*<comma>* to *,*). As such, no further preparation of the transcriptions is required to be used in training/evaluation scripts.\n\n\nTranscriptions are provided for training and validation splits. The transcriptions are not provided for the test splits. ESB requires you to generate predictions for the test sets and upload them to URL for scoring.",
"### Access\n\n\nAll eight of the datasets in ESB are accessible and licensing is freely available. Three of the ESB datasets have specific terms of usage that must be agreed to before using the data. To do so, fill in the access forms on the specific datasets' pages:\n\n\n* Common Voice: URL\n* GigaSpeech: URL\n* SPGISpeech: URL",
"### Diagnostic Dataset\n\n\nESB contains a small, 8h diagnostic dataset of in-domain validation data with newly annotated transcriptions. The audio data is sampled from each of the ESB validation sets, giving a range of different domains and speaking styles. The transcriptions are annotated according to a consistent style guide with two formats: normalised and un-normalised. The dataset is structured in the same way as the ESB dataset, by grouping audio-transcription samples according to the dataset from which they were taken. We encourage participants to use this dataset when evaluating their systems to quickly assess performance on a range of different speech recognition conditions. For more information, visit: esb/diagnostic-dataset.\n\n\nSummary of ESB Datasets\n-----------------------\n\n\n\nLibriSpeech\n-----------\n\n\nThe LibriSpeech corpus is a standard large-scale corpus for assessing ASR systems. It consists of approximately 1,000 hours of narrated audiobooks from the LibriVox project. It is licensed under CC-BY-4.0.\n\n\nExample Usage:\n\n\nTrain/validation splits:\n\n\n* 'train' (combination of 'URL.100', 'URL.360' and 'URL.500')\n* 'URL'\n* 'URL'\n\n\nTest splits:\n\n\n* 'URL'\n* 'URL'\n\n\nAlso available are subsets of the train split, which can be accessed by setting the 'subconfig' argument:\n\n\n* 'clean.100': 100 hours of training data from the 'clean' subset\n* 'clean.360': 360 hours of training data from the 'clean' subset\n* 'other.500': 500 hours of training data from the 'other' subset\n\n\nCommon Voice\n------------\n\n\nCommon Voice is a series of crowd-sourced open-licensed speech datasets where speakers record text from Wikipedia in various languages. The speakers are of various nationalities and native languages, with different accents and recording conditions. We use the English subset of version 9.0 (27-4-2022), with approximately 1,400 hours of audio-transcription data. It is licensed under CC0-1.0.\n\n\nExample usage:\n\n\nTraining/validation splits:\n\n\n* 'train'\n* 'validation'\n\n\nTest splits:\n\n\n* 'test'\n\n\nVoxPopuli\n---------\n\n\nVoxPopuli is a large-scale multilingual speech corpus consisting of political data sourced from 2009-2020 European Parliament event recordings. The English subset contains approximately 550 hours of speech largely from non-native English speakers. It is licensed under CC0.\n\n\nExample usage:\n\n\nTraining/validation splits:\n\n\n* 'train'\n* 'validation'\n\n\nTest splits:\n\n\n* 'test'\n\n\nTED-LIUM\n--------\n\n\nTED-LIUM consists of English-language TED Talk conference videos covering a range of different cultural, political, and academic topics. It contains approximately 450 hours of transcribed speech data. It is licensed under CC-BY-NC-ND 3.0.\n\n\nExample usage:\n\n\nTraining/validation splits:\n\n\n* 'train'\n* 'validation'\n\n\nTest splits:\n\n\n* 'test'\n\n\nGigaSpeech\n----------\n\n\nGigaSpeech is a multi-domain English speech recognition corpus created from audiobooks, podcasts and YouTube. We provide the large train set (2,500 hours) and the standard validation and test splits. 
It is licensed under apache-2.0.\n\n\nExample usage:\n\n\nTraining/validation splits:\n\n\n* 'train' ('l' subset of training data (2,500 h))\n* 'validation'\n\n\nTest splits:\n\n\n* 'test'\n\n\nAlso available are subsets of the train split, which can be accessed by setting the 'subconfig' argument:\n\n\n* 'xs': extra-small subset of training data (10 h)\n* 's': small subset of training data (250 h)\n* 'm': medium subset of training data (1,000 h)\n* 'xl': extra-large subset of training data (10,000 h)\n\n\nSPGISpeech\n----------\n\n\nSPGISpeech consists of company earnings calls that have been manually transcribed by S&P Global, Inc according to a professional style guide. We provide the large train set (5,000 hours) and the standard validation and test splits. It is licensed under a Kensho user agreement.\n\n\nLoading the dataset requires authorization.\n\n\nExample usage:\n\n\nTraining/validation splits:\n\n\n* 'train' ('l' subset of training data (~5,000 h))\n* 'validation'\n\n\nTest splits:\n\n\n* 'test'\n\n\nAlso available are subsets of the train split, which can be accessed by setting the 'subconfig' argument:\n\n\n* 's': small subset of training data (~200 h)\n* 'm': medium subset of training data (~1,000 h)\n\n\nEarnings-22\n-----------\n\n\nEarnings-22 is a 119-hour corpus of English-language earnings calls collected from global companies, with speakers of many different nationalities and accents. It is licensed under CC-BY-SA-4.0.\n\n\nExample usage:\n\n\nTraining/validation splits:\n\n\n* 'train'\n* 'validation'\n\n\nTest splits:\n\n\n* 'test'\n\n\nAMI\n---\n\n\nThe AMI Meeting Corpus consists of 100 hours of meeting recordings from multiple recording devices synced to a common timeline. It is licensed under CC-BY-4.0.\n\n\nExample usage:\n\n\nTraining/validation splits:\n\n\n* 'train'\n* 'validation'\n\n\nTest splits:\n\n\n* 'test'"
] |
38506bb37ab1b2a64cccec06ca1318b76ed8a2b2 |
# Dataset Card for GuiaCat
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Point of Contact:** [[email protected]](mailto:[email protected])
### Dataset Summary
GuiaCat is a dataset consisting of 5,750 restaurant reviews in Catalan, with 5 associated scores and a label of sentiment. The data was provided by [GuiaCat](https://guiacat.cat) and curated by the BSC.
This work is licensed under a [Creative Commons Attribution Non-commercial No-Derivatives 4.0 International License](https://creativecommons.org/licenses/by-nc-nd/4.0/).
### Supported Tasks and Leaderboards
This corpus is mainly intended for sentiment analysis.
### Languages
The dataset is in Catalan (`ca-ES`).
## Dataset Structure
The dataset consists of restaurant reviews labelled with 5 scores: service, food, price-quality, environment, and average. Reviews also have a sentiment label, derived from the average score, all stored as a csv file.
### Data Instances
```
7,7,7,7,7.0,"Aquest restaurant té una llarga història. Ara han tornat a canviar d'amos i aquest canvi s'ha vist molt repercutit en la carta, preus, servei, etc. Hi ha molta varietat de menjar, i tot boníssim, amb especialitats molt ben trobades. El servei molt càlid i agradable, dóna gust que et serveixin així. I la decoració molt agradable també, bastant curiosa. En fi, pel meu gust, un bon restaurant i bé de preu.",bo
8,9,8,7,8.0,"Molt recomanable en tots els sentits. El servei és molt atent, pulcre i gens agobiant; alhora els plats també presenten un aspecte acurat, cosa que fa, juntament amb l'ambient, que t'oblidis de que, malauradament, està situat pròxim a l'autopista.Com deia, l'ambient és molt acollidor, té un menjador principal molt elegant, perfecte per quedar bé amb tothom!Tot i això, destacar la bona calitat / preu, ja que aquest restaurant té una carta molt extensa en totes les branques i completa, tant de menjar com de vins. Pel qui entengui de vins, podriem dir que tot i tenir una carta molt rica, es recolza una mica en els clàssics.",molt bo
```
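These instances are raw CSV rows; the card ships no loading script. Below is a minimal sketch with the 🤗 `datasets` CSV builder, assuming the three CSVs have been downloaded from the repo and carry no header row (the column order is an assumption based on the instances above and the Data Fields section below):

```python
from datasets import load_dataset

# Sketch only: file names come from the Data Splits section; header-less
# rows and the column order are assumptions, not a documented schema.
columns = ["service", "food", "price-quality", "environment", "avg", "text", "label"]
dataset = load_dataset(
    "csv",
    data_files={"train": "train.csv", "validation": "dev.csv", "test": "test.csv"},
    column_names=columns,
)

example = dataset["train"][0]
print(example["text"], "->", example["label"])
```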
### Data Fields
- service: a score from 0 to 10 grading the service
- food: a score from 0 to 10 grading the food
- price-quality: a score from 0 to 10 grading the relation between price and quality
- environment: a score from 0 to 10 grading the environment
- avg: average of all the scores
- text: the review
- label: it can be "molt bo", "bo", "regular", "dolent", "molt dolent"
### Data Splits
* dev.csv: 500 examples
* test.csv: 500 examples
* train.csv: 4,750 examples
## Dataset Creation
### Curation Rationale
We created this corpus to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
The data of this dataset has been provided by [GuiaCat](https://guiacat.cat).
#### Initial Data Collection and Normalization
[N/A]
#### Who are the source language producers?
The language producers were the users from GuiaCat.
### Annotations
The annotations are automatically derived from the scores that the users provided while reviewing the restaurants.
#### Annotation process
The mapping between average scores and labels is:
- Higher than 8: molt bo
- Between 8 and 6: bo
- Between 6 and 4: regular
- Between 4 and 2: dolent
- Less than 2: molt dolent
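In code, the mapping could look like the sketch below. Boundary handling is an assumption: the second instance above pairs an average of 8.0 with "molt bo", which suggests each band includes its lower bound.

```python
def label_from_avg(avg: float) -> str:
    # Map an average score (0-10) to a sentiment label, assuming each
    # band includes its lower bound and excludes its upper bound.
    if avg >= 8:
        return "molt bo"
    if avg >= 6:
        return "bo"
    if avg >= 4:
        return "regular"
    if avg >= 2:
        return "dolent"
    return "molt dolent"
```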
#### Who are the annotators?
Users
### Personal and Sensitive Information
No personal information is included, although it could contain hate or abusive language.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
We are aware that this data might contain biases. We have not applied any steps to reduce their impact.
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected]).
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).
### Licensing Information
This work is licensed under a [Creative Commons Attribution Non-commercial No-Derivatives 4.0 International License](https://creativecommons.org/licenses/by-nc-nd/4.0/).
### Citation Information
```
```
### Contributions
We want to thank GuiaCat for providing this data.
| projecte-aina/GuiaCat | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"task_ids:sentiment-scoring",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"language:ca",
"license:cc-by-nc-nd-4.0",
"region:us"
] | 2022-10-24T10:11:31+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["ca"], "license": ["cc-by-nc-nd-4.0"], "multilinguality": ["monolingual"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification", "sentiment-scoring"], "pretty_name": "GuiaCat"} | 2023-11-25T06:27:37+00:00 | [] | [
"ca"
] | TAGS
#task_categories-text-classification #task_ids-sentiment-classification #task_ids-sentiment-scoring #annotations_creators-found #language_creators-found #multilinguality-monolingual #language-Catalan #license-cc-by-nc-nd-4.0 #region-us
|
# Dataset Card for GuiaCat
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Point of Contact: URL@URL
### Dataset Summary
GuiaCat is a dataset consisting of 5,750 restaurant reviews in Catalan, with 5 associated scores and a label of sentiment. The data was provided by GuiaCat and curated by the BSC.
This work is licensed under a Creative Commons Attribution Non-commercial No-Derivatives 4.0 International License.
### Supported Tasks and Leaderboards
This corpus is mainly intended for sentiment analysis.
### Languages
The dataset is in Catalan ('ca-ES').
## Dataset Structure
The dataset consists of restaurant reviews labelled with 5 scores: service, food, price-quality, environment, and average. Reviews also have a sentiment label, derived from the average score, all stored as a csv file.
### Data Instances
### Data Fields
- service: a score from 0 to 10 grading the service
- food: a score from 0 to 10 grading the food
- price-quality: a score from 0 to 10 grading the relation between price and quality
- environment: a score from 0 to 10 grading the environment
- avg: average of all the scores
- text: the review
- label: it can be "molt bo", "bo", "regular", "dolent", "molt dolent"
### Data Splits
* URL: 500 examples
* URL: 500 examples
* URL: 4,750 examples
## Dataset Creation
### Curation Rationale
We created this corpus to contribute to the development of language models in Catalan, a low-resource language.
### Source Data
The data of this dataset has been provided by GuiaCat.
#### Initial Data Collection and Normalization
[N/A]
#### Who are the source language producers?
The language producers were the users from GuiaCat.
### Annotations
The annotations are automatically derived from the scores that the users provided while reviewing the restaurants.
#### Annotation process
The mapping between average scores and labels is:
- Higher than 8: molt bo
- Between 8 and 6: bo
- Between 6 and 4: regular
- Between 4 and 2: dolent
- Less than 2: molt dolent
#### Who are the annotators?
Users
### Personal and Sensitive Information
No personal information is included, although it could contain hate or abusive language.
## Considerations for Using the Data
### Social Impact of Dataset
We hope this corpus contributes to the development of language models in Catalan, a low-resource language.
### Discussion of Biases
We are aware that this data might contain biases. We have not applied any steps to reduce their impact.
### Other Known Limitations
[N/A]
## Additional Information
### Dataset Curators
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL).
This work was funded by the Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya within the framework of Projecte AINA.
### Licensing Information
This work is licensed under a Creative Commons Attribution Non-commercial No-Derivatives 4.0 International License.
### Contributions
We want to thank GuiaCat for providing this data.
| [
"# Dataset Card for GuiaCat",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Point of Contact: URL@URL",
"### Dataset Summary\n\nGuiaCat is a dataset consisting of 5.750 restaurant reviews in Catalan, with 5 associated scores and a label of sentiment. The data was provided by GuiaCat and curated by the BSC. \n\nThis work is licensed under a Creative Commons Attribution Non-commercial No-Derivatives 4.0 International License.",
"### Supported Tasks and Leaderboards\n\nThis corpus is mainly intended for sentiment analysis.",
"### Languages\n\nThe dataset is in Catalan ('ca-ES').",
"## Dataset Structure\n\nThe dataset consists of restaurant reviews labelled with 5 scores: service, food, price-quality, environment, and average. Reviews also have a sentiment label, derived from the average score, all stored as a csv file.",
"### Data Instances",
"### Data Fields\n- service: a score from 0 to 10 grading the service\n- food: a score from 0 to 10 grading the food\n- price-quality: a score from 0 to 10 grading the relation between price and quality\n- environment: a score from 0 to 10 grading the environment\n- avg: average of all the scores\n- text: the review\n- label: it can be \"molt bo\", \"bo\", \"regular\", \"dolent\", \"molt dolent\"",
"### Data Splits\n\n* URL: 500 examples\n* URL: 500 examples\n* URL: 4,750 examples",
"## Dataset Creation",
"### Curation Rationale\n\nWe created this corpus to contribute to the development of language models in Catalan, a low-resource language.",
"### Source Data\n\nThe data of this dataset has been provided by GuiaCat.",
"#### Initial Data Collection and Normalization\n\n[N/A]",
"#### Who are the source language producers?\n\nThe language producers were the users from GuiaCat.",
"### Annotations\n\nThe annotations are automatically derived from the scores that the users provided while reviewing the restaurants.",
"#### Annotation process\n\nThe mapping between average scores and labels is:\n- Higher than 8: molt bo\n- Between 8 and 6: bo\n- Between 6 and 4: regular\n- Between 4 and 2: dolent\n- Less than 2: molt dolent",
"#### Who are the annotators?\n\nUsers",
"### Personal and Sensitive Information\n\nNo personal information included, although it could contain hate or abusive language.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nWe hope this corpus contributes to the development of language models in Catalan, a low-resource language.",
"### Discussion of Biases\n\nWe are aware that this data might contain biases. We have not applied any steps to reduce their impact.",
"### Other Known Limitations\n\n[N/A]",
"## Additional Information",
"### Dataset Curators\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL).\n\nThis work was funded by the Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya within the framework of Projecte AINA.",
"### Licensing Information\n\nThis work is licensed under a Creative Commons Attribution Non-commercial No-Derivatives 4.0 International License.",
"### Contributions\n\nWe want to thank GuiaCat for providing this data."
] | [
"TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #task_ids-sentiment-scoring #annotations_creators-found #language_creators-found #multilinguality-monolingual #language-Catalan #license-cc-by-nc-nd-4.0 #region-us \n",
"# Dataset Card for GuiaCat",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Point of Contact: URL@URL",
"### Dataset Summary\n\nGuiaCat is a dataset consisting of 5.750 restaurant reviews in Catalan, with 5 associated scores and a label of sentiment. The data was provided by GuiaCat and curated by the BSC. \n\nThis work is licensed under a Creative Commons Attribution Non-commercial No-Derivatives 4.0 International License.",
"### Supported Tasks and Leaderboards\n\nThis corpus is mainly intended for sentiment analysis.",
"### Languages\n\nThe dataset is in Catalan ('ca-ES').",
"## Dataset Structure\n\nThe dataset consists of restaurant reviews labelled with 5 scores: service, food, price-quality, environment, and average. Reviews also have a sentiment label, derived from the average score, all stored as a csv file.",
"### Data Instances",
"### Data Fields\n- service: a score from 0 to 10 grading the service\n- food: a score from 0 to 10 grading the food\n- price-quality: a score from 0 to 10 grading the relation between price and quality\n- environment: a score from 0 to 10 grading the environment\n- avg: average of all the scores\n- text: the review\n- label: it can be \"molt bo\", \"bo\", \"regular\", \"dolent\", \"molt dolent\"",
"### Data Splits\n\n* URL: 500 examples\n* URL: 500 examples\n* URL: 4,750 examples",
"## Dataset Creation",
"### Curation Rationale\n\nWe created this corpus to contribute to the development of language models in Catalan, a low-resource language.",
"### Source Data\n\nThe data of this dataset has been provided by GuiaCat.",
"#### Initial Data Collection and Normalization\n\n[N/A]",
"#### Who are the source language producers?\n\nThe language producers were the users from GuiaCat.",
"### Annotations\n\nThe annotations are automatically derived from the scores that the users provided while reviewing the restaurants.",
"#### Annotation process\n\nThe mapping between average scores and labels is:\n- Higher than 8: molt bo\n- Between 8 and 6: bo\n- Between 6 and 4: regular\n- Between 4 and 2: dolent\n- Less than 2: molt dolent",
"#### Who are the annotators?\n\nUsers",
"### Personal and Sensitive Information\n\nNo personal information included, although it could contain hate or abusive language.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nWe hope this corpus contributes to the development of language models in Catalan, a low-resource language.",
"### Discussion of Biases\n\nWe are aware that this data might contain biases. We have not applied any steps to reduce their impact.",
"### Other Known Limitations\n\n[N/A]",
"## Additional Information",
"### Dataset Curators\n\nText Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@URL).\n\nThis work was funded by the Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya within the framework of Projecte AINA.",
"### Licensing Information\n\nThis work is licensed under a Creative Commons Attribution Non-commercial No-Derivatives 4.0 International License.",
"### Contributions\n\nWe want to thank GuiaCat for providing this data."
] |
f0f1642c872cb3fe346c1805b06c7f72900255f7 | ## Dataset Description
- **Homepage:** https://www.darrow.ai/
- **Repository:** https://github.com/darrow-labs/ClassActionPrediction
- **Paper:** https://arxiv.org/abs/2211.00582
- **Leaderboard:** N/A
- **Point of Contact:** [Gila Hayat](mailto:[email protected]),[Gil Semo](mailto:[email protected])
#### More Details & Collaborations
Feel free to contact us in order to get a larger dataset.
We would be happy to collaborate on future works.
### Dataset Summary
USClassActions is an English dataset of 3K complaints from the US Federal Court with the respective binarized judgment outcome (Win/Lose). The dataset poses a challenging text classification task. We are happy to share this dataset in order to promote robustness and fairness studies on the critical area of legal NLP. The data was annotated using Darrow.ai's proprietary tool.
### Data Instances
```python
from datasets import load_dataset
dataset = load_dataset('darrow-ai/USClassActions')
```
### Data Fields
`id`: (**int**) a unique identifier of the document \
`target_text`: (**str**) the complaint text \
`verdict`: (**str**) the outcome of the case \
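For a binarized judgment-prediction setup, one might map the verdict to integer labels. A minimal sketch, assuming the verdict strings follow the Win/Lose outcome described in the summary:

```python
from datasets import load_dataset

dataset = load_dataset('darrow-ai/USClassActions')

# Hypothetical mapping: the exact verdict strings are an assumption
# based on the binarized Win/Lose outcome described above.
def binarize(example):
    example["label"] = 1 if example["verdict"].strip().lower() == "win" else 0
    return example

dataset = dataset.map(binarize)
```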
### Curation Rationale
The dataset was curated by Darrow.ai (2022).
### Citation Information
*Gil Semo, Dor Bernsohn, Ben Hagag, Gila Hayat, and Joel Niklaus*
*ClassActionPrediction: A Challenging Benchmark for Legal Judgment Prediction of Class Action Cases in the US*
*Proceedings of the 2022 Natural Legal Language Processing Workshop. Abu Dhabi. 2022*
```
@InProceedings{Darrow-Niklaus-2022,
author = {Semo, Gil
and Bernsohn, Dor
and Hagag, Ben
and Hayat, Gila
and Niklaus, Joel},
title = {ClassActionPrediction: A Challenging Benchmark for Legal Judgment Prediction of Class Action Cases in the US},
booktitle = {Proceedings of the 2022 Natural Legal Language Processing Workshop},
year = {2022},
location = {Abu Dhabi, EMNLP2022},
}
``` | darrow-ai/USClassActions | [
"task_categories:text-classification",
"task_categories:zero-shot-classification",
"language:en",
"license:gpl-3.0",
"legal",
"legalnlp",
"class action",
"darrow",
"arxiv:2211.00582",
"region:us"
] | 2022-10-24T11:00:55+00:00 | {"language": ["en"], "license": "gpl-3.0", "task_categories": ["text-classification", "zero-shot-classification"], "tags": ["legal", "legalnlp", "class action", "darrow"]} | 2024-01-24T10:00:39+00:00 | [
"2211.00582"
] | [
"en"
] | TAGS
#task_categories-text-classification #task_categories-zero-shot-classification #language-English #license-gpl-3.0 #legal #legalnlp #class action #darrow #arxiv-2211.00582 #region-us
| ## Dataset Description
- Homepage: URL
- Repository: URL
- Paper: URL
- Leaderboard: N/A
- Point of Contact: Gila Hayat,Gil Semo
#### More Details & Collaborations
Feel free to contact us in order to get a larger dataset.
We would be happy to collaborate on future works.
### Dataset Summary
USClassActions is an English dataset of 3K complaints from the US Federal Court with the respective binarized judgment outcome (Win/Lose). The dataset poses a challenging text classification task. We are happy to share this dataset in order to promote robustness and fairness studies on the critical area of legal NLP. The data was annotated using URL's proprietary tool.
### Data Instances
### Data Fields
'id': (int) a unique identifier of the document \
'target_text': (str) the complaint text \
'verdict': (str) the outcome of the case \
### Curation Rationale
The dataset was curated by URL (2022).
*Gil Semo, Dor Bernsohn, Ben Hagag, Gila Hayat, and Joel Niklaus*
*ClassActionPrediction: A Challenging Benchmark for Legal Judgment Prediction of Class Action Cases in the US*
*Proceedings of the 2022 Natural Legal Language Processing Workshop. Abu Dhabi. 2022*
| [
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: N/A\n- Point of Contact: Gila Hayat,Gil Semo",
"#### More Details & Collaborations\nFeel free to contact us in order to get a larger dataset.\nWe would be happy to collaborate on future works.",
"### Dataset Summary\n\nUSClassActions is an English dataset of 3K complaints from the US Federal Court with the respective binarized judgment outcome (Win/Lose). The dataset poses a challenging text classification task. We are happy to share this dataset in order to promote robustness and fairness studies on the critical area of legal NLP. The data was annotated using URL proprietary tool.",
"### Data Instances",
"### Data Fields\n'id': (int) a unique identifier of the document \\\n'target_text': (str) the complaint text \\\n'verdict': (str) the outcome of the case \\",
"### Curation Rationale\n\nThe dataset was curated by URL (2022).\n\n\n\n*Gil Semo, Dor Bernsohn, Ben Hagag, Gila Hayat, and Joel Niklaus*\n*ClassActionPrediction: A Challenging Benchmark for Legal Judgment Prediction of Class Action Cases in the US*\n*Proceedings of the 2022 Natural Legal Language Processing Workshop. Abu Dhabi. 2022*"
] | [
"TAGS\n#task_categories-text-classification #task_categories-zero-shot-classification #language-English #license-gpl-3.0 #legal #legalnlp #class action #darrow #arxiv-2211.00582 #region-us \n",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: N/A\n- Point of Contact: Gila Hayat,Gil Semo",
"#### More Details & Collaborations\nFeel free to contact us in order to get a larger dataset.\nWe would be happy to collaborate on future works.",
"### Dataset Summary\n\nUSClassActions is an English dataset of 3K complaints from the US Federal Court with the respective binarized judgment outcome (Win/Lose). The dataset poses a challenging text classification task. We are happy to share this dataset in order to promote robustness and fairness studies on the critical area of legal NLP. The data was annotated using URL proprietary tool.",
"### Data Instances",
"### Data Fields\n'id': (int) a unique identifier of the document \\\n'target_text': (str) the complaint text \\\n'verdict': (str) the outcome of the case \\",
"### Curation Rationale\n\nThe dataset was curated by URL (2022).\n\n\n\n*Gil Semo, Dor Bernsohn, Ben Hagag, Gila Hayat, and Joel Niklaus*\n*ClassActionPrediction: A Challenging Benchmark for Legal Judgment Prediction of Class Action Cases in the US*\n*Proceedings of the 2022 Natural Legal Language Processing Workshop. Abu Dhabi. 2022*"
] |
91c9f5f11a05c71bc9a2a44628ce04d0b39d9cf0 |
# Dataset Card for Quasimodo
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/yago-naga/commonsense/quasimodo
- **Repository:** https://github.com/Aunsiels/CSK
- **Paper:** Romero et al., Commonsense Properties from Query Logs and Question Answering Forums, CIKM, 2019
### Dataset Summary
A commonsense knowledge base constructed automatically from question-answering forums and query logs.
### Supported Tasks and Leaderboards
Can be useful for tasks requiring external knowledge such as question answering.
### Languages
English
## Dataset Structure
### Data Instances
```python
{
"subject": "elephant",
"predicate": "has_body_part"
"object": "trunk",
"modality": "TBC[so long trunks] x#x2 // TBC[long trunks] x#x9 // TBC[big trunks] x#x6 // TBC[long trunk] x#x1 // TBC[such big trunks] x#x1 0 0.9999667967035647 elephants have trunks x#x34 x#xGoogle Autocomplete, Bing Autocomplete, Yahoo Questions, Answers.com Questions, Reddit Questions // a elephants have trunks x#x2 x#xGoogle Autocomplete // a elephant have a trunk x#x2 x#xGoogle Autocomplete // elephants have so long trunks x#x2 x#xGoogle Autocomplete // elephants have long trunks x#x8 x#xGoogle Autocomplete, Yahoo Questions, Answers.com Questions // elephants have big trunks x#x6 x#xGoogle Autocomplete, Answers.com Questions, Reddit Questions // elephants have trunk x#x3 x#xGoogle Autocomplete, Yahoo Questions // elephant have long trunks x#x1 x#xGoogle Autocomplete // elephant has a trunk x#x1 x#xGoogle Autocomplete // elephants have a trunk x#x2 x#xAnswers.com Questions // an elephant has a long trunk x#x1 x#xAnswers.com Questions // elephant have trunks x#x1 x#xAnswers.com Questions // elephants have such big trunks x#x1 x#xReddit Questions",
"score": 0.9999667967668732,
"local_sigma": 1.0
}
```
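To work with triples like the one above, a minimal loading sketch (whether the repo exposes a ready-made `train` split is an assumption):

```python
from datasets import load_dataset

# Sketch only: split and column names follow the instance shown above.
dataset = load_dataset("Aunsiels/Quasimodo")

row = dataset["train"][0]
print(row["subject"], row["predicate"], row["object"], row["score"])
```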
### Data Fields
- subject: The subject of the triple
- predicate: The predicate of the triple
- object: The object of the triple
- modality: Modalities associated with the triples with their counts. TBC means the object can be further refined to the listed objects
- is_negative: 1 if the statement was negated
- score: salience score of the supervised scoring model
- local_sigma: the strict conditional probability of observing a (predicate, object) with a specific subject, i.e., a measure of how unique a statement is. E.g., local_sigma(lawyers, defend, serial_killers) = 1, local_sigma(lawyers, make, money) = 0.01, even though both statements have a similar score of 0.99.
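One plausible reading of local_sigma, consistent with the lawyers examples above, is the share of (predicate, object) occurrences that carry a given subject. A sketch from raw triple counts (the paper's exact estimator may differ):

```python
from collections import Counter

def local_sigma_scores(triples):
    # triples: iterable of (subject, predicate, object) tuples.
    # local_sigma(s, p, o) = count(s, p, o) / count(*, p, o)
    spo = Counter(triples)
    po = Counter((p, o) for _, p, o in triples)
    return {(s, p, o): n / po[(p, o)] for (s, p, o), n in spo.items()}
```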
## Dataset Creation
See original paper.
## Additional Information
### Licensing Information
CC-BY 2.0
### Citation Information
Romero et al., Commonsense Properties from Query Logs and Question Answering Forums, CIKM, 2019
| Aunsiels/Quasimodo | [
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"source_datasets:original",
"language:en",
"license:cc-by-2.0",
"knowledge base",
"commonsense",
"region:us"
] | 2022-10-24T11:01:21+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["cc-by-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100M<n<1B"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["closed-domain-qa"], "pretty_name": "Quasimodo", "tags": ["knowledge base", "commonsense"]} | 2022-10-24T11:30:23+00:00 | [] | [
"en"
] | TAGS
#task_categories-question-answering #task_ids-closed-domain-qa #annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-100M<n<1B #source_datasets-original #language-English #license-cc-by-2.0 #knowledge base #commonsense #region-us
|
# Dataset Card for Quasimodo
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Dataset Creation
- Additional Information
- Licensing Information
- Citation Information
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper: Romero et al., Commonsense Properties from Query Logs and Question Answering Forums, CIKM, 2019
### Dataset Summary
A commonsense knowledge base constructed automatically from question-answering forums and query logs.
### Supported Tasks and Leaderboards
Can be useful for tasks requiring external knowledge such as question answering.
### Languages
English
## Dataset Structure
### Data Instances
### Data Fields
- subject: The subject of the triple
- predicate: The predicate of the triple
- object: The object of the triple
- modality: Modalities associated with the triples with their counts. TBC means the object can be further refined to the listed objects
- is_negative: 1 if the statement was negated
- score: salience score of the supervised scoring model
- local_sigma: the strict conditional probability of observing a (predicate, object) with a specific subject, i.e., a measure of how unique a statement is. E.g., local_sigma(lawyers, defend, serial_killers) = 1, local_sigma(lawyers, make, money) = 0.01, even though both statements have a similar score of 0.99.
## Dataset Creation
See original paper.
## Additional Information
### Licensing Information
CC-BY 2.0
Romero et al., Commonsense Properties from Query Logs and Question Answering Forums, CIKM, 2019
| [
"# Dataset Card for Quasimodo",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n- Dataset Creation\n- Additional Information\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Romero et al., Commonsense Properties from Query Logs and Question Answering Forums, CIKM, 2019",
"### Dataset Summary\n\nA commonsense knowledge base constructed automatically from question-answering forums and query logs.",
"### Supported Tasks and Leaderboards\n\nCan be useful for tasks requiring external knowledge such as question answering.",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- subject: The subject of the triple\n- predicate: The predicate of the triple\n- object: The object of the triple\n- modality: Modalities associated with the triples with their counts. TBC means the object can be further refined to the listed objects\n- is_negative: 1 if the statement was negated\n- score: salience score of the supervised scoring model\n- local sigma: strict conditional probability of observing a (predicate, object) with a specific subject. I.e., a measure of how unique a statement is. E.g., local_sigma(lawyers, defend, serial_killers) = 1, local_sigma(lawyers, make, money) = 0.01, even though both statements have a similar score of 0.99.",
"## Dataset Creation\n\nSee original paper.",
"## Additional Information",
"### Licensing Information\n\nCC-BY 2.0\n\n\n\nRomero et al., Commonsense Properties from Query Logs and Question Answering Forums, CIKM, 2019"
] | [
"TAGS\n#task_categories-question-answering #task_ids-closed-domain-qa #annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-100M<n<1B #source_datasets-original #language-English #license-cc-by-2.0 #knowledge base #commonsense #region-us \n",
"# Dataset Card for Quasimodo",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n- Dataset Creation\n- Additional Information\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Romero et al., Commonsense Properties from Query Logs and Question Answering Forums, CIKM, 2019",
"### Dataset Summary\n\nA commonsense knowledge base constructed automatically from question-answering forums and query logs.",
"### Supported Tasks and Leaderboards\n\nCan be useful for tasks requiring external knowledge such as question answering.",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- subject: The subject of the triple\n- predicate: The predicate of the triple\n- object: The object of the triple\n- modality: Modalities associated with the triples with their counts. TBC means the object can be further refined to the listed objects\n- is_negative: 1 if the statement was negated\n- score: salience score of the supervised scoring model\n- local sigma: strict conditional probability of observing a (predicate, object) with a specific subject. I.e., a measure of how unique a statement is. E.g., local_sigma(lawyers, defend, serial_killers) = 1, local_sigma(lawyers, make, money) = 0.01, even though both statements have a similar score of 0.99.",
"## Dataset Creation\n\nSee original paper.",
"## Additional Information",
"### Licensing Information\n\nCC-BY 2.0\n\n\n\nRomero et al., Commonsense Properties from Query Logs and Question Answering Forums, CIKM, 2019"
] |
326a090671e5d16285a76878114dc54704a26e4b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: dslim/bert-large-NER
* Dataset: conll2003
* Config: conll2003
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@rdecoupes](https://huggingface.co/rdecoupes) for evaluating this model. | autoevaluate/autoeval-eval-conll2003-conll2003-623e8b-1865063750 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-24T14:01:11+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["conll2003"], "eval_info": {"task": "entity_extraction", "model": "dslim/bert-large-NER", "metrics": [], "dataset_name": "conll2003", "dataset_config": "conll2003", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}} | 2022-10-24T14:03:21+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Token Classification
* Model: dslim/bert-large-NER
* Dataset: conll2003
* Config: conll2003
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @rdecoupes for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: dslim/bert-large-NER\n* Dataset: conll2003\n* Config: conll2003\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @rdecoupes for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: dslim/bert-large-NER\n* Dataset: conll2003\n* Config: conll2003\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @rdecoupes for evaluating this model."
] |
e7f950d67b7cae3e03abd83c243f2933e1823bb5 | # Large Labelled Logo Dataset | LHF/l3d | [
"region:us"
] | 2022-10-24T14:31:20+00:00 | {} | 2023-01-02T19:41:27+00:00 | [] | [] | TAGS
#region-us
| # Large Labelled Logo Dataset | [
"# Large Labelled Logo Dataset"
] | [
"TAGS\n#region-us \n",
"# Large Labelled Logo Dataset"
] |
48f363dd35ced1e473e9efdf11e55046145d4ba8 | This repo contains all the docs published on https://huggingface.co/docs.
The docs are generated with https://github.com/huggingface/doc-builder. | hf-doc-build/doc-build | [
"license:mit",
"region:us"
] | 2022-10-24T14:39:05+00:00 | {"license": "mit", "pretty_name": "Generated Docs for HF"} | 2024-02-17T00:41:19+00:00 | [] | [] | TAGS
#license-mit #region-us
| This repo contains all the docs published on URL
The docs are generated with URL | [] | [
"TAGS\n#license-mit #region-us \n"
] |
b00dc249a422f746fa6f3fe520e9dc1948b827f1 |
# Flame Surge Style Embedding / Textual Inversion
## Usage
To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder.
To use it in a prompt: ```"art by flame_surge_style"```
If it is too strong, just add [] around it.
Trained until 15000 steps
I added a 7.5k-step trained version in the files as well. If you want to use that version, remove the ```"-7500"``` from the file name and use it to replace the 15k-step version in your folder.
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/GwRM6jf.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/vueZJGB.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/GnscYKw.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/VOyrp21.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/KlpeUpB.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | Nerfgun3/flame_surge_style | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"region:us"
] | 2022-10-24T18:18:40+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false} | 2022-10-24T18:39:09+00:00 | [] | [
"en"
] | TAGS
#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us
| Flame Surge Style Embedding / Textual Inversion
===============================================
Usage
-----
To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder.
To use it in a prompt:
If it is too strong, just add [] around it.
Trained until 15000 steps
I added a 7.5k-step trained version in the files as well. If you want to use that version, remove the "-7500" from the file name and use it to replace the 15k-step version in your folder.
Have fun :)
Example Pictures
----------------
License
-------
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license here
| [] | [
"TAGS\n#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us \n"
] |
0fe3d57b821a925081220f954b454f10ace87af8 |
### Dataset Contents
This dataset contains the concatenated scripts from the original (and best) Star Wars trilogy. The scripts are reduced to dialogue only, and are tagged with a line number and speaker.
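A sketch of pulling the data from the Hub; the exact file layout inside the repo is an assumption, so you may need to point `data_files` at the right file:

```python
from datasets import load_dataset

# Sketch only: adjust data_files if the generic loader cannot infer the format.
dataset = load_dataset("andrewkroening/Star-wars-scripts-dialogue-IV-VI")
print(dataset)
```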
### Dataset Disclaimer
I don't own this data, or Star Wars. But it would be cool if I did.

Star Wars is owned by Lucasfilm. I do not own any of the rights to this information.
The scripts are derived from a couple sources:
* This [GitHub Repo](https://github.com/gastonstat/StarWars) with raw files
* A [Kaggle Dataset](https://www.kaggle.com/datasets/xvivancos/star-wars-movie-scripts) put together by whoever 'Xavier' is
### May the Force be with you | andrewkroening/Star-wars-scripts-dialogue-IV-VI | [
"license:cc",
"region:us"
] | 2022-10-24T18:31:55+00:00 | {"license": "cc"} | 2022-10-27T16:53:39+00:00 | [] | [] | TAGS
#license-cc #region-us
|
### Dataset Contents
This dataset contains the concatenated scripts from the original (and best) Star Wars trilogy. The scripts are reduced to dialogue only, and are tagged with a line number and speaker.
### Dataset Disclaimer
I don't own this data, or Star Wars. But it would be cool if I did.

Star Wars is owned by Lucasfilm. I do not own any of the rights to this information.
The scripts are derived from a couple sources:
* This GitHub Repo with raw files
* A Kaggle Dataset put together by whoever 'Xavier' is
### May the Force be with you | [
"### Dataset Contents\n\nThis dataset contains the concatenated scripts from the original (and best) Star Wars trilogy. The scripts are reduced to dialogue only, and are tagged with a line number and speaker.",
"### Dataset Disclaimer\n\nI don't own this data; or Star Wars. But it would be cool if I did.\n\nStar Wars is owned by Lucasfilms. I do not own any of the rights to this information.\n\nThe scripts are derived from a couple sources:\n\n* This GitHub Repo with raw files\n\n* A Kaggle Dataset put together by whoever 'Xavier' is",
"### May the Force be with you"
] | [
"TAGS\n#license-cc #region-us \n",
"### Dataset Contents\n\nThis dataset contains the concatenated scripts from the original (and best) Star Wars trilogy. The scripts are reduced to dialogue only, and are tagged with a line number and speaker.",
"### Dataset Disclaimer\n\nI don't own this data; or Star Wars. But it would be cool if I did.\n\nStar Wars is owned by Lucasfilms. I do not own any of the rights to this information.\n\nThe scripts are derived from a couple sources:\n\n* This GitHub Repo with raw files\n\n* A Kaggle Dataset put together by whoever 'Xavier' is",
"### May the Force be with you"
] |
61f49d80d69c6208a9bfffb1cab4b98c9a9accf8 |
# Literature Dataset
## Files
A dataset containing novels, epics and essays.
The files are as follows:
- main.txt, a file with all the texts, every text on a newline, all English
- vocab.txt, a file with the trained (BERT) vocab, a newline a new word
 - train.csv, a file of token-id sequences of length 129 (CSV of ints), containing 48,758 samples (6,289,782 tokens); see the sketch after this list
 - test.csv, the test split in the same format, with 5,417 samples (698,793 tokens)
- DatasetDistribution.png, a file with all the texts and a plot with character length
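As referenced above, a sketch of reading the pre-tokenised split, assuming each row of train.csv is one comma-separated sequence of 129 token ids and vocab.txt maps line number to token:

```python
import csv

# Sketch under the assumptions stated above, not a shipped loader.
with open("train.csv", newline="") as f:
    sequences = [[int(tok) for tok in row] for row in csv.reader(f)]

with open("vocab.txt", encoding="utf-8") as f:
    vocab = [line.rstrip("\n") for line in f]  # one (BERT) vocab entry per line

print(len(sequences), "sequences of length", len(sequences[0]))
print(" ".join(vocab[t] for t in sequences[0][:20]))
```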
## Texts
The texts used are these:
- Wuthering Heights
- Ulysses
- Treasure Island
- The War of the Worlds
- The Republic
- The Prophet
- The Prince
- The Picture of Dorian Gray
- The Odyssey
- The Great Gatsby
- The Brothers Karamazov
 - Second Treatise of Government
- Pride and Prejudice
- Peter Pan
- Moby Dick
- Metamorphosis
- Little Women
- Les Misérables
- Japanese Girls and Women
- Iliad
- Heart of Darkness
- Grimms' Fairy Tales
- Great Expectations
- Frankenstein
- Emma
- Dracula
- Don Quixote
- Crime and Punishment
- Christmas Carol
- Beyond Good and Evil
- Anna Karenina
- Adventures of Sherlock Holmes
- Adventures of Huckleberry Finn
- Adventures in Wonderland
- A Tale of Two Cities
- A Room with A View | ACOSharma/literature | [
"license:cc-by-sa-4.0",
"region:us"
] | 2022-10-24T20:56:25+00:00 | {"license": "cc-by-sa-4.0"} | 2022-10-28T14:38:43+00:00 | [] | [] | TAGS
#license-cc-by-sa-4.0 #region-us
|
# Literature Dataset
## Files
A dataset containing novels, epics and essays.
The files are as follows:
- URL, a file with all the texts, every text on a newline, all English
- URL, a file with the trained (BERT) vocab, a newline a new word
 - URL, a file of token-id sequences of length 129 (CSV of ints), containing 48,758 samples (6,289,782 tokens)
 - URL, the test split in the same format, with 5,417 samples (698,793 tokens)
- URL, a file with all the texts and a plot with character length
## Texts
The texts used are these:
- Wuthering Heights
- Ulysses
- Treasure Island
- The War of the Worlds
- The Republic
- The Prophet
- The Prince
- The Picture of Dorian Gray
- The Odyssey
- The Great Gatsby
- The Brothers Karamazov
 - Second Treatise of Government
- Pride and Prejudice
- Peter Pan
- Moby Dick
- Metamorphosis
- Little Women
- Les Misérables
- Japanese Girls and Women
- Iliad
- Heart of Darkness
- Grimms' Fairy Tales
- Great Expectations
- Frankenstein
- Emma
- Dracula
- Don Quixote
- Crime and Punishment
- Christmas Carol
- Beyond Good and Evil
- Anna Karenina
- Adventures of Sherlock Holmes
- Adventures of Huckleberry Finn
- Adventures in Wonderland
- A Tale of Two Cities
- A Room with A View | [
"# Literature Dataset",
"## Files\nA dataset containing novels, epics and essays.\nThe files are as follows:\n - URL, a file with all the texts, every text on a newline, all English\n - URL, a file with the trained (BERT) vocab, a newline a new word\n - URL, a file with length 129 sequences of tokens, csv of ints, containing 48,758 samples (6,289,782 tokens)\n - URL, the test split in the same way, 5,417 samples (698,793 tokens)\n - URL, a file with all the texts and a plot with character length",
"## Texts\nThe texts used are these:\n - Wuthering Heights\n - Ulysses\n - Treasure Island\n - The War of the Worlds\n - The Republic\n - The Prophet\n - The Prince\n - The Picture of Dorian Gray\n - The Odyssey\n - The Great Gatsby\n - The Brothers Karamazov\n - Second Treatise of Goverment\n - Pride and Prejudice\n - Peter Pan\n - Moby Dick\n - Metamorphosis\n - Little Women\n - Les Misérables\n - Japanese Girls and Women\n - Iliad\n - Heart of Darkness\n - Grimms' Fairy Tales\n - Great Expectations\n - Frankenstein\n - Emma\n - Dracula\n - Don Quixote\n - Crime and Punishment\n - Christmas Carol\n - Beyond Good and Evil\n - Anna Karenina\n - Adventures of Sherlock Holmes\n - Adventures of Huckleberry Finn\n - Adventures in Wonderland\n - A Tale of Two Cities\n - A Room with A View"
] | [
"TAGS\n#license-cc-by-sa-4.0 #region-us \n",
"# Literature Dataset",
"## Files\nA dataset containing novels, epics and essays.\nThe files are as follows:\n - URL, a file with all the texts, every text on a newline, all English\n - URL, a file with the trained (BERT) vocab, a newline a new word\n - URL, a file with length 129 sequences of tokens, csv of ints, containing 48,758 samples (6,289,782 tokens)\n - URL, the test split in the same way, 5,417 samples (698,793 tokens)\n - URL, a file with all the texts and a plot with character length",
"## Texts\nThe texts used are these:\n - Wuthering Heights\n - Ulysses\n - Treasure Island\n - The War of the Worlds\n - The Republic\n - The Prophet\n - The Prince\n - The Picture of Dorian Gray\n - The Odyssey\n - The Great Gatsby\n - The Brothers Karamazov\n - Second Treatise of Goverment\n - Pride and Prejudice\n - Peter Pan\n - Moby Dick\n - Metamorphosis\n - Little Women\n - Les Misérables\n - Japanese Girls and Women\n - Iliad\n - Heart of Darkness\n - Grimms' Fairy Tales\n - Great Expectations\n - Frankenstein\n - Emma\n - Dracula\n - Don Quixote\n - Crime and Punishment\n - Christmas Carol\n - Beyond Good and Evil\n - Anna Karenina\n - Adventures of Sherlock Holmes\n - Adventures of Huckleberry Finn\n - Adventures in Wonderland\n - A Tale of Two Cities\n - A Room with A View"
] |
fb620fbe49fa4420e0734bd9c0df11f51176b61f |
# DiffusionDB
<img width="100%" src="https://user-images.githubusercontent.com/15007159/201762588-f24db2b8-dbb2-4a94-947b-7de393fc3d33.gif">
## Table of Contents
- [DiffusionDB](#diffusiondb)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Two Subsets](#two-subsets)
- [Key Differences](#key-differences)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Metadata](#dataset-metadata)
- [Metadata Schema](#metadata-schema)
- [Data Splits](#data-splits)
- [Loading Data Subsets](#loading-data-subsets)
- [Method 1: Using Hugging Face Datasets Loader](#method-1-using-hugging-face-datasets-loader)
- [Method 2. Use the PoloClub Downloader](#method-2-use-the-poloclub-downloader)
- [Usage/Examples](#usageexamples)
- [Downloading a single file](#downloading-a-single-file)
- [Downloading a range of files](#downloading-a-range-of-files)
- [Downloading to a specific directory](#downloading-to-a-specific-directory)
- [Setting the files to unzip once they've been downloaded](#setting-the-files-to-unzip-once-theyve-been-downloaded)
- [Method 3. Use `metadata.parquet` (Text Only)](#method-3-use-metadataparquet-text-only)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [DiffusionDB homepage](https://poloclub.github.io/diffusiondb)
- **Repository:** [DiffusionDB repository](https://github.com/poloclub/diffusiondb)
- **Distribution:** [DiffusionDB Hugging Face Dataset](https://huggingface.co/datasets/poloclub/diffusiondb)
- **Paper:** [DiffusionDB: A Large-scale Prompt Gallery Dataset for Text-to-Image Generative Models](https://arxiv.org/abs/2210.14896)
- **Point of Contact:** [Jay Wang](mailto:[email protected])
### Dataset Summary
DiffusionDB is the first large-scale text-to-image prompt dataset. It contains **14 million** images generated by Stable Diffusion using prompts and hyperparameters specified by real users.
DiffusionDB is publicly available at [🤗 Hugging Face Dataset](https://huggingface.co/datasets/poloclub/diffusiondb).
### Supported Tasks and Leaderboards
The unprecedented scale and diversity of this human-actuated dataset provide exciting research opportunities in understanding the interplay between prompts and generative models, detecting deepfakes, and designing human-AI interaction tools to help users more easily use these models.
### Languages
The text in the dataset is mostly English. It also contains other languages such as Spanish, Chinese, and Russian.
### Two Subsets
DiffusionDB provides two subsets (DiffusionDB 2M and DiffusionDB Large) to support different needs.
|Subset|Num of Images|Num of Unique Prompts|Size|Image Directory|Metadata Table|
|:--|--:|--:|--:|--:|--:|
|DiffusionDB 2M|2M|1.5M|1.6TB|`images/`|`metadata.parquet`|
|DiffusionDB Large|14M|1.8M|6.5TB|`diffusiondb-large-part-1/` `diffusiondb-large-part-2/`|`metadata-large.parquet`|
##### Key Differences
1. The two subsets have a similar number of unique prompts, but DiffusionDB Large has many more images. DiffusionDB Large is a superset of DiffusionDB 2M.
2. Images in DiffusionDB 2M are stored in `png` format; images in DiffusionDB Large use a lossless `webp` format.
## Dataset Structure
We use a modularized file structure to distribute DiffusionDB. The 2 million images in DiffusionDB 2M are split into 2,000 folders, where each folder contains 1,000 images and a JSON file that links these 1,000 images to their prompts and hyperparameters. Similarly, the 14 million images in DiffusionDB Large are split into 14,000 folders.
```bash
# DiffusionDB 2M
./
├── images
│ ├── part-000001
│ │ ├── 3bfcd9cf-26ea-4303-bbe1-b095853f5360.png
│ │ ├── 5f47c66c-51d4-4f2c-a872-a68518f44adb.png
│ │ ├── 66b428b9-55dc-4907-b116-55aaa887de30.png
│ │ ├── [...]
│ │ └── part-000001.json
│ ├── part-000002
│ ├── part-000003
│ ├── [...]
│ └── part-002000
└── metadata.parquet
```
```bash
# DiffusionDB Large
./
├── diffusiondb-large-part-1
│ ├── part-000001
│ │ ├── 0a8dc864-1616-4961-ac18-3fcdf76d3b08.webp
│ │ ├── 0a25cacb-5d91-4f27-b18a-bd423762f811.webp
│ │ ├── 0a52d584-4211-43a0-99ef-f5640ee2fc8c.webp
│ │ ├── [...]
│ │ └── part-000001.json
│ ├── part-000002
│ ├── part-000003
│ ├── [...]
│ └── part-010000
├── diffusiondb-large-part-2
│ ├── part-010001
│ │ ├── 0a68f671-3776-424c-91b6-c09a0dd6fc2d.webp
│ │ ├── 0a0756e9-1249-4fe2-a21a-12c43656c7a3.webp
│ │ ├── 0aa48f3d-f2d9-40a8-a800-c2c651ebba06.webp
│ │ ├── [...]
│   │   └── part-010001.json
│ ├── part-010002
│ ├── part-010003
│ ├── [...]
│ └── part-014000
└── metadata-large.parquet
```
These sub-folders have names `part-0xxxxx`, and each image has a unique name generated by [UUID Version 4](https://en.wikipedia.org/wiki/Universally_unique_identifier). The JSON file in a sub-folder has the same name as the sub-folder. Each image is a `PNG` file (DiffusionDB 2M) or a lossless `WebP` file (DiffusionDB Large). The JSON file contains key-value pairs mapping image filenames to their prompts and hyperparameters.
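As a quick sketch of working with these files (assuming the Zip files above have been downloaded and extracted, and that your Pillow build includes WebP support), images from both subsets open the same way:

```python
from PIL import Image

# Example filenames taken from the directory listings above.
img_2m = Image.open("images/part-000001/3bfcd9cf-26ea-4303-bbe1-b095853f5360.png")
img_large = Image.open("diffusiondb-large-part-1/part-000001/0a8dc864-1616-4961-ac18-3fcdf76d3b08.webp")
print(img_2m.size, img_large.size)
```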
### Data Instances
For example, below is the image of `f3501e05-aef7-4225-a9e9-f516527408ac.png` and its key-value pair in `part-000001.json`.
<img width="300" src="https://i.imgur.com/gqWcRs2.png">
```json
{
"f3501e05-aef7-4225-a9e9-f516527408ac.png": {
"p": "geodesic landscape, john chamberlain, christopher balaskas, tadao ando, 4 k, ",
"se": 38753269,
"c": 12.0,
"st": 50,
"sa": "k_lms"
  }
}
```
### Data Fields
- key: Unique image name
- `p`: Prompt
- `se`: Random seed
- `c`: CFG Scale (guidance scale)
- `st`: Steps
- `sa`: Sampler
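For example, a minimal sketch of looking up an image's prompt and hyperparameters from its part JSON file (assuming `part-000001` has been downloaded and unzipped):

```python
import json

# The JSON file shares its name with the sub-folder that contains it.
with open("images/part-000001/part-000001.json", encoding="utf-8") as f:
    part_metadata = json.load(f)

record = part_metadata["f3501e05-aef7-4225-a9e9-f516527408ac.png"]
print(record["p"])   # prompt
print(record["se"])  # random seed
print(record["c"])   # CFG scale
```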
### Dataset Metadata
To help you easily access prompts and other attributes of images without downloading all the Zip files, we include two metadata tables `metadata.parquet` and `metadata-large.parquet` for DiffusionDB 2M and DiffusionDB Large, respectively.
The shape of `metadata.parquet` is (2000000, 13) and the shape of `metadata-large.parquet` is (14000000, 13). The two tables share the same schema, and each row represents an image. We store these tables in the Parquet format because Parquet is column-based: you can efficiently query individual columns (e.g., prompts) without reading the entire table.
Below are three random rows from `metadata.parquet`.
| image_name | prompt | part_id | seed | step | cfg | sampler | width | height | user_name | timestamp | image_nsfw | prompt_nsfw |
|:-----------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------:|-----------:|-------:|------:|----------:|--------:|---------:|:-----------------------------------------------------------------|:--------------------------|-------------:|--------------:|
| 0c46f719-1679-4c64-9ba9-f181e0eae811.png | a small liquid sculpture, corvette, viscous, reflective, digital art | 1050 | 2026845913 | 50 | 7 | 8 | 512 | 512 | c2f288a2ba9df65c38386ffaaf7749106fed29311835b63d578405db9dbcafdb | 2022-08-11 09:05:00+00:00 | 0.0845108 | 0.00383462 |
| a00bdeaa-14eb-4f6c-a303-97732177eae9.png | human sculpture of lanky tall alien on a romantic date at italian restaurant with smiling woman, nice restaurant, photography, bokeh | 905 | 1183522603 | 50 | 10 | 8 | 512 | 768 | df778e253e6d32168eb22279a9776b3cde107cc82da05517dd6d114724918651 | 2022-08-19 17:55:00+00:00 | 0.692934 | 0.109437 |
| 6e5024ce-65ed-47f3-b296-edb2813e3c5b.png | portrait of barbaric spanish conquistador, symmetrical, by yoichi hatakenaka, studio ghibli and dan mumford | 286 | 1713292358 | 50 | 7 | 8 | 512 | 640 | 1c2e93cfb1430adbd956be9c690705fe295cbee7d9ac12de1953ce5e76d89906 | 2022-08-12 03:26:00+00:00 | 0.0773138 | 0.0249675 |
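Because the format is columnar, you can pull out a single column, such as the prompts, without loading the other twelve. A sketch assuming pandas with a Parquet engine (e.g., pyarrow) installed:

```python
import pandas as pd

# Only the prompt column is read from disk; the other columns are skipped.
prompts = pd.read_parquet("metadata.parquet", columns=["prompt"])
print(prompts.head())
```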
#### Metadata Schema
`metadata.parquet` and `metadata-large.parquet` share the same schema.
|Column|Type|Description|
|:---|:---|:---|
|`image_name`|`string`|Image UUID filename.|
|`prompt`|`string`|The text prompt used to generate this image.|
|`part_id`|`uint16`|Folder ID of this image.|
|`seed`|`uint32`| Random seed used to generate this image.|
|`step`|`uint16`| Step count (hyperparameter).|
|`cfg`|`float32`| Guidance scale (hyperparameter).|
|`sampler`|`uint8`| Sampler method (hyperparameter). Mapping: `{1: "ddim", 2: "plms", 3: "k_euler", 4: "k_euler_ancestral", 5: "k_heun", 6: "k_dpm_2", 7: "k_dpm_2_ancestral", 8: "k_lms", 9: "others"}`.|
|`width`|`uint16`|Image width.|
|`height`|`uint16`|Image height.|
|`user_name`|`string`|The SHA256 hash of the Discord ID of the user who generated this image. For example, the hash for `xiaohk#3146` is `e285b7ef63be99e9107cecd79b280bde602f17e0ca8363cb7a0889b67f0b5ed0`. "deleted_account" refers to users who have deleted their accounts. None means the image had been deleted before we scraped it a second time.|
|`timestamp`|`timestamp`|UTC timestamp when this image was generated. None means the image had been deleted before we scraped it a second time. Note that the timestamp is not accurate for duplicate images that have the same prompt, hyperparameters, width, and height.|
|`image_nsfw`|`float32`|Likelihood of an image being NSFW. Scores are predicted by [LAION's state-of-the-art NSFW detector](https://github.com/LAION-AI/LAION-SAFETY) (ranging from 0 to 1). A score of 2.0 means the image has already been flagged as NSFW and blurred by Stable Diffusion.|
|`prompt_nsfw`|`float32`|Likelihood of a prompt being NSFW. Scores are predicted by the library [Detoxify](https://github.com/unitaryai/detoxify). Each score represents the maximum of `toxicity` and `sexual_explicit` (ranging from 0 to 1).|
> **Warning**
> Although the Stable Diffusion model has an NSFW filter that automatically blurs user-generated NSFW images, this NSFW filter is not perfect, and DiffusionDB still contains some NSFW images. Therefore, we compute and provide NSFW scores for both images and prompts using state-of-the-art models. The distribution of these scores is shown below. Please choose an appropriate NSFW score threshold to filter out NSFW images before using DiffusionDB in your projects.
<img src="https://i.imgur.com/1RiGAXL.png" width="100%">
### Data Splits
For DiffusionDB 2M, we split 2 million images into 2,000 folders where each folder contains 1,000 images and a JSON file. For DiffusionDB Large, we split 14 million images into 14,000 folders where each folder contains 1,000 images and a JSON file.
### Loading Data Subsets
DiffusionDB is large (1.6 TB or 6.5 TB)! However, with our modularized file structure, you can easily load a desired number of images and their prompts and hyperparameters. In the [`example-loading.ipynb`](https://github.com/poloclub/diffusiondb/blob/main/notebooks/example-loading.ipynb) notebook, we demonstrate three methods to load a subset of DiffusionDB. Below is a short summary.
#### Method 1: Using Hugging Face Datasets Loader
You can use the Hugging Face [`Datasets`](https://huggingface.co/docs/datasets/quickstart) library to easily load prompts and images from DiffusionDB. We pre-defined 16 DiffusionDB subsets (configurations) based on the number of instances. You can see all subsets in the [Dataset Preview](https://huggingface.co/datasets/poloclub/diffusiondb/viewer/all/train).
```python
from datasets import load_dataset
# Load the dataset with the `large_random_1k` subset
dataset = load_dataset('poloclub/diffusiondb', 'large_random_1k')
```
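The loaded subset then behaves like any other 🤗 dataset. A small usage sketch (the exact field names are the ones shown in the Dataset Preview):

```python
# Inspect one example from the subset loaded above.
example = dataset['train'][0]
print(example.keys())     # see which fields this subset exposes
print(example['prompt'])  # the text prompt for this image
```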
#### Method 2. Use the PoloClub Downloader
This repo includes a Python downloader [`download.py`](https://github.com/poloclub/diffusiondb/blob/main/scripts/download.py) that allows you to download and load DiffusionDB. You can use it from your command line. Below is an example of loading a subset of DiffusionDB.
##### Usage/Examples
The script is run using command-line arguments as follows:
- `-i` `--index` - File to download or lower bound of a range of files if `-r` is also set.
- `-r` `--range` - Upper bound of range of files to download if `-i` is set.
- `-o` `--output` - Name of custom output directory. Defaults to the current directory if not set.
- `-z` `--unzip` - Unzip the file(s) after downloading.
- `-l` `--large` - Download from Diffusion DB Large. Defaults to Diffusion DB 2M.
###### Downloading a single file
The specific file to download is supplied as the number at the end of the filename on Hugging Face. The script will automatically pad the number and generate the URL.
```bash
python download.py -i 23
```
###### Downloading a range of files
The lower and upper bounds of the set of files to download are set by the `-i` and `-r` flags, respectively.
```bash
python download.py -i 1 -r 2000
```
Note that this range will download the entire dataset. The script will ask you to confirm that you have 1.7 TB free at the download destination.
###### Downloading to a specific directory
The script will default to the location of the dataset's `part` .zip files at `images/`. If you wish to move the download location, you should move these files as well or use a symbolic link.
```bash
python download.py -i 1 -r 2000 -o /home/$USER/datahoarding/etc
```
Again, the script will automatically add the `/` between the directory and the file when it downloads.
###### Setting the files to unzip once they've been downloaded
The script is set to unzip the files _after_ all files have been downloaded, as both downloading and unzipping can be lengthy processes in certain circumstances.
```bash
python download.py -i 1 -r 2000 -z
```
#### Method 3. Use `metadata.parquet` (Text Only)
If your task does not require images, then you can easily access all 2 million prompts and hyperparameters in the `metadata.parquet` table.
```python
from urllib.request import urlretrieve
import pandas as pd
# Download the parquet table
table_url = 'https://huggingface.co/datasets/poloclub/diffusiondb/resolve/main/metadata.parquet'
urlretrieve(table_url, 'metadata.parquet')
# Read the table using Pandas
metadata_df = pd.read_parquet('metadata.parquet')
```
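From here, prompts can be queried directly; for instance, a quick sketch of the most frequently repeated prompts:

```python
# Count how often each prompt re-occurs across the 2 million images.
top_prompts = metadata_df['prompt'].value_counts().head(10)
print(top_prompts)
```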
## Dataset Creation
### Curation Rationale
Recent diffusion models have gained immense popularity by enabling high-quality and controllable image generation based on text prompts written in natural language. Since the release of these models, people from different domains have quickly applied them to create award-winning artworks, synthetic radiology images, and even hyper-realistic videos.
However, generating images with desired details is difficult, as it requires users to write proper prompts specifying the exact expected results. Developing such prompts requires trial and error, and can often feel random and unprincipled. Simon Willison analogizes writing prompts to wizards learning “magical spells”: users do not understand why some prompts work, but they will add these prompts to their “spell book.” For example, to generate highly-detailed images, it has become a common practice to add special keywords such as “trending on artstation” and “unreal engine” in the prompt.
Prompt engineering has become a field of study in the context of text-to-text generation, where researchers systematically investigate how to construct prompts to effectively solve different downstream tasks. As large text-to-image models are relatively new, there is a pressing need to understand how these models react to prompts, how to write effective prompts, and how to design tools that help users generate images.
To help researchers tackle these critical challenges, we create DiffusionDB, the first large-scale prompt dataset with 14 million real prompt-image pairs.
### Source Data
#### Initial Data Collection and Normalization
We construct DiffusionDB by scraping user-generated images on the official Stable Diffusion Discord server. We choose Stable Diffusion because it is currently the only open-source large text-to-image generative model, and all generated images have a CC0 1.0 Universal Public Domain Dedication license that waives all copyright and allows use for any purpose. We choose the official [Stable Diffusion Discord server](https://discord.gg/stablediffusion) because it is public, and it has strict rules against generating and sharing illegal, hateful, or NSFW (not suitable for work, such as sexual and violent content) images. The server also disallows users from writing or sharing prompts that contain personal information.
#### Who are the source language producers?
The language producers are users of the official [Stable Diffusion Discord server](https://discord.gg/stablediffusion).
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
The authors removed the Discord usernames from the dataset.
We decided to anonymize the dataset because some prompts might include sensitive information: explicitly linking them to their creators could cause harm to the creators.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop better understanding of large text-to-image generative models.
The unprecedented scale and diversity of this human-actuated dataset provide exciting research opportunities in understanding the interplay between prompts and generative models, detecting deepfakes, and designing human-AI interaction tools to help users more easily use these models.
It should be noted that we collect images and their prompts from the Stable Diffusion Discord server. The Discord server has rules against users generating or sharing harmful or NSFW (not suitable for work, such as sexual and violent content) images. The Stable Diffusion model used in the server also has an NSFW filter that blurs the generated images if it detects NSFW content. However, it is still possible that some users generated harmful images that were not detected by the NSFW filter or removed by the server moderators. Therefore, DiffusionDB can potentially contain such images. To mitigate the potential harm, we provide a [Google Form](https://forms.gle/GbYaSpRNYqxCafMZ9) on the [DiffusionDB website](https://poloclub.github.io/diffusiondb/) where users can report harmful or inappropriate images and prompts. We will closely monitor this form and remove reported images and prompts from DiffusionDB.
### Discussion of Biases
The 14 million images in DiffusionDB have diverse styles and categories. However, Discord can be a biased data source. Our images come from channels where early users could generate images with Stable Diffusion through a bot before the model's public release. As these users started using Stable Diffusion before the model was public, we hypothesize that they are AI art enthusiasts and are likely to have experience with other text-to-image generative models. Therefore, the prompting style in DiffusionDB might not represent that of novice users. Similarly, the prompts in DiffusionDB might not generalize to domains that require specific knowledge, such as medical images.
### Other Known Limitations
**Generalizability.** Previous research has shown that a prompt that works well on one generative model might not give the optimal result when used with other models.
Therefore, different models may require users to write different prompts. For example, many Stable Diffusion prompts use commas to separate keywords, while this pattern is less common in prompts for DALL-E 2 or Midjourney. Thus, we caution researchers that some research findings from DiffusionDB might not be generalizable to other text-to-image generative models.
## Additional Information
### Dataset Curators
DiffusionDB is created by [Jay Wang](https://zijie.wang), [Evan Montoya](https://www.linkedin.com/in/evan-montoya-b252391b4/), [David Munechika](https://www.linkedin.com/in/dmunechika/), [Alex Yang](https://alexanderyang.me), [Ben Hoover](https://www.bhoov.com), [Polo Chau](https://faculty.cc.gatech.edu/~dchau/).
### Licensing Information
The DiffusionDB dataset is available under the [CC0 1.0 License](https://creativecommons.org/publicdomain/zero/1.0/).
The Python code in this repository is available under the [MIT License](https://github.com/poloclub/diffusiondb/blob/main/LICENSE).
### Citation Information
```bibtex
@article{wangDiffusionDBLargescalePrompt2022,
title = {{{DiffusionDB}}: {{A}} Large-Scale Prompt Gallery Dataset for Text-to-Image Generative Models},
author = {Wang, Zijie J. and Montoya, Evan and Munechika, David and Yang, Haoyang and Hoover, Benjamin and Chau, Duen Horng},
year = {2022},
journal = {arXiv:2210.14896 [cs]},
url = {https://arxiv.org/abs/2210.14896}
}
```
### Contributions
If you have any questions, feel free to [open an issue](https://github.com/poloclub/diffusiondb/issues/new) or contact [Jay Wang](https://zijie.wang).
| poloclub/diffusiondb | [
"task_categories:text-to-image",
"task_categories:image-to-text",
"task_ids:image-captioning",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:n>1T",
"source_datasets:original",
"language:en",
"license:cc0-1.0",
"stable diffusion",
"prompt engineering",
"prompts",
"research paper",
"arxiv:2210.14896",
"region:us"
] | 2022-10-25T01:25:28+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["cc0-1.0"], "multilinguality": ["multilingual"], "size_categories": ["n>1T"], "source_datasets": ["original"], "task_categories": ["text-to-image", "image-to-text"], "task_ids": ["image-captioning"], "pretty_name": "DiffusionDB", "layout": "default", "title": "Home", "nav_order": 1, "has_children": false, "tags": ["stable diffusion", "prompt engineering", "prompts", "research paper"]} | 2024-01-22T22:17:47+00:00 | [
"2210.14896"
] | [
"en"
]
#task_categories-text-to-image #task_categories-image-to-text #task_ids-image-captioning #annotations_creators-no-annotation #language_creators-found #multilinguality-multilingual #size_categories-n>1T #source_datasets-original #language-English #license-cc0-1.0 #stable diffusion #prompt engineering #prompts #research paper #arxiv-2210.14896 #region-us
| DiffusionDB
===========
<img width="100%" src="URL
Table of Contents
-----------------
* DiffusionDB
+ Table of Contents
+ Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Two Subsets
* Key Differences
+ Dataset Structure
- Data Instances
- Data Fields
- Dataset Metadata
* Metadata Schema
- Data Splits
- Loading Data Subsets
* Method 1: Using Hugging Face Datasets Loader
* Method 2. Use the PoloClub Downloader
+ Usage/Examples
- Downloading a single file
- Downloading a range of files
- Downloading to a specific directory
- Setting the files to unzip once they've been downloaded
* Method 3. Use 'metadata.parquet' (Text Only)
+ Dataset Creation
- Curation Rationale
- Source Data
* Initial Data Collection and Normalization
* Who are the source language producers?
- Annotations
* Annotation process
* Who are the annotators?
- Personal and Sensitive Information
+ Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
+ Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
Dataset Description
-------------------
* Homepage: DiffusionDB homepage
* Repository: DiffusionDB repository
* Distribution: DiffusionDB Hugging Face Dataset
* Paper: DiffusionDB: A Large-scale Prompt Gallery Dataset for Text-to-Image Generative Models
* Point of Contact: Jay Wang
### Dataset Summary
DiffusionDB is the first large-scale text-to-image prompt dataset. It contains 14 million images generated by Stable Diffusion using prompts and hyperparameters specified by real users.
DiffusionDB is publicly available at Hugging Face Dataset.
### Supported Tasks and Leaderboards
The unprecedented scale and diversity of this human-actuated dataset provide exciting research opportunities in understanding the interplay between prompts and generative models, detecting deepfakes, and designing human-AI interaction tools to help users more easily use these models.
### Languages
The text in the dataset is mostly English. It also contains other languages such as Spanish, Chinese, and Russian.
### Two Subsets
DiffusionDB provides two subsets (DiffusionDB 2M and DiffusionDB Large) to support different needs.
##### Key Differences
1. Two subsets have a similar number of unique prompts, but DiffusionDB Large has much more images. DiffusionDB Large is a superset of DiffusionDB 2M.
2. Images in DiffusionDB 2M are stored in 'png' format; images in DiffusionDB Large use a lossless 'webp' format.
Dataset Structure
-----------------
We use a modularized file structure to distribute DiffusionDB. The 2 million images in DiffusionDB 2M are split into 2,000 folders, where each folder contains 1,000 images and a JSON file that links these 1,000 images to their prompts and hyperparameters. Similarly, the 14 million images in DiffusionDB Large are split into 14,000 folders.
These sub-folders have names 'part-0xxxxx', and each image has a unique name generated by UUID Version 4. The JSON file in a sub-folder has the same name as the sub-folder. Each image is a 'PNG' file (DiffusionDB 2M) or a lossless 'WebP' file (DiffusionDB Large). The JSON file contains key-value pairs mapping image filenames to their prompts and hyperparameters.
### Data Instances
For example, below is the image of 'URL' and its key-value pair in 'URL'.
<img width="300" src="https://i.URL
### Data Fields
* key: Unique image name
* 'p': Prompt
* 'se': Random seed
* 'c': CFG Scale (guidance scale)
* 'st': Steps
* 'sa': Sampler
### Dataset Metadata
To help you easily access prompts and other attributes of images without downloading all the Zip files, we include two metadata tables 'metadata.parquet' and 'metadata-large.parquet' for DiffusionDB 2M and DiffusionDB Large, respectively.
The shape of 'metadata.parquet' is (2000000, 13) and the shape of 'metatable-large.parquet' is (14000000, 13). Two tables share the same schema, and each row represents an image. We store these tables in the Parquet format because Parquet is column-based: you can efficiently query individual columns (e.g., prompts) without reading the entire table.
Below are three random rows from 'metadata.parquet'.
#### Metadata Schema
'metadata.parquet' and 'metatable-large.parquet' share the same schema.
>
> Warning
> Although the Stable Diffusion model has an NSFW filter that automatically blurs user-generated NSFW images, this NSFW filter is not perfect—DiffusionDB still contains some NSFW images. Therefore, we compute and provide the NSFW scores for images and prompts using the state-of-the-art models. The distribution of these scores is shown below. Please decide an appropriate NSFW score threshold to filter out NSFW images before using DiffusionDB in your projects.
>
>
>
<img src="https://i.URL width="100%">
### Data Splits
For DiffusionDB 2M, we split 2 million images into 2,000 folders where each folder contains 1,000 images and a JSON file. For DiffusionDB Large, we split 14 million images into 14,000 folders where each folder contains 1,000 images and a JSON file.
### Loading Data Subsets
DiffusionDB is large (1.6TB or 6.5 TB)! However, with our modularized file structure, you can easily load a desirable number of images and their prompts and hyperparameters. In the 'URL' notebook, we demonstrate three methods to load a subset of DiffusionDB. Below is a short summary.
#### Method 1: Using Hugging Face Datasets Loader
You can use the Hugging Face 'Datasets' library to easily load prompts and images from DiffusionDB. We pre-defined 16 DiffusionDB subsets (configurations) based on the number of instances. You can see all subsets in the Dataset Preview.
#### Method 2. Use the PoloClub Downloader
This repo includes a Python downloader 'URL' that allows you to download and load DiffusionDB. You can use it from your command line. Below is an example of loading a subset of DiffusionDB.
##### Usage/Examples
The script is run using command-line arguments as follows:
* '-i' '--index' - File to download or lower bound of a range of files if '-r' is also set.
* '-r' '--range' - Upper bound of range of files to download if '-i' is set.
* '-o' '--output' - Name of custom output directory. Defaults to the current directory if not set.
* '-z' '--unzip' - Unzip the file/files after downloading
* '-l' '--large' - Download from Diffusion DB Large. Defaults to Diffusion DB 2M.
###### Downloading a single file
The specific file to download is supplied as the number at the end of the file on HuggingFace. The script will automatically pad the number out and generate the URL.
###### Downloading a range of files
The upper and lower bounds of the set of files to download are set by the '-i' and '-r' flags respectively.
Note that this range will download the entire dataset. The script will ask you to confirm that you have 1.7Tb free at the download destination.
###### Downloading to a specific directory
The script will default to the location of the dataset's 'part' .zip files at 'images/'. If you wish to move the download location, you should move these files as well or use a symbolic link.
Again, the script will automatically add the '/' between the directory and the file when it downloads.
###### Setting the files to unzip once they've been downloaded
The script is set to unzip the files *after* all files have downloaded as both can be lengthy processes in certain circumstances.
#### Method 3. Use 'metadata.parquet' (Text Only)
If your task does not require images, then you can easily access all 2 million prompts and hyperparameters in the 'metadata.parquet' table.
Dataset Creation
----------------
### Curation Rationale
Recent diffusion models have gained immense popularity by enabling high-quality and controllable image generation based on text prompts written in natural language. Since the release of these models, people from different domains have quickly applied them to create award-winning artworks, synthetic radiology images, and even hyper-realistic videos.
However, generating images with desired details is difficult, as it requires users to write proper prompts specifying the exact expected results. Developing such prompts requires trial and error, and can often feel random and unprincipled. Simon Willison analogizes writing prompts to wizards learning “magical spells”: users do not understand why some prompts work, but they will add these prompts to their “spell book.” For example, to generate highly-detailed images, it has become a common practice to add special keywords such as “trending on artstation” and “unreal engine” in the prompt.
Prompt engineering has become a field of study in the context of text-to-text generation, where researchers systematically investigate how to construct prompts to effectively solve different down-stream tasks. As large text-to-image models are relatively new, there is a pressing need to understand how these models react to prompts, how to write effective prompts, and how to design tools to help users generate images.
To help researchers tackle these critical challenges, we create DiffusionDB, the first large-scale prompt dataset with 14 million real prompt-image pairs.
### Source Data
#### Initial Data Collection and Normalization
We construct DiffusionDB by scraping user-generated images on the official Stable Diffusion Discord server. We choose Stable Diffusion because it is currently the only open-source large text-to-image generative model, and all generated images have a CC0 1.0 Universal Public Domain Dedication license that waives all copyright and allows uses for any purpose. We choose the official Stable Diffusion Discord server because it is public, and it has strict rules against generating and sharing illegal, hateful, or NSFW (not suitable for work, such as sexual and violent content) images. The server also disallows users to write or share prompts with personal information.
#### Who are the source language producers?
The language producers are users of the official Stable Diffusion Discord server.
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
The authors removed the discord usernames from the dataset.
We decide to anonymize the dataset because some prompts might include sensitive information: explicitly linking them to their creators can cause harm to creators.
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
The purpose of this dataset is to help develop better understanding of large text-to-image generative models.
The unprecedented scale and diversity of this human-actuated dataset provide exciting research opportunities in understanding the interplay between prompts and generative models, detecting deepfakes, and designing human-AI interaction tools to help users more easily use these models.
It should note that we collect images and their prompts from the Stable Diffusion Discord server. The Discord server has rules against users generating or sharing harmful or NSFW (not suitable for work, such as sexual and violent content) images. The Stable Diffusion model used in the server also has an NSFW filter that blurs the generated images if it detects NSFW content. However, it is still possible that some users had generated harmful images that were not detected by the NSFW filter or removed by the server moderators. Therefore, DiffusionDB can potentially contain these images. To mitigate the potential harm, we provide a Google Form on the DiffusionDB website where users can report harmful or inappropriate images and prompts. We will closely monitor this form and remove reported images and prompts from DiffusionDB.
### Discussion of Biases
The 14 million images in DiffusionDB have diverse styles and categories. However, Discord can be a biased data source. Our images come from channels where early users could use a bot to use Stable Diffusion before release. As these users had started using Stable Diffusion before the model was public, we hypothesize that they are AI art enthusiasts and are likely to have experience with other text-to-image generative models. Therefore, the prompting style in DiffusionDB might not represent novice users. Similarly, the prompts in DiffusionDB might not generalize to domains that require specific knowledge, such as medical images.
### Other Known Limitations
Generalizability. Previous research has shown a prompt that works well on one generative model might not give the optimal result when used in other models.
Therefore, different models can need users to write different prompts. For example, many Stable Diffusion prompts use commas to separate keywords, while this pattern is less seen in prompts for DALL-E 2 or Midjourney. Thus, we caution researchers that some research findings from DiffusionDB might not be generalizable to other text-to-image generative models.
Additional Information
----------------------
### Dataset Curators
DiffusionDB is created by Jay Wang, Evan Montoya, David Munechika, Alex Yang, Ben Hoover, Polo Chau.
### Licensing Information
The DiffusionDB dataset is available under the CC0 1.0 License.
The Python code in this repository is available under the MIT License.
### Contributions
If you have any questions, feel free to open an issue or contact Jay Wang.
| [
"### Dataset Summary\n\n\nDiffusionDB is the first large-scale text-to-image prompt dataset. It contains 14 million images generated by Stable Diffusion using prompts and hyperparameters specified by real users.\n\n\nDiffusionDB is publicly available at Hugging Face Dataset.",
"### Supported Tasks and Leaderboards\n\n\nThe unprecedented scale and diversity of this human-actuated dataset provide exciting research opportunities in understanding the interplay between prompts and generative models, detecting deepfakes, and designing human-AI interaction tools to help users more easily use these models.",
"### Languages\n\n\nThe text in the dataset is mostly English. It also contains other languages such as Spanish, Chinese, and Russian.",
"### Two Subsets\n\n\nDiffusionDB provides two subsets (DiffusionDB 2M and DiffusionDB Large) to support different needs.",
"##### Key Differences\n\n\n1. Two subsets have a similar number of unique prompts, but DiffusionDB Large has much more images. DiffusionDB Large is a superset of DiffusionDB 2M.\n2. Images in DiffusionDB 2M are stored in 'png' format; images in DiffusionDB Large use a lossless 'webp' format.\n\n\nDataset Structure\n-----------------\n\n\nWe use a modularized file structure to distribute DiffusionDB. The 2 million images in DiffusionDB 2M are split into 2,000 folders, where each folder contains 1,000 images and a JSON file that links these 1,000 images to their prompts and hyperparameters. Similarly, the 14 million images in DiffusionDB Large are split into 14,000 folders.\n\n\nThese sub-folders have names 'part-0xxxxx', and each image has a unique name generated by UUID Version 4. The JSON file in a sub-folder has the same name as the sub-folder. Each image is a 'PNG' file (DiffusionDB 2M) or a lossless 'WebP' file (DiffusionDB Large). The JSON file contains key-value pairs mapping image filenames to their prompts and hyperparameters.",
"### Data Instances\n\n\nFor example, below is the image of 'URL' and its key-value pair in 'URL'.\n\n\n<img width=\"300\" src=\"https://i.URL",
"### Data Fields\n\n\n* key: Unique image name\n* 'p': Prompt\n* 'se': Random seed\n* 'c': CFG Scale (guidance scale)\n* 'st': Steps\n* 'sa': Sampler",
"### Dataset Metadata\n\n\nTo help you easily access prompts and other attributes of images without downloading all the Zip files, we include two metadata tables 'metadata.parquet' and 'metadata-large.parquet' for DiffusionDB 2M and DiffusionDB Large, respectively.\n\n\nThe shape of 'metadata.parquet' is (2000000, 13) and the shape of 'metatable-large.parquet' is (14000000, 13). Two tables share the same schema, and each row represents an image. We store these tables in the Parquet format because Parquet is column-based: you can efficiently query individual columns (e.g., prompts) without reading the entire table.\n\n\nBelow are three random rows from 'metadata.parquet'.",
"#### Metadata Schema\n\n\n'metadata.parquet' and 'metatable-large.parquet' share the same schema.\n\n\n\n\n> \n> Warning\n> Although the Stable Diffusion model has an NSFW filter that automatically blurs user-generated NSFW images, this NSFW filter is not perfect—DiffusionDB still contains some NSFW images. Therefore, we compute and provide the NSFW scores for images and prompts using the state-of-the-art models. The distribution of these scores is shown below. Please decide an appropriate NSFW score threshold to filter out NSFW images before using DiffusionDB in your projects.\n> \n> \n> \n\n\n<img src=\"https://i.URL width=\"100%\">",
"### Data Splits\n\n\nFor DiffusionDB 2M, we split 2 million images into 2,000 folders where each folder contains 1,000 images and a JSON file. For DiffusionDB Large, we split 14 million images into 14,000 folders where each folder contains 1,000 images and a JSON file.",
"### Loading Data Subsets\n\n\nDiffusionDB is large (1.6TB or 6.5 TB)! However, with our modularized file structure, you can easily load a desirable number of images and their prompts and hyperparameters. In the 'URL' notebook, we demonstrate three methods to load a subset of DiffusionDB. Below is a short summary.",
"#### Method 1: Using Hugging Face Datasets Loader\n\n\nYou can use the Hugging Face 'Datasets' library to easily load prompts and images from DiffusionDB. We pre-defined 16 DiffusionDB subsets (configurations) based on the number of instances. You can see all subsets in the Dataset Preview.",
"#### Method 2. Use the PoloClub Downloader\n\n\nThis repo includes a Python downloader 'URL' that allows you to download and load DiffusionDB. You can use it from your command line. Below is an example of loading a subset of DiffusionDB.",
"##### Usage/Examples\n\n\nThe script is run using command-line arguments as follows:\n\n\n* '-i' '--index' - File to download or lower bound of a range of files if '-r' is also set.\n* '-r' '--range' - Upper bound of range of files to download if '-i' is set.\n* '-o' '--output' - Name of custom output directory. Defaults to the current directory if not set.\n* '-z' '--unzip' - Unzip the file/files after downloading\n* '-l' '--large' - Download from Diffusion DB Large. Defaults to Diffusion DB 2M.",
"###### Downloading a single file\n\n\nThe specific file to download is supplied as the number at the end of the file on HuggingFace. The script will automatically pad the number out and generate the URL.",
"###### Downloading a range of files\n\n\nThe upper and lower bounds of the set of files to download are set by the '-i' and '-r' flags respectively.\n\n\nNote that this range will download the entire dataset. The script will ask you to confirm that you have 1.7Tb free at the download destination.",
"###### Downloading to a specific directory\n\n\nThe script will default to the location of the dataset's 'part' .zip files at 'images/'. If you wish to move the download location, you should move these files as well or use a symbolic link.\n\n\nAgain, the script will automatically add the '/' between the directory and the file when it downloads.",
"###### Setting the files to unzip once they've been downloaded\n\n\nThe script is set to unzip the files *after* all files have downloaded as both can be lengthy processes in certain circumstances.",
"#### Method 3. Use 'metadata.parquet' (Text Only)\n\n\nIf your task does not require images, then you can easily access all 2 million prompts and hyperparameters in the 'metadata.parquet' table.\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nRecent diffusion models have gained immense popularity by enabling high-quality and controllable image generation based on text prompts written in natural language. Since the release of these models, people from different domains have quickly applied them to create award-winning artworks, synthetic radiology images, and even hyper-realistic videos.\n\n\nHowever, generating images with desired details is difficult, as it requires users to write proper prompts specifying the exact expected results. Developing such prompts requires trial and error, and can often feel random and unprincipled. Simon Willison analogizes writing prompts to wizards learning “magical spells”: users do not understand why some prompts work, but they will add these prompts to their “spell book.” For example, to generate highly-detailed images, it has become a common practice to add special keywords such as “trending on artstation” and “unreal engine” in the prompt.\n\n\nPrompt engineering has become a field of study in the context of text-to-text generation, where researchers systematically investigate how to construct prompts to effectively solve different down-stream tasks. As large text-to-image models are relatively new, there is a pressing need to understand how these models react to prompts, how to write effective prompts, and how to design tools to help users generate images.\nTo help researchers tackle these critical challenges, we create DiffusionDB, the first large-scale prompt dataset with 14 million real prompt-image pairs.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nWe construct DiffusionDB by scraping user-generated images on the official Stable Diffusion Discord server. We choose Stable Diffusion because it is currently the only open-source large text-to-image generative model, and all generated images have a CC0 1.0 Universal Public Domain Dedication license that waives all copyright and allows uses for any purpose. We choose the official Stable Diffusion Discord server because it is public, and it has strict rules against generating and sharing illegal, hateful, or NSFW (not suitable for work, such as sexual and violent content) images. The server also disallows users to write or share prompts with personal information.",
"#### Who are the source language producers?\n\n\nThe language producers are users of the official Stable Diffusion Discord server.",
"### Annotations\n\n\nThe dataset does not contain any additional annotations.",
"#### Annotation process\n\n\n[N/A]",
"#### Who are the annotators?\n\n\n[N/A]",
"### Personal and Sensitive Information\n\n\nThe authors removed the discord usernames from the dataset.\nWe decide to anonymize the dataset because some prompts might include sensitive information: explicitly linking them to their creators can cause harm to creators.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThe purpose of this dataset is to help develop better understanding of large text-to-image generative models.\nThe unprecedented scale and diversity of this human-actuated dataset provide exciting research opportunities in understanding the interplay between prompts and generative models, detecting deepfakes, and designing human-AI interaction tools to help users more easily use these models.\n\n\nIt should note that we collect images and their prompts from the Stable Diffusion Discord server. The Discord server has rules against users generating or sharing harmful or NSFW (not suitable for work, such as sexual and violent content) images. The Stable Diffusion model used in the server also has an NSFW filter that blurs the generated images if it detects NSFW content. However, it is still possible that some users had generated harmful images that were not detected by the NSFW filter or removed by the server moderators. Therefore, DiffusionDB can potentially contain these images. To mitigate the potential harm, we provide a Google Form on the DiffusionDB website where users can report harmful or inappropriate images and prompts. We will closely monitor this form and remove reported images and prompts from DiffusionDB.",
"### Discussion of Biases\n\n\nThe 14 million images in DiffusionDB have diverse styles and categories. However, Discord can be a biased data source. Our images come from channels where early users could use a bot to use Stable Diffusion before release. As these users had started using Stable Diffusion before the model was public, we hypothesize that they are AI art enthusiasts and are likely to have experience with other text-to-image generative models. Therefore, the prompting style in DiffusionDB might not represent novice users. Similarly, the prompts in DiffusionDB might not generalize to domains that require specific knowledge, such as medical images.",
"### Other Known Limitations\n\n\nGeneralizability. Previous research has shown a prompt that works well on one generative model might not give the optimal result when used in other models.\nTherefore, different models can need users to write different prompts. For example, many Stable Diffusion prompts use commas to separate keywords, while this pattern is less seen in prompts for DALL-E 2 or Midjourney. Thus, we caution researchers that some research findings from DiffusionDB might not be generalizable to other text-to-image generative models.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nDiffusionDB is created by Jay Wang, Evan Montoya, David Munechika, Alex Yang, Ben Hoover, Polo Chau.",
"### Licensing Information\n\n\nThe DiffusionDB dataset is available under the CC0 1.0 License.\nThe Python code in this repository is available under the MIT License.",
"### Contributions\n\n\nIf you have any questions, feel free to open an issue or contact Jay Wang."
] | [
"TAGS\n#task_categories-text-to-image #task_categories-image-to-text #task_ids-image-captioning #annotations_creators-no-annotation #language_creators-found #multilinguality-multilingual #size_categories-n>1T #source_datasets-original #language-English #license-cc0-1.0 #stable diffusion #prompt engineering #prompts #research paper #arxiv-2210.14896 #region-us \n",
"### Dataset Summary\n\n\nDiffusionDB is the first large-scale text-to-image prompt dataset. It contains 14 million images generated by Stable Diffusion using prompts and hyperparameters specified by real users.\n\n\nDiffusionDB is publicly available at Hugging Face Dataset.",
"### Supported Tasks and Leaderboards\n\n\nThe unprecedented scale and diversity of this human-actuated dataset provide exciting research opportunities in understanding the interplay between prompts and generative models, detecting deepfakes, and designing human-AI interaction tools to help users more easily use these models.",
"### Languages\n\n\nThe text in the dataset is mostly English. It also contains other languages such as Spanish, Chinese, and Russian.",
"### Two Subsets\n\n\nDiffusionDB provides two subsets (DiffusionDB 2M and DiffusionDB Large) to support different needs.",
"##### Key Differences\n\n\n1. Two subsets have a similar number of unique prompts, but DiffusionDB Large has much more images. DiffusionDB Large is a superset of DiffusionDB 2M.\n2. Images in DiffusionDB 2M are stored in 'png' format; images in DiffusionDB Large use a lossless 'webp' format.\n\n\nDataset Structure\n-----------------\n\n\nWe use a modularized file structure to distribute DiffusionDB. The 2 million images in DiffusionDB 2M are split into 2,000 folders, where each folder contains 1,000 images and a JSON file that links these 1,000 images to their prompts and hyperparameters. Similarly, the 14 million images in DiffusionDB Large are split into 14,000 folders.\n\n\nThese sub-folders have names 'part-0xxxxx', and each image has a unique name generated by UUID Version 4. The JSON file in a sub-folder has the same name as the sub-folder. Each image is a 'PNG' file (DiffusionDB 2M) or a lossless 'WebP' file (DiffusionDB Large). The JSON file contains key-value pairs mapping image filenames to their prompts and hyperparameters.",
"### Data Instances\n\n\nFor example, below is the image of 'URL' and its key-value pair in 'URL'.\n\n\n<img width=\"300\" src=\"https://i.URL",
"### Data Fields\n\n\n* key: Unique image name\n* 'p': Prompt\n* 'se': Random seed\n* 'c': CFG Scale (guidance scale)\n* 'st': Steps\n* 'sa': Sampler",
"### Dataset Metadata\n\n\nTo help you easily access prompts and other attributes of images without downloading all the Zip files, we include two metadata tables 'metadata.parquet' and 'metadata-large.parquet' for DiffusionDB 2M and DiffusionDB Large, respectively.\n\n\nThe shape of 'metadata.parquet' is (2000000, 13) and the shape of 'metatable-large.parquet' is (14000000, 13). Two tables share the same schema, and each row represents an image. We store these tables in the Parquet format because Parquet is column-based: you can efficiently query individual columns (e.g., prompts) without reading the entire table.\n\n\nBelow are three random rows from 'metadata.parquet'.",
"#### Metadata Schema\n\n\n'metadata.parquet' and 'metatable-large.parquet' share the same schema.\n\n\n\n\n> \n> Warning\n> Although the Stable Diffusion model has an NSFW filter that automatically blurs user-generated NSFW images, this NSFW filter is not perfect—DiffusionDB still contains some NSFW images. Therefore, we compute and provide the NSFW scores for images and prompts using the state-of-the-art models. The distribution of these scores is shown below. Please decide an appropriate NSFW score threshold to filter out NSFW images before using DiffusionDB in your projects.\n> \n> \n> \n\n\n<img src=\"https://i.URL width=\"100%\">",
"### Data Splits\n\n\nFor DiffusionDB 2M, we split 2 million images into 2,000 folders where each folder contains 1,000 images and a JSON file. For DiffusionDB Large, we split 14 million images into 14,000 folders where each folder contains 1,000 images and a JSON file.",
"### Loading Data Subsets\n\n\nDiffusionDB is large (1.6TB or 6.5 TB)! However, with our modularized file structure, you can easily load a desirable number of images and their prompts and hyperparameters. In the 'URL' notebook, we demonstrate three methods to load a subset of DiffusionDB. Below is a short summary.",
"#### Method 1: Using Hugging Face Datasets Loader\n\n\nYou can use the Hugging Face 'Datasets' library to easily load prompts and images from DiffusionDB. We pre-defined 16 DiffusionDB subsets (configurations) based on the number of instances. You can see all subsets in the Dataset Preview.",
"#### Method 2. Use the PoloClub Downloader\n\n\nThis repo includes a Python downloader 'URL' that allows you to download and load DiffusionDB. You can use it from your command line. Below is an example of loading a subset of DiffusionDB.",
"##### Usage/Examples\n\n\nThe script is run using command-line arguments as follows:\n\n\n* '-i' '--index' - File to download or lower bound of a range of files if '-r' is also set.\n* '-r' '--range' - Upper bound of range of files to download if '-i' is set.\n* '-o' '--output' - Name of custom output directory. Defaults to the current directory if not set.\n* '-z' '--unzip' - Unzip the file/files after downloading\n* '-l' '--large' - Download from Diffusion DB Large. Defaults to Diffusion DB 2M.",
"###### Downloading a single file\n\n\nThe specific file to download is supplied as the number at the end of the file on HuggingFace. The script will automatically pad the number out and generate the URL.",
"###### Downloading a range of files\n\n\nThe upper and lower bounds of the set of files to download are set by the '-i' and '-r' flags respectively.\n\n\nNote that this range will download the entire dataset. The script will ask you to confirm that you have 1.7Tb free at the download destination.",
"###### Downloading to a specific directory\n\n\nThe script will default to the location of the dataset's 'part' .zip files at 'images/'. If you wish to move the download location, you should move these files as well or use a symbolic link.\n\n\nAgain, the script will automatically add the '/' between the directory and the file when it downloads.",
"###### Setting the files to unzip once they've been downloaded\n\n\nThe script is set to unzip the files *after* all files have downloaded as both can be lengthy processes in certain circumstances.",
"#### Method 3. Use 'metadata.parquet' (Text Only)\n\n\nIf your task does not require images, then you can easily access all 2 million prompts and hyperparameters in the 'metadata.parquet' table.\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nRecent diffusion models have gained immense popularity by enabling high-quality and controllable image generation based on text prompts written in natural language. Since the release of these models, people from different domains have quickly applied them to create award-winning artworks, synthetic radiology images, and even hyper-realistic videos.\n\n\nHowever, generating images with desired details is difficult, as it requires users to write proper prompts specifying the exact expected results. Developing such prompts requires trial and error, and can often feel random and unprincipled. Simon Willison analogizes writing prompts to wizards learning “magical spells”: users do not understand why some prompts work, but they will add these prompts to their “spell book.” For example, to generate highly-detailed images, it has become a common practice to add special keywords such as “trending on artstation” and “unreal engine” in the prompt.\n\n\nPrompt engineering has become a field of study in the context of text-to-text generation, where researchers systematically investigate how to construct prompts to effectively solve different down-stream tasks. As large text-to-image models are relatively new, there is a pressing need to understand how these models react to prompts, how to write effective prompts, and how to design tools to help users generate images.\nTo help researchers tackle these critical challenges, we create DiffusionDB, the first large-scale prompt dataset with 14 million real prompt-image pairs.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nWe construct DiffusionDB by scraping user-generated images on the official Stable Diffusion Discord server. We choose Stable Diffusion because it is currently the only open-source large text-to-image generative model, and all generated images have a CC0 1.0 Universal Public Domain Dedication license that waives all copyright and allows uses for any purpose. We choose the official Stable Diffusion Discord server because it is public, and it has strict rules against generating and sharing illegal, hateful, or NSFW (not suitable for work, such as sexual and violent content) images. The server also disallows users to write or share prompts with personal information.",
"#### Who are the source language producers?\n\n\nThe language producers are users of the official Stable Diffusion Discord server.",
"### Annotations\n\n\nThe dataset does not contain any additional annotations.",
"#### Annotation process\n\n\n[N/A]",
"#### Who are the annotators?\n\n\n[N/A]",
"### Personal and Sensitive Information\n\n\nThe authors removed the discord usernames from the dataset.\nWe decide to anonymize the dataset because some prompts might include sensitive information: explicitly linking them to their creators can cause harm to creators.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThe purpose of this dataset is to help develop better understanding of large text-to-image generative models.\nThe unprecedented scale and diversity of this human-actuated dataset provide exciting research opportunities in understanding the interplay between prompts and generative models, detecting deepfakes, and designing human-AI interaction tools to help users more easily use these models.\n\n\nIt should note that we collect images and their prompts from the Stable Diffusion Discord server. The Discord server has rules against users generating or sharing harmful or NSFW (not suitable for work, such as sexual and violent content) images. The Stable Diffusion model used in the server also has an NSFW filter that blurs the generated images if it detects NSFW content. However, it is still possible that some users had generated harmful images that were not detected by the NSFW filter or removed by the server moderators. Therefore, DiffusionDB can potentially contain these images. To mitigate the potential harm, we provide a Google Form on the DiffusionDB website where users can report harmful or inappropriate images and prompts. We will closely monitor this form and remove reported images and prompts from DiffusionDB.",
"### Discussion of Biases\n\n\nThe 14 million images in DiffusionDB have diverse styles and categories. However, Discord can be a biased data source. Our images come from channels where early users could use a bot to use Stable Diffusion before release. As these users had started using Stable Diffusion before the model was public, we hypothesize that they are AI art enthusiasts and are likely to have experience with other text-to-image generative models. Therefore, the prompting style in DiffusionDB might not represent novice users. Similarly, the prompts in DiffusionDB might not generalize to domains that require specific knowledge, such as medical images.",
"### Other Known Limitations\n\n\nGeneralizability. Previous research has shown a prompt that works well on one generative model might not give the optimal result when used in other models.\nTherefore, different models can need users to write different prompts. For example, many Stable Diffusion prompts use commas to separate keywords, while this pattern is less seen in prompts for DALL-E 2 or Midjourney. Thus, we caution researchers that some research findings from DiffusionDB might not be generalizable to other text-to-image generative models.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nDiffusionDB is created by Jay Wang, Evan Montoya, David Munechika, Alex Yang, Ben Hoover, Polo Chau.",
"### Licensing Information\n\n\nThe DiffusionDB dataset is available under the CC0 1.0 License.\nThe Python code in this repository is available under the MIT License.",
"### Contributions\n\n\nIf you have any questions, feel free to open an issue or contact Jay Wang."
] |
37b04e9237bdfaba2f149f437f104f63a6d4f25a | # Dataset Card for "eraser_cose"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | niurl/eraser_cose | [
"region:us"
] | 2022-10-25T02:21:49+00:00 | {"dataset_info": {"features": [{"name": "doc_id", "dtype": "string"}, {"name": "question", "sequence": "string"}, {"name": "query", "dtype": "string"}, {"name": "evidence_span", "sequence": {"sequence": "int64"}}, {"name": "classification", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 282071, "num_examples": 1079}, {"name": "train", "num_bytes": 2316094, "num_examples": 8752}, {"name": "val", "num_bytes": 288029, "num_examples": 1086}], "download_size": 1212369, "dataset_size": 2886194}} | 2022-10-25T02:22:37+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "eraser_cose"
More Information needed | [
"# Dataset Card for \"eraser_cose\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"eraser_cose\"\n\nMore Information needed"
] |
2192eb5fc49e5dda28d7e3ea9aa4cd35ab00ef5b |
# Dataset Card for COPA-SSE
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/a-brassard/copa-sse
- **Paper:** [COPA-SSE: Semi-Structured Explanations for Commonsense Reasoning](https://arxiv.org/abs/2201.06777)
- **Point of Contact:** [Ana Brassard](mailto:[email protected])
### Dataset Summary

COPA-SSE contains crowdsourced explanations for the [Balanced COPA](https://balanced-copa.github.io/) dataset, a variant of the [Choice of Plausible Alternatives (COPA)](https://people.ict.usc.edu/~gordon/copa.html) benchmark. The explanations are formatted as a set of triple-like common sense statements with [ConceptNet](https://conceptnet.io/) relations but freely written concepts.
### Supported Tasks and Leaderboards
Can be used to train a model for explain+predict or predict+explain settings. Suited for both text-based and graph-based architectures. Base task is COPA (causal QA).
### Languages
English
## Dataset Structure
### Data Instances
The validation and test sets each contain Balanced COPA samples with added explanations in `.jsonl` format. The question ids match the original questions of the Balanced COPA validation and test sets, respectively.
### Data Fields
Each entry contains:
- the original question (matching format and ids)
- `human-explanations`: a list of explanations each containing:
- `expl-id`: the explanation id
- `text`: the explanation in plain text (full sentences)
- `worker-id`: anonymized worker id (the author of the explanation)
- `worker-avg`: the average score the author got for their explanations
- `all-ratings`: all collected ratings for the explanation
- `filtered-ratings`: ratings excluding those that failed the control
- `triples`: the triple-form explanation (a list of ConceptNet-like triples)
Example entry:
```
{
  "id": 1,
  "asks-for": "cause",
  "most-plausible-alternative": 1,
  "p": "My body cast a shadow over the grass.",
  "a1": "The sun was rising.",
  "a2": "The grass was cut.",
  "human-explanations": [
    {"expl-id": "f4d9b407-681b-4340-9be1-ac044f1c2230",
     "text": "Sunrise causes casted shadows.",
     "worker-id": "3a71407b-9431-49f9-b3ca-1641f7c05f3b",
     "worker-avg": 3.5832864694635025,
     "all-ratings": [1, 3, 3, 4, 3],
     "filtered-ratings": [3, 3, 4, 3],
     "filtered-avg-rating": 3.25,
     "triples": [["sunrise", "Causes", "casted shadows"]]
    }, ...]
}
```
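Since each split is stored as plain JSON Lines, it can be consumed with standard tooling. Below is a minimal sketch that picks the highest-rated explanation per question; the file name `dev.jsonl` is an assumption, so substitute the actual file from the repository.

```python
import json

# Hypothetical file name: substitute the actual .jsonl file from the repository.
with open("dev.jsonl", encoding="utf-8") as f:
    samples = [json.loads(line) for line in f]

# Select the highest-rated explanation for each question.
for sample in samples[:5]:
    best = max(sample["human-explanations"], key=lambda e: e["filtered-avg-rating"])
    print(sample["p"], "->", best["text"])
```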
### Data Splits
Follows original Balanced COPA split: 1000 dev and 500 test instances. Each instance has up to nine explanations.
## Dataset Creation
### Curation Rationale
The goal was to collect human-written explanations to supplement an existing commonsense reasoning benchmark. The triple-like format was designed to support graph-based models and increase the overall data quality, the latter being notoriously lacking in freely-written crowdsourced text.
### Source Data
#### Initial Data Collection and Normalization
The explanations in COPA-SSE are fully crowdsourced via the Amazon Mechanical Turk platform. Workers entered explanations by providing one or more concept-relation-concept triples. The explanations were then rated by different annotators with one- to five-star ratings. The final dataset contains explanations with a range of quality ratings. Additional collection rounds guaranteed that each sample has at least one explanation rated 3.5 stars or higher.
#### Who are the source language producers?
The original COPA questions (500 dev+500 test) were initially hand-crafted by experts. Similarly, the additional 500 development samples in Balanced COPA were authored by a small team of NLP researchers. Finally, the added explanations and quality ratings in COPA-SSE were collected with the help of Amazon Mechanical Turk workers who passed initial qualification rounds.
### Annotations
#### Annotation process
Workers were shown a Balanced COPA question, its answer, and a short instructional text. Then, they filled in free-form text fields for head and tail concepts and selected the relation from a drop-down menu with a curated selection of ConceptNet relations. Each explanation was rated by five different workers who were shown the same question and answer with five candidate explanations.
#### Who are the annotators?
The workers were restricted to persons located in the U.S. or G.B., with a HIT approval rate of 98% or more, and 500 or more approved HITs. Their identity and further personal information are not available.
### Personal and Sensitive Information
N/A
## Considerations for Using the Data
### Social Impact of Dataset
Models trained to output similar explanations as those in COPA-SSE may not necessarily provide convincing or faithful explanations. Researchers should carefully evaluate the resulting explanations before considering any real-world applications.
### Discussion of Biases
COPA questions ask for causes or effects of everyday actions or interactions, some of them containing gendered language. Some explanations may reinforce harmful stereotypes if their reasoning is based on biased assumptions. These biases were not verified during collection.
### Other Known Limitations
The data was originally intended to be explanation *graphs*, i.e., hypothetical "ideal" subgraphs of a commonsense knowledge graph. While they can still function as valid natural language explanations, their wording may be at times unnatural to a human and may be better suited for graph-based implementations.
## Additional Information
### Dataset Curators
This work was authored by Ana Brassard, Benjamin Heinzerling, Pride Kavumba, and Kentaro Inui. All are members of both the RIKEN AIP Natural Language Understanding Team and the Tohoku NLP Lab at Tohoku University.
### Licensing Information
COPA-SSE is released under the [MIT License](https://mit-license.org/).
### Citation Information
```
@InProceedings{copa-sse:LREC2022,
author = {Brassard, Ana and Heinzerling, Benjamin and Kavumba, Pride and Inui, Kentaro},
title = {COPA-SSE: Semi-structured Explanations for Commonsense Reasoning},
booktitle = {Proceedings of the Language Resources and Evaluation Conference},
month = {June},
year = {2022},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {3994--4000},
url = {https://aclanthology.org/2022.lrec-1.425}
}
```
### Contributions
Thanks to [@a-brassard](https://github.com/a-brassard) for adding this dataset. | anab/copa-sse | [
"task_categories:text2text-generation",
"task_categories:multiple-choice",
"task_ids:explanation-generation",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"commonsense reasoning",
"explanation",
"graph-based reasoning",
"arxiv:2201.06777",
"region:us"
] | 2022-10-25T06:11:33+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["text2text-generation", "multiple-choice"], "task_ids": ["explanation-generation"], "pretty_name": "Semi-structured Explanations for Commonsense Reasoning", "tags": ["commonsense reasoning", "explanation", "graph-based reasoning"]} | 2022-10-26T00:53:17+00:00 | [
"2201.06777"
] | [
"en"
] | TAGS
#task_categories-text2text-generation #task_categories-multiple-choice #task_ids-explanation-generation #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #language-English #license-mit #commonsense reasoning #explanation #graph-based reasoning #arxiv-2201.06777 #region-us
|
# Dataset Card for COPA-SSE
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Paper: COPA-SSE: Semi-Structured Explanations for Commonsense Reasoning
- Point of Contact: Ana Brassard
### Dataset Summary
!Crowdsourcing protocol
COPA-SSE contains crowdsourced explanations for the Balanced COPA dataset, a variant of the Choice of Plausible Alternatives (COPA) benchmark. The explanations are formatted as a set of triple-like common sense statements with ConceptNet relations but freely written concepts.
### Supported Tasks and Leaderboards
Can be used to train a model for explain+predict or predict+explain settings. Suited for both text-based and graph-based architectures. Base task is COPA (causal QA).
### Languages
English
## Dataset Structure
### Data Instances
The validation and test sets each contain Balanced COPA samples with added explanations in '.jsonl' format. The question ids match the original questions of the Balanced COPA validation and test sets, respectively.
### Data Fields
Each entry contains:
- the original question (matching format and ids)
- 'human-explanations': a list of explanations each containing:
- 'expl-id': the explanation id
- 'text': the explanation in plain text (full sentences)
- 'worker-id': anonymized worker id (the author of the explanation)
- 'worker-avg': the average score the author got for their explanations
- 'all-ratings': all collected ratings for the explanation
- 'filtered-ratings': ratings excluding those that failed the control
- 'triples': the triple-form explanation (a list of ConceptNet-like triples)
Example entry:
### Data Splits
Follows original Balanced COPA split: 1000 dev and 500 test instances. Each instance has up to nine explanations.
## Dataset Creation
### Curation Rationale
The goal was to collect human-written explanations to supplement an existing commonsense reasoning benchmark. The triple-like format was designed to support graph-based models and increase the overall data quality, the latter being notoriously lacking in freely-written crowdsourced text.
### Source Data
#### Initial Data Collection and Normalization
The explanations in COPA-SSE are fully crowdsourced via the Amazon Mechanical Turk platform. Workers entered explanations by providing one or more concept-relation-concept triples. The explanations were then rated by different annotators with one- to five-star ratings. The final dataset contains explanations with a range of quality ratings. Additional collection rounds guaranteed that each sample has at least one explanation rated 3.5 stars or higher.
#### Who are the source language producers?
The original COPA questions (500 dev+500 test) were initially hand-crafted by experts. Similarly, the additional 500 development samples in Balanced COPA were authored by a small team of NLP researchers. Finally, the added explanations and quality ratings in COPA-SSE were collected with the help of Amazon Mechanical Turk workers who passed initial qualification rounds.
### Annotations
#### Annotation process
Workers were shown a Balanced COPA question, its answer, and a short instructional text. Then, they filled in free-form text fields for head and tail concepts and selected the relation from a drop-down menu with a curated selection of ConceptNet relations. Each explanation was rated by five different workers who were shown the same question and answer with five candidate explanations.
#### Who are the annotators?
The workers were restricted to persons located in the U.S. or G.B., with a HIT approval rate of 98% or more, and 500 or more approved HITs. Their identity and further personal information are not available.
### Personal and Sensitive Information
N/A
## Considerations for Using the Data
### Social Impact of Dataset
Models trained to output similar explanations as those in COPA-SSE may not necessarily provide convincing or faithful explanations. Researchers should carefully evaluate the resulting explanations before considering any real-world applications.
### Discussion of Biases
COPA questions ask for causes or effects of everyday actions or interactions, some of them containing gendered language. Some explanations may reinforce harmful stereotypes if their reasoning is based on biased assumptions. These biases were not verified during collection.
### Other Known Limitations
The data was originally intended to be explanation *graphs*, i.e., hypothetical "ideal" subgraphs of a commonsense knowledge graph. While they can still function as valid natural language explanations, their wording may be at times unnatural to a human and may be better suited for graph-based implementations.
## Additional Information
### Dataset Curators
This work was authored by Ana Brassard, Benjamin Heinzerling, Pride Kavumba, and Kentaro Inui. All are members of both the RIKEN AIP Natural Language Understanding Team and the Tohoku NLP Lab at Tohoku University.
### Licensing Information
COPA-SSE is released under the MIT License.
### Contributions
Thanks to @a-brassard for adding this dataset. | [
"# Dataset Card for COPA-SSE",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Paper: COPA-SSE: Semi-Structured Explanations for Commonsense Reasoning\n- Point of Contact: Ana Brassard",
"### Dataset Summary\n\n!Crowdsourcing protocol\n\nCOPA-SSE contains crowdsourced explanations for the Balanced COPA dataset, a variant of the Choice of Plausible Alternatives (COPA) benchmark. The explanations are formatted as a set of triple-like common sense statements with ConceptNet relations but freely written concepts.",
"### Supported Tasks and Leaderboards\n\nCan be used to train a model for explain+predict or predict+explain settings. Suited for both text-based and graph-based architectures. Base task is COPA (causal QA).",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances\n\nValidation and test set each contains Balanced COPA samples with added explanations in '.jsonl' format. The question ids match the original questions of the Balanced COPA validation and test sets, respectively.",
"### Data Fields\nEach entry contains:\n- the original question (matching format and ids)\n- 'human-explanations': a list of explanations each containing:\n - 'expl-id': the explanation id\n - 'text': the explanation in plain text (full sentences)\n - 'worker-id': anonymized worker id (the author of the explanation) \n - 'worker-avg': the average score the author got for their explanations\n - 'all-ratings': all collected ratings for the explanation\n - 'filtered-ratings': ratings excluding those that failed the control\n - 'triples': the triple-form explanation (a list of ConceptNet-like triples)\n\nExample entry:",
"### Data Splits\n\nFollows original Balanced COPA split: 1000 dev and 500 test instances. Each instance has up to nine explanations.",
"## Dataset Creation",
"### Curation Rationale\n\nThe goal was to collect human-written explanations to supplement an existing commonsense reasoning benchmark. The triple-like format was designed to support graph-based models and increase the overall data quality, the latter being notoriously lacking in freely-written crowdsourced text.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nThe explanations in COPA-SSE are fully crowdsourced via the Amazon Mechanical Turk platform. Workers entered explanations by providing one or more concept-relation-concept triples. The explanations were then rated by different annotators with one- to five-star ratings. The final dataset contains explanations with a range of quality ratings. Additional collection rounds guaranteed that each sample has at least one explanation rated 3.5 stars or higher.",
"#### Who are the source language producers?\n\nThe original COPA questions (500 dev+500 test) were initially hand-crafted by experts. Similarly, the additional 500 development samples in Balanced COPA were authored by a small team of NLP researchers. Finally, the added explanations and quality ratings in COPA-SSE were collected with the help of Amazon Mechanical Turk workers who passed initial qualification rounds.",
"### Annotations",
"#### Annotation process\n\nWorkers were shown a Balanced COPA question, its answer, and a short instructional text. Then, they filled in free-form text fields for head and tail concepts and selected the relation from a drop-down menu with a curated selection of ConceptNet relations. Each explanation was rated by five different workers who were shown the same question and answer with five candidate explanations.",
"#### Who are the annotators?\n\nThe workers were restricted to persons located in the U.S. or G.B., with a HIT approval of 98% or more, and 500 or more approved HITs. Their identity and further personal information are not available.",
"### Personal and Sensitive Information\n\nN/A",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nModels trained to output similar explanations as those in COPA-SSE may not necessarily provide convincing or faithful explanations. Researchers should carefully evaluate the resulting explanations before considering any real-world applications.",
"### Discussion of Biases\n\nCOPA questions ask for causes or effects of everyday actions or interactions, some of them containing gendered language. Some explanations may reinforce harmful stereotypes if their reasoning is based on biased assumptions. These biases were not verified during collection.",
"### Other Known Limitations\n\nThe data was originally intended to be explanation *graphs*, i.e., hypothetical \"ideal\" subgraphs of a commonsense knowledge graph. While they can still function as valid natural language explanations, their wording may be at times unnatural to a human and may be better suited for graph-based implementations.",
"## Additional Information",
"### Dataset Curators\n\nThis work was authored by Ana Brassard, Benjamin Heinzerling, Pride Kavumba, and Kentaro Inui. All are both members of the Riken AIP Natural Language Understanding Team and the Tohoku NLP Lab under Tohoku University.",
"### Licensing Information\n\nCOPA-SSE is released under the MIT License.",
"### Contributions\n\nThanks to @a-brassard for adding this dataset."
] | [
"TAGS\n#task_categories-text2text-generation #task_categories-multiple-choice #task_ids-explanation-generation #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #language-English #license-mit #commonsense reasoning #explanation #graph-based reasoning #arxiv-2201.06777 #region-us \n",
"# Dataset Card for COPA-SSE",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Paper: COPA-SSE: Semi-Structured Explanations for Commonsense Reasoning\n- Point of Contact: Ana Brassard",
"### Dataset Summary\n\n!Crowdsourcing protocol\n\nCOPA-SSE contains crowdsourced explanations for the Balanced COPA dataset, a variant of the Choice of Plausible Alternatives (COPA) benchmark. The explanations are formatted as a set of triple-like common sense statements with ConceptNet relations but freely written concepts.",
"### Supported Tasks and Leaderboards\n\nCan be used to train a model for explain+predict or predict+explain settings. Suited for both text-based and graph-based architectures. Base task is COPA (causal QA).",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances\n\nValidation and test set each contains Balanced COPA samples with added explanations in '.jsonl' format. The question ids match the original questions of the Balanced COPA validation and test sets, respectively.",
"### Data Fields\nEach entry contains:\n- the original question (matching format and ids)\n- 'human-explanations': a list of explanations each containing:\n - 'expl-id': the explanation id\n - 'text': the explanation in plain text (full sentences)\n - 'worker-id': anonymized worker id (the author of the explanation) \n - 'worker-avg': the average score the author got for their explanations\n - 'all-ratings': all collected ratings for the explanation\n - 'filtered-ratings': ratings excluding those that failed the control\n - 'triples': the triple-form explanation (a list of ConceptNet-like triples)\n\nExample entry:",
"### Data Splits\n\nFollows original Balanced COPA split: 1000 dev and 500 test instances. Each instance has up to nine explanations.",
"## Dataset Creation",
"### Curation Rationale\n\nThe goal was to collect human-written explanations to supplement an existing commonsense reasoning benchmark. The triple-like format was designed to support graph-based models and increase the overall data quality, the latter being notoriously lacking in freely-written crowdsourced text.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nThe explanations in COPA-SSE are fully crowdsourced via the Amazon Mechanical Turk platform. Workers entered explanations by providing one or more concept-relation-concept triples. The explanations were then rated by different annotators with one- to five-star ratings. The final dataset contains explanations with a range of quality ratings. Additional collection rounds guaranteed that each sample has at least one explanation rated 3.5 stars or higher.",
"#### Who are the source language producers?\n\nThe original COPA questions (500 dev+500 test) were initially hand-crafted by experts. Similarly, the additional 500 development samples in Balanced COPA were authored by a small team of NLP researchers. Finally, the added explanations and quality ratings in COPA-SSE were collected with the help of Amazon Mechanical Turk workers who passed initial qualification rounds.",
"### Annotations",
"#### Annotation process\n\nWorkers were shown a Balanced COPA question, its answer, and a short instructional text. Then, they filled in free-form text fields for head and tail concepts and selected the relation from a drop-down menu with a curated selection of ConceptNet relations. Each explanation was rated by five different workers who were shown the same question and answer with five candidate explanations.",
"#### Who are the annotators?\n\nThe workers were restricted to persons located in the U.S. or G.B., with a HIT approval of 98% or more, and 500 or more approved HITs. Their identity and further personal information are not available.",
"### Personal and Sensitive Information\n\nN/A",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nModels trained to output similar explanations as those in COPA-SSE may not necessarily provide convincing or faithful explanations. Researchers should carefully evaluate the resulting explanations before considering any real-world applications.",
"### Discussion of Biases\n\nCOPA questions ask for causes or effects of everyday actions or interactions, some of them containing gendered language. Some explanations may reinforce harmful stereotypes if their reasoning is based on biased assumptions. These biases were not verified during collection.",
"### Other Known Limitations\n\nThe data was originally intended to be explanation *graphs*, i.e., hypothetical \"ideal\" subgraphs of a commonsense knowledge graph. While they can still function as valid natural language explanations, their wording may be at times unnatural to a human and may be better suited for graph-based implementations.",
"## Additional Information",
"### Dataset Curators\n\nThis work was authored by Ana Brassard, Benjamin Heinzerling, Pride Kavumba, and Kentaro Inui. All are both members of the Riken AIP Natural Language Understanding Team and the Tohoku NLP Lab under Tohoku University.",
"### Licensing Information\n\nCOPA-SSE is released under the MIT License.",
"### Contributions\n\nThanks to @a-brassard for adding this dataset."
] |
d2ee25d7fb18334d410a678499a94afede8ec4f4 | # FindZebra corpus
A collection of 30,658 curated articles about rare diseases gathered from GARD, GeneReviews, Genetics Home Reference, OMIM, Orphanet, and Wikipedia. Each article is referenced with a Concept Unique Identifier ([CUI](https://www.nlm.nih.gov/research/umls/new_users/online_learning/Meta_005.html)).
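A minimal loading sketch with the Hugging Face `datasets` library is shown below; the split name `train` is an assumption, while the `text` column is documented in the preprocessing section.

```python
from datasets import load_dataset

corpus = load_dataset("findzebra/corpus", split="train")  # split name is an assumption
article = corpus[0]
print(article["text"][:300])  # processed plain-text body of the article
```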
## Preprocessing
The raw HTML content of each article has been processed using the following code (`text` column):
```python
# Convert each article's raw HTML into plain text.
import math

import html2text

parser = html2text.HTML2Text()
parser.ignore_links = True      # drop hyperlinks
parser.ignore_images = True     # drop images
parser.ignore_tables = True     # drop tables
parser.ignore_emphasis = True   # drop bold/italic markers
parser.body_width = math.inf    # disable line wrapping

# `article_html` holds the raw HTML content of one article.
article_text = parser.handle(article_html)
``` | findzebra/corpus | [
"region:us"
] | 2022-10-25T07:05:58+00:00 | {} | 2022-10-25T08:58:33+00:00 | [] | [] | TAGS
#region-us
| # FindZebra corpus
A collection of 30,658 curated articles about rare diseases gathered from GARD, GeneReviews, Genetics Home Reference, OMIM, Orphanet, and Wikipedia. Each article is referenced with a Concept Unique Identifier (CUI).
## Preprocessing
The raw HTML content of each article has been processed using the following code ('text' column):
| [
"# FindZebra corpus\n\nA collection of 30.658 curated articles about rare diseases gathered from GARD, GeneReviews, Genetics Home Reference, OMIM, Orphanet, and Wikipedia. Each article is referenced with a Concept Unique Identifier (CUI).",
"## Preprocessing\n\nThe raw HTML content of each article has been processed using the following code ('text' column):"
] | [
"TAGS\n#region-us \n",
"# FindZebra corpus\n\nA collection of 30.658 curated articles about rare diseases gathered from GARD, GeneReviews, Genetics Home Reference, OMIM, Orphanet, and Wikipedia. Each article is referenced with a Concept Unique Identifier (CUI).",
"## Preprocessing\n\nThe raw HTML content of each article has been processed using the following code ('text' column):"
] |
91b1380fc7ff16a970b8b240e56c427b5638087a |
# Lightning Style Embedding / Textual Inversion
## Usage
To use this embedding you have to download the file and drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"art by lightning_style"```
If it is too strong, just add [] around it.
Trained until 10000 steps
I added a 7.5k-steps version in the files as well. If you want to use that version, remove the ```"-7500"``` from the file name and replace the 10k-steps version in your folder
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/HNHRcZg.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/8B31Umz.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/88sHalA.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/WhlLomb.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/a1Usv3u.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights to the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | Nerfgun3/lightning_style | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"region:us"
] | 2022-10-25T08:56:21+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false} | 2022-10-25T09:05:17+00:00 | [] | [
"en"
] | TAGS
#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us
| Lightning Style Embedding / Textual Inversion
=============================================
Usage
-----
To use this embedding you have to download the file and drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt:
If it is too strong, just add [] around it.
Trained until 10000 steps
I added a 7.5k-steps version in the files as well. If you want to use that version, remove the "-7500" suffix from the file name and replace the 10k-steps version in your folder
Have fun :)
Example Pictures
----------------
License
-------
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights to the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license here
| [] | [
"TAGS\n#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us \n"
] |
8552aab8a6e2bb55739fba702171fd1a4a12d181 | # FindZebra Queries
A set of 248 search queries annotated with the correct diagnosis. The diagnosis is referenced with a Concept Unique Identifier ([CUI](https://www.nlm.nih.gov/research/umls/new_users/online_learning/Meta_005.html)). In a retrieval setting, the task consists of retrieving an article from the [FindZebra corpus](https://huggingface.co/datasets/findzebra/corpus) with a CUI that matches the query CUI. | findzebra/queries | [
"region:us"
] | 2022-10-25T08:58:49+00:00 | {} | 2022-10-25T09:02:34+00:00 | [] | [] | TAGS
#region-us
| # FindZebra Queries
A set of 248 search queries annotated with the correct diagnosis. The diagnosis is referenced with a Concept Unique Identifier (CUI). In a retrieval setting, the task consists of retrieving an article from the FindZebra corpus with a CUI that matches the query CUI. | [
"# FindZebra Queries\n\nA set of 248 search queries annotated with the correct diagnosis. The diagnosis is referenced with a Concept Unique Identifier (CUI). In a retrieval setting, the task consists of retrieving an article from the FindZebra corpus with a CUI that matches the query CUI."
] | [
"TAGS\n#region-us \n",
"# FindZebra Queries\n\nA set of 248 search queries annotated with the correct diagnosis. The diagnosis is referenced with a Concept Unique Identifier (CUI). In a retrieval setting, the task consists of retrieving an article from the FindZebra corpus with a CUI that matches the query CUI."
] |
25700c3e831b26e4224a7c14b226e8cccdf3839f | # Dataset Card for "sv_corpora_parliament_processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | juanhebert/sv_corpora_parliament_processed | [
"region:us"
] | 2022-10-25T09:51:07+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 292359009, "num_examples": 1892723}], "download_size": 158940474, "dataset_size": 292359009}} | 2022-11-03T10:21:27+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "sv_corpora_parliament_processed"
More Information needed | [
"# Dataset Card for \"sv_corpora_parliament_processed\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"sv_corpora_parliament_processed\"\n\nMore Information needed"
] |
155b325de98e02bb6286fce64282d2c4c30a1b41 | ## Dataset Description
- **Homepage:** https://www.darrow.ai/
- **Repository:** https://github.com/darrow-labs/ClassActionPrediction
- **Paper:** https://arxiv.org/abs/2211.00582
- **Leaderboard:** N/A
- **Point of Contact:** [Gila Hayat](mailto:[email protected])
### Dataset Summary
USClassActions is an English dataset of 200 complaints from the US Federal Court with the respective binarized judgment outcome (Win/Lose). The dataset poses a challenging text classification task. We are happy to share this dataset in order to promote robustness and fairness studies in the critical area of legal NLP. The data was annotated using Darrow.ai's proprietary tool.
### Data Instances
```python
from datasets import load_dataset
dataset = load_dataset('darrow-ai/USClassActionOutcomes_ExpertsAnnotations')
```
### Data Fields
`id`: (**int**) a unique identifier of the document \
`origin_label`: (**str**) the outcome of the case \
`target_text`: (**str**) the facts of the case \
`annotator_prediction`: (**str**) the annotator's prediction of the case outcome based on the `target_text` \
`annotator_confidence`: (**str**) the annotator's level of confidence in their outcome prediction
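As a quick sanity check, the expert predictions can be compared against the actual outcomes. Below is a minimal sketch; the split name `train` is an assumption, and the field names are taken from the list above.

```python
from datasets import load_dataset

# Split name is an assumption; check the repository for the actual split(s).
ds = load_dataset("darrow-ai/USClassActionOutcomes_ExpertsAnnotations", split="train")

correct = sum(ex["annotator_prediction"] == ex["origin_label"] for ex in ds)
print(f"Expert accuracy: {correct / len(ds):.2%}")
```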
### Curation Rationale
The dataset was curated by Darrow.ai (2022).
### Citation Information
*Gil Semo, Dor Bernsohn, Ben Hagag, Gila Hayat, and Joel Niklaus*
*ClassActionPrediction: A Challenging Benchmark for Legal Judgment Prediction of Class Action Cases in the US*
*Proceedings of the 2022 Natural Legal Language Processing Workshop. Abu Dhabi. 2022*
```
@InProceedings{darrow-niklaus-2022-uscp,
author = {Semo, Gil
and Bernsohn, Dor
and Hagag, Ben
and Hayat, Gila
and Niklaus, Joel},
title = {ClassActionPrediction: A Challenging Benchmark for Legal Judgment Prediction of Class Action Cases in the US},
booktitle = {Proceedings of the 2022 Natural Legal Language Processing Workshop},
year = {2022},
location = {Abu Dhabi},
}
```
| darrow-ai/USClassActionOutcomes_ExpertsAnnotations | [
"license:gpl-3.0",
"arxiv:2211.00582",
"region:us"
] | 2022-10-25T11:43:36+00:00 | {"license": "gpl-3.0"} | 2022-11-06T12:35:30+00:00 | [
"2211.00582"
] | [] | TAGS
#license-gpl-3.0 #arxiv-2211.00582 #region-us
| ## Dataset Description
- Homepage: URL
- Repository: URL
- Paper: URL
- Leaderboard: N/A
- Point of Contact: Gila Hayat
### Dataset Summary
USClassActions is an English dataset of 200 complaints from the US Federal Court with the respective binarized judgment outcome (Win/Lose). The dataset poses a challenging text classification task. We are happy to share this dataset in order to promote robustness and fairness studies in the critical area of legal NLP. The data was annotated using URL's proprietary tool.
### Data Instances
### Data Fields
'id': (int) a unique identifier of the document \
'origin_label': (str) the outcome of the case \
'target_text': (str) the facts of the case \
'annotator_prediction': (str) the annotator's prediction of the case outcome based on the target_text \
'annotator_confidence': (str) the annotator's level of confidence in their outcome prediction
### Curation Rationale
The dataset was curated by URL (2022).
*Gil Semo, Dor Bernsohn, Ben Hagag, Gila Hayat, and Joel Niklaus*
*ClassActionPrediction: A Challenging Benchmark for Legal Judgment Prediction of Class Action Cases in the US*
*Proceedings of the 2022 Natural Legal Language Processing Workshop. Abu Dhabi. 2022*
| [
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: N/A\n- Point of Contact: Gila Hayat",
"### Dataset Summary\n\nUSClassActions is an English dataset of 200 complaints from the US Federal Court with the respective binarized judgment outcome (Win/Lose). The dataset poses a challenging text classification task. We are happy to share this dataset in order to promote robustness and fairness studies on the critical area of legal NLP. The data was annotated using URL proprietary tool.",
"### Data Instances",
"### Data Fields\n'id': (int) a unique identifier of the document \\\n'origin_label ': (str) the outcome of the case \\\n'target_text': (str) the facts of the case \\\n'annotator_prediction ': (str) annotators predictions of the case outcome based on the target_text \\\n'annotator_confidence ': (str) the annotator's level of confidence in his outcome prediction \\",
"### Curation Rationale\n\nThe dataset was curated by URL (2022).\n\n\n\n*Gil Semo, Dor Bernsohn, Ben Hagag, Gila Hayat, and Joel Niklaus*\n*ClassActionPrediction: A Challenging Benchmark for Legal Judgment Prediction of Class Action Cases in the US*\n*Proceedings of the 2022 Natural Legal Language Processing Workshop. Abu Dhabi. 2022*"
] | [
"TAGS\n#license-gpl-3.0 #arxiv-2211.00582 #region-us \n",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: N/A\n- Point of Contact: Gila Hayat",
"### Dataset Summary\n\nUSClassActions is an English dataset of 200 complaints from the US Federal Court with the respective binarized judgment outcome (Win/Lose). The dataset poses a challenging text classification task. We are happy to share this dataset in order to promote robustness and fairness studies on the critical area of legal NLP. The data was annotated using URL proprietary tool.",
"### Data Instances",
"### Data Fields\n'id': (int) a unique identifier of the document \\\n'origin_label ': (str) the outcome of the case \\\n'target_text': (str) the facts of the case \\\n'annotator_prediction ': (str) annotators predictions of the case outcome based on the target_text \\\n'annotator_confidence ': (str) the annotator's level of confidence in his outcome prediction \\",
"### Curation Rationale\n\nThe dataset was curated by URL (2022).\n\n\n\n*Gil Semo, Dor Bernsohn, Ben Hagag, Gila Hayat, and Joel Niklaus*\n*ClassActionPrediction: A Challenging Benchmark for Legal Judgment Prediction of Class Action Cases in the US*\n*Proceedings of the 2022 Natural Legal Language Processing Workshop. Abu Dhabi. 2022*"
] |
ee9af9cb8db048248c9a0665691bfc6903d09113 |
# Dataset Card for CLARA-MeD
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://clara-nlp.uned.es/home/med/](https://clara-nlp.uned.es/home/med/)
- **Repository:** [https://github.com/lcampillos/CLARA-MeD](https://github.com/lcampillos/CLARA-MeD)
- **Paper:** [http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6439](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6439)
- **DOI:** [https://doi.org/10.20350/digitalCSIC/14644](https://doi.org/10.20350/digitalCSIC/14644)
- **Point of Contact:** [Leonardo Campillos-Llanos]([email protected])
### Dataset Summary
A parallel corpus with a subset of 3,800 sentence pairs of professional and layperson variants (149,862 tokens) as a benchmark for medical text simplification. This dataset was collected in the CLARA-MeD project, with the goal of simplifying medical texts in the Spanish language and reducing the language barrier to patients' informed decision-making.
### Supported Tasks and Leaderboards
Medical text simplification
### Languages
Spanish
## Dataset Structure
### Data Instances
For each instance, there is a string for the source text (professional version), and a string for the target text (simplified version).
```
{'SOURCE': 'adenocarcinoma ductal de páncreas'
'TARGET': 'Cáncer de páncreas'}
```
### Data Fields
- `SOURCE`: a string containing the professional version.
- `TARGET`: a string containing the simplified version.
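A minimal loading sketch with the `datasets` library; the split name `train` is an assumption:

```python
from datasets import load_dataset

clara = load_dataset("CLARA-MeD/CLARA-MeD", split="train")  # split name is an assumption
pair = clara[0]
print(pair["SOURCE"], "->", pair["TARGET"])
```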
## Dataset Creation
### Source Data
#### Who are the source language producers?
1. Drug leaflets and summaries of product characteristics from [CIMA](https://cima.aemps.es)
2. Cancer-related information summaries from the [National Cancer Institute](https://www.cancer.gov/)
3. Clinical trials announcements from [EudraCT](https://www.clinicaltrialsregister.eu/)
### Annotations
#### Annotation process
Semi-automatic alignment of technical and patient versions of medical sentences. Inter-annotator agreement was measured with Cohen's Kappa (average Kappa = 0.839 ± 0.076; very high agreement).
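For reference, agreement of this kind can be computed with scikit-learn; the labels below are purely illustrative and not taken from the corpus.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical alignment decisions (1 = aligned, 0 = not aligned) by two annotators.
annotator_a = [1, 1, 0, 1, 0, 1]
annotator_b = [1, 1, 0, 0, 0, 1]
print(cohen_kappa_score(annotator_a, annotator_b))  # 1.0 would be perfect agreement
```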
#### Who are the annotators?
Leonardo Campillos-Llanos
Adrián Capllonch-Carrión
Ana Rosa Terroba-Reinares
Ana Valverde-Mateos
Sofía Zakhir-Puig
### Personal and Sensitive Information
No personal and sensitive information was used.
### Licensing Information
These data are aimed at research and educational purposes, and released under a Creative Commons Non-Commercial Attribution (CC-BY-NC-A) 4.0 International License.
### Citation Information
Campillos Llanos, L., Terroba Reinares, A. R., Zakhir Puig, S., Valverde, A., & Capllonch-Carrión, A. (2022). Building a comparable corpus and a benchmark for Spanish medical text simplification. *Procesamiento del lenguaje natural*, 69, pp. 189–196.
### Contributions
Thanks to [Jónathan Heras from Universidad de La Rioja](http://www.unirioja.es/cu/joheras) ([@joheras](https://github.com/joheras)) for formatting this dataset for Hugging Face.
| CLARA-MeD/CLARA-MeD | [
"license:cc-by-nc-4.0",
"region:us"
] | 2022-10-25T13:26:10+00:00 | {"license": "cc-by-nc-4.0"} | 2022-10-25T13:54:04+00:00 | [] | [] | TAGS
#license-cc-by-nc-4.0 #region-us
|
# Dataset Card for CLARA-MeD
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Dataset Creation
- Source Data
- Annotations
- Personal and Sensitive Information
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper: URL
- DOI: URL
- Point of Contact: Leonardo Campillos-Llanos
### Dataset Summary
A parallel corpus with a subset of 3,800 sentence pairs of professional and layperson variants (149,862 tokens) as a benchmark for medical text simplification. This dataset was collected in the CLARA-MeD project, with the goal of simplifying medical texts in the Spanish language and reducing the language barrier to patients' informed decision-making.
### Supported Tasks and Leaderboards
Medical text simplification
### Languages
Spanish
## Dataset Structure
### Data Instances
For each instance, there is a string for the source text (professional version), and a string for the target text (simplified version).
### Data Fields
- 'SOURCE': a string containing the professional version.
- 'TARGET': a string containing the simplified version.
## Dataset Creation
### Source Data
#### Who are the source language producers?
1. Drug leaflets and summaries of product characteristics from CIMA
2. Cancer-related information summaries from the National Cancer Institute
3. Clinical trials announcements from EudraCT
### Annotations
#### Annotation process
Semi-automatic alignment of technical and patient versions of medical sentences. Inter-annotator agreement was measured with Cohen's Kappa (average Kappa = 0.839 ± 0.076; very high agreement).
#### Who are the annotators?
Leonardo Campillos-Llanos
Adrián Capllonch-Carrión
Ana Rosa Terroba-Reinares
Ana Valverde-Mateos
Sofía Zakhir-Puig
### Personal and Sensitive Information
No personal and sensitive information was used.
### Licensing Information
These data are aimed at research and educational purposes, and released under a Creative Commons Non-Commercial Attribution (CC-BY-NC-A) 4.0 International License.
Campillos Llanos, L., Terroba Reinares, A. R., Zakhir Puig, S., Valverde, A., & Capllonch-Carrión, A. (2022). Building a comparable corpus and a benchmark for Spanish medical text simplification. *Procesamiento del lenguaje natural*, 69, pp. 189--196.
### Contributions
Thanks to Jónathan Heras from Universidad de La Rioja (@joheras) for formatting this dataset for Hugging Face.
| [
"# Dataset Card for CLARA-MeD",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n- Dataset Creation\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- DOI: URL\n- Point of Contact: Leonardo Campillos-Llanos",
"### Dataset Summary\n\nA parallel corpus with a subset of 3800 sentence pairs of professional and laymen variants (149 862 tokens) as a benchmark for medical text simplification. This dataset was collected in the CLARA-MeD project, with the goal of simplifying medical texts in the Spanish language and reducing the language barrier to patient's informed decision making.",
"### Supported Tasks and Leaderboards\n\nMedical text simplification",
"### Languages\n\nSpanish",
"## Dataset Structure",
"### Data Instances\n\nFor each instance, there is a string for the source text (professional version), and a string for the target text (simplified version).",
"### Data Fields\n\n- 'SOURCE': a string containing the professional version. \n- 'TARGET': a string containing the simplified version.",
"## Dataset Creation",
"### Source Data",
"#### Who are the source language producers?\n\n1. Drug leaflets and summaries of product characteristics from CIMA\n2. Cancer-related information summaries from the National Cancer Institute\n3. Clinical trials announcements from EudraCT",
"### Annotations",
"#### Annotation process\n\nSemi-automatic alignment of technical and patient versions of medical sentences. Inter-annotator agreement measured with Cohen's Kappa (average Kappa = 0.839 +- 0.076; very high agreement).",
"#### Who are the annotators?\n\nLeonardo Campillos-Llanos\nAdrián Capllonch-Carriónb\nAna Rosa Terroba-Reinares\nAna Valverde-Mateos\nSofía Zakhir-Puig",
"### Personal and Sensitive Information\n\nNo personal and sensitive information was used.",
"### Licensing Information\n\nThese data are aimed at research and educational purposes, and released under a Creative Commons Non-Commercial Attribution (CC-BY-NC-A) 4.0 International License.\n\n\n\nCampillos Llanos, L., Terroba Reinares, A. R., Zakhir Puig, S., Valverde, A., & Capllonch-Carrión, A. (2022). Building a comparable corpus and a benchmark for Spanish medical text simplification. *Procesamiento del lenguaje natural*, 69, pp. 189--196.",
"### Contributions\n\nThanks to Jónathan Heras from Universidad de La Rioja (@joheras) for formatting this dataset for Hugging Face."
] | [
"TAGS\n#license-cc-by-nc-4.0 #region-us \n",
"# Dataset Card for CLARA-MeD",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n- Dataset Creation\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- DOI: URL\n- Point of Contact: Leonardo Campillos-Llanos",
"### Dataset Summary\n\nA parallel corpus with a subset of 3800 sentence pairs of professional and laymen variants (149 862 tokens) as a benchmark for medical text simplification. This dataset was collected in the CLARA-MeD project, with the goal of simplifying medical texts in the Spanish language and reducing the language barrier to patient's informed decision making.",
"### Supported Tasks and Leaderboards\n\nMedical text simplification",
"### Languages\n\nSpanish",
"## Dataset Structure",
"### Data Instances\n\nFor each instance, there is a string for the source text (professional version), and a string for the target text (simplified version).",
"### Data Fields\n\n- 'SOURCE': a string containing the professional version. \n- 'TARGET': a string containing the simplified version.",
"## Dataset Creation",
"### Source Data",
"#### Who are the source language producers?\n\n1. Drug leaflets and summaries of product characteristics from CIMA\n2. Cancer-related information summaries from the National Cancer Institute\n3. Clinical trials announcements from EudraCT",
"### Annotations",
"#### Annotation process\n\nSemi-automatic alignment of technical and patient versions of medical sentences. Inter-annotator agreement measured with Cohen's Kappa (average Kappa = 0.839 +- 0.076; very high agreement).",
"#### Who are the annotators?\n\nLeonardo Campillos-Llanos\nAdrián Capllonch-Carriónb\nAna Rosa Terroba-Reinares\nAna Valverde-Mateos\nSofía Zakhir-Puig",
"### Personal and Sensitive Information\n\nNo personal and sensitive information was used.",
"### Licensing Information\n\nThese data are aimed at research and educational purposes, and released under a Creative Commons Non-Commercial Attribution (CC-BY-NC-A) 4.0 International License.\n\n\n\nCampillos Llanos, L., Terroba Reinares, A. R., Zakhir Puig, S., Valverde, A., & Capllonch-Carrión, A. (2022). Building a comparable corpus and a benchmark for Spanish medical text simplification. *Procesamiento del lenguaje natural*, 69, pp. 189--196.",
"### Contributions\n\nThanks to Jónathan Heras from Universidad de La Rioja (@joheras) for formatting this dataset for Hugging Face."
] |
7f368064f1df591ec2cba22cab730eb8e9a53104 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-30b
* Dataset: mathemakitten/winobias_antistereotype_dev_cot
* Config: mathemakitten--winobias_antistereotype_dev_cot
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
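Under the hood, this kind of zero-shot evaluation scores each candidate class as a continuation of the example text under the causal LM and picks the highest-scoring one. The card doesn't spell out the evaluator's exact prompt handling, so the sketch below is an assumption of that usual recipe; it swaps in `facebook/opt-125m` so it runs on a single CPU, whereas the evaluated checkpoint above is `facebook/opt-30b`.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "facebook/opt-125m"  # small stand-in for facebook/opt-30b
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval()

def pick_class(text, classes):
    """Return the class whose continuation the LM scores highest."""
    scores = []
    for c in classes:
        enc = tok(text + c, return_tensors="pt")
        with torch.no_grad():
            loss = model(**enc, labels=enc["input_ids"]).loss
        # This averages loss over prompt + continuation tokens; a stricter
        # version would mask the prompt tokens out of the labels.
        scores.append(-loss.item())
    return classes[max(range(len(classes)), key=scores.__getitem__)]

print(pick_class("The developer argued with the designer because ", ["he", "she"]))
```

The `text`/`classes`/`target` names mirror the `col_mapping` in this card's metadata; accuracy is then the fraction of examples where `pick_class` returns the class at `target`.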
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev_cot-mathemak-6b9a5d-1879664175 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-25T13:29:26+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_dev_cot"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-30b", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_dev_cot", "dataset_config": "mathemakitten--winobias_antistereotype_dev_cot", "dataset_split": "validation", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-25T14:21:54+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-30b
* Dataset: mathemakitten/winobias_antistereotype_dev_cot
* Config: mathemakitten--winobias_antistereotype_dev_cot
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-30b\n* Dataset: mathemakitten/winobias_antistereotype_dev_cot\n* Config: mathemakitten--winobias_antistereotype_dev_cot\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-30b\n* Dataset: mathemakitten/winobias_antistereotype_dev_cot\n* Config: mathemakitten--winobias_antistereotype_dev_cot\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
193f68d798850e2a593c181844a60af8b12267ed | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-13b
* Dataset: mathemakitten/winobias_antistereotype_dev_cot
* Config: mathemakitten--winobias_antistereotype_dev_cot
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev_cot-mathemak-6b9a5d-1879664174 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-25T13:29:27+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_dev_cot"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-13b", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_dev_cot", "dataset_config": "mathemakitten--winobias_antistereotype_dev_cot", "dataset_split": "validation", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-25T13:57:27+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-13b
* Dataset: mathemakitten/winobias_antistereotype_dev_cot
* Config: mathemakitten--winobias_antistereotype_dev_cot
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-13b\n* Dataset: mathemakitten/winobias_antistereotype_dev_cot\n* Config: mathemakitten--winobias_antistereotype_dev_cot\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-13b\n* Dataset: mathemakitten/winobias_antistereotype_dev_cot\n* Config: mathemakitten--winobias_antistereotype_dev_cot\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
5f080cd1756fbe0260163aefce18f65dbd0231f4 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: ArthurZ/opt-125m
* Dataset: mathemakitten/winobias_antistereotype_dev_cot
* Config: mathemakitten--winobias_antistereotype_dev_cot
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev_cot-mathemak-6b9a5d-1879664170 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-25T13:29:28+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_dev_cot"], "eval_info": {"task": "text_zero_shot_classification", "model": "ArthurZ/opt-125m", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_dev_cot", "dataset_config": "mathemakitten--winobias_antistereotype_dev_cot", "dataset_split": "validation", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-25T13:30:11+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: ArthurZ/opt-125m
* Dataset: mathemakitten/winobias_antistereotype_dev_cot
* Config: mathemakitten--winobias_antistereotype_dev_cot
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: ArthurZ/opt-125m\n* Dataset: mathemakitten/winobias_antistereotype_dev_cot\n* Config: mathemakitten--winobias_antistereotype_dev_cot\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: ArthurZ/opt-125m\n* Dataset: mathemakitten/winobias_antistereotype_dev_cot\n* Config: mathemakitten--winobias_antistereotype_dev_cot\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
45ec734c3aa4ead5700762bee975f44b17e88c23 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-66b
* Dataset: mathemakitten/winobias_antistereotype_dev_cot
* Config: mathemakitten--winobias_antistereotype_dev_cot
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev_cot-mathemak-6b9a5d-1879664176 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-25T13:29:31+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_dev_cot"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-66b", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_dev_cot", "dataset_config": "mathemakitten--winobias_antistereotype_dev_cot", "dataset_split": "validation", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-25T15:42:14+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-66b
* Dataset: mathemakitten/winobias_antistereotype_dev_cot
* Config: mathemakitten--winobias_antistereotype_dev_cot
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-66b\n* Dataset: mathemakitten/winobias_antistereotype_dev_cot\n* Config: mathemakitten--winobias_antistereotype_dev_cot\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-66b\n* Dataset: mathemakitten/winobias_antistereotype_dev_cot\n* Config: mathemakitten--winobias_antistereotype_dev_cot\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
673278884406b493c92a897afdedd8b19d7778a9 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: ArthurZ/opt-350m
* Dataset: mathemakitten/winobias_antistereotype_dev_cot
* Config: mathemakitten--winobias_antistereotype_dev_cot
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev_cot-mathemak-6b9a5d-1879664171 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-25T13:29:32+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_dev_cot"], "eval_info": {"task": "text_zero_shot_classification", "model": "ArthurZ/opt-350m", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_dev_cot", "dataset_config": "mathemakitten--winobias_antistereotype_dev_cot", "dataset_split": "validation", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-25T13:31:02+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: ArthurZ/opt-350m
* Dataset: mathemakitten/winobias_antistereotype_dev_cot
* Config: mathemakitten--winobias_antistereotype_dev_cot
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: ArthurZ/opt-350m\n* Dataset: mathemakitten/winobias_antistereotype_dev_cot\n* Config: mathemakitten--winobias_antistereotype_dev_cot\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: ArthurZ/opt-350m\n* Dataset: mathemakitten/winobias_antistereotype_dev_cot\n* Config: mathemakitten--winobias_antistereotype_dev_cot\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
1fc8d17a6617ec0ea4d098ff55b497b6a40187ec | # PKLot 50
This dataset comprises 50 fully annotated images. The original images were introduced in [*PKLot – A robust dataset for parking lot classification*](https://www.inf.ufpr.br/lesoliveira/download/ESWA2015.pdf).
## Labeling Method
Labeling was manually completed using CVAT with the assistance of Voxel51 for inspection.
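The card does not document how the images and CVAT annotations are laid out in the repo, so a safe first step is to pull the files locally and inspect them; this minimal sketch assumes only the repo id from this card:

```python
import os
from huggingface_hub import snapshot_download

# Download the 50 annotated images plus whatever annotation files ship with
# them; the exact layout (image format, CVAT export) is undocumented, so we
# just list everything that arrives.
local_dir = snapshot_download(repo_id="ajankelo/pklot_50", repo_type="dataset")
for root, _, files in os.walk(local_dir):
    for fname in files:
        print(os.path.join(root, fname))
```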
## Original dataset citation info
Almeida, P., Oliveira, L. S., Silva Jr, E., Britto Jr, A., Koerich, A., PKLot – A robust dataset for parking lot classification, Expert Systems with Applications, 42(11):4937-4949, 2015.
| ajankelo/pklot_50 | [
"language:en",
"license:cc-by-4.0",
"PKLot",
"object detection",
"region:us"
] | 2022-10-25T14:21:17+00:00 | {"language": "en", "license": "cc-by-4.0", "tags": ["PKLot", "object detection"]} | 2022-10-28T13:39:22+00:00 | [] | [
"en"
] | TAGS
#language-English #license-cc-by-4.0 #PKLot #object detection #region-us
| # PKLot 50
This dataset comprises 50 fully annotated images. The original images were introduced in *PKLot – A robust dataset for parking lot classification*.
## Labeling Method
Labeling was manually completed using CVAT with the assistance of Voxel51 for inspection.
## Original dataset citation info
Almeida, P., Oliveira, L. S., Silva Jr, E., Britto Jr, A., Koerich, A., PKLot – A robust dataset for parking lot classification, Expert Systems with Applications, 42(11):4937-4949, 2015.
| [
"# PKLot 50\nThis dataset comprises 50 fully annotated images. The original images are were introduced in *PKLot – A robust dataset for parking lot classification*.",
"## Labeling Method\nLabeling was manually completed using CVAT with the assistance of Voxel51 for inspection.",
"## Original dataset citation info\nAlmeida, P., Oliveira, L. S., Silva Jr, E., Britto Jr, A., Koerich, A., PKLot – A robust dataset for parking lot classification, Expert Systems with Applications, 42(11):4937-4949, 2015."
] | [
"TAGS\n#language-English #license-cc-by-4.0 #PKLot #object detection #region-us \n",
"# PKLot 50\nThis dataset comprises 50 fully annotated images. The original images are were introduced in *PKLot – A robust dataset for parking lot classification*.",
"## Labeling Method\nLabeling was manually completed using CVAT with the assistance of Voxel51 for inspection.",
"## Original dataset citation info\nAlmeida, P., Oliveira, L. S., Silva Jr, E., Britto Jr, A., Koerich, A., PKLot – A robust dataset for parking lot classification, Expert Systems with Applications, 42(11):4937-4949, 2015."
] |
4cb09996580bc8efbc747911f8eb5e96340ef5a4 |
# Dataset Card for Wine Recognition dataset
## Dataset Description
- **Homepage:** https://archive.ics.uci.edu/ml/datasets/wine
- **Papers:**
1. S. Aeberhard, D. Coomans and O. de Vel,
Comparison of Classifiers in High Dimensional Settings,
Tech. Rep. no. 92-02, (1992), Dept. of Computer Science and Dept. of
Mathematics and Statistics, James Cook University of North Queensland.
2. S. Aeberhard, D. Coomans and O. de Vel,
"THE CLASSIFICATION PERFORMANCE OF RDA"
Tech. Rep. no. 92-01, (1992), Dept. of Computer Science and Dept. of
Mathematics and Statistics, James Cook University of North Queensland.
- **Point of Contact:** stefan'@'coral.cs.jcu.edu.au
### Dataset Summary
These data are the results of a chemical analysis of wines grown in the same region in Italy but derived from three different cultivars. The analysis determined the quantities of 13 constituents found in each of the three types of wines. In a classification context, this is a well posed problem with "well behaved" class structures. A good data set for first testing of a new classifier, but not very challenging.
### Supported Tasks and Leaderboards
Classification (cultivar) from continuous variables (all other variables)
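The same 178-wine table ships with scikit-learn, so a baseline for this task can be sketched without touching the Hub; note that `load_wine` below is scikit-learn's bundled copy of the UCI data, not this repository:

```python
from sklearn.datasets import load_wine
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)  # 178 wines, 13 continuous features, 3 cultivars

# Scale first (Proline and Magnesium dwarf the other features), then fit a
# linear classifier; as the summary says, the problem is not very challenging,
# and this baseline lands near-perfect under 5-fold cross-validation.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
print(cross_val_score(clf, X, y, cv=5).mean())
```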
## Dataset Structure
### Data Instances
178 wines
### Data Fields
1. Wine category (cultivar)
2. Alcohol
3. Malic acid
4. Ash
5. Alcalinity of ash
6. Magnesium
7. Total phenols
8. Flavanoids
9. Nonflavanoid phenols
10. Proanthocyanins
11. Color intensity
12. Hue
13. OD280/OD315 of diluted wines
14. Proline
### Data Splits
None
## Dataset Creation
### Source Data
https://archive.ics.uci.edu/ml/datasets/wine
#### Initial Data Collection and Normalization
Original Owners:
Forina, M. et al, PARVUS -
An Extendible Package for Data Exploration, Classification and Correlation.
Institute of Pharmaceutical and Food Analysis and Technologies, Via Brigata Salerno,
16147 Genoa, Italy.
## Additional Information
### Dataset Curators
Stefan Aeberhard
### Licensing Information
No information found on the original website | katossky/wine-recognition | [
"task_categories:tabular-classification",
"task_ids:tabular-multi-class-classification",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"size_categories:n<1K",
"source_datasets:original",
"license:unknown",
"region:us"
] | 2022-10-25T15:15:53+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["expert-generated"], "language": [], "license": ["unknown"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["tabular-classification"], "task_ids": ["tabular-multi-class-classification"], "pretty_name": "Wine Recognition Dataset"} | 2022-10-29T09:22:58+00:00 | [] | [] | TAGS
#task_categories-tabular-classification #task_ids-tabular-multi-class-classification #annotations_creators-no-annotation #language_creators-expert-generated #size_categories-n<1K #source_datasets-original #license-unknown #region-us
|
# Dataset Card for Wine Recognition dataset
## Dataset Description
- Homepage: URL
- Papers:
1. S. Aeberhard, D. Coomans and O. de Vel,
Comparison of Classifiers in High Dimensional Settings,
Tech. Rep. no. 92-02, (1992), Dept. of Computer Science and Dept. of
Mathematics and Statistics, James Cook University of North Queensland.
2. S. Aeberhard, D. Coomans and O. de Vel,
"THE CLASSIFICATION PERFORMANCE OF RDA"
Tech. Rep. no. 92-01, (1992), Dept. of Computer Science and Dept. of
Mathematics and Statistics, James Cook University of North Queensland.
- Point of Contact: stefan'@'URL
### Dataset Summary
These data are the results of a chemical analysis of wines grown in the same region in Italy but derived from three different cultivars. The analysis determined the quantities of 13 constituents found in each of the three types of wines. In a classification context, this is a well posed problem with "well behaved" class structures. A good data set for first testing of a new classifier, but not very challenging.
### Supported Tasks and Leaderboards
Classification (cultivar) from continuous variables (all other variables)
## Dataset Structure
### Data Instances
178 wines
### Data Fields
1. Wine category (cultivar)
2. Alcohol
3. Malic acid
4. Ash
5. Alcalinity of ash
6. Magnesium
7. Total phenols
8. Flavanoids
9. Nonflavanoid phenols
10. Proanthocyanins
11. Color intensity
12. Hue
13. OD280/OD315 of diluted wines
14. Proline
### Data Splits
None
## Dataset Creation
### Source Data
URL
#### Initial Data Collection and Normalization
Original Owners:
Forina, M. et al, PARVUS -
An Extendible Package for Data Exploration, Classification and Correlation.
Institute of Pharmaceutical and Food Analysis and Technologies, Via Brigata Salerno,
16147 Genoa, Italy.
## Additional Information
### Dataset Curators
Stefan Aeberhard
### Licensing Information
No information found on the original website | [
"# Dataset Card for Wine Recognition dataset",
"## Dataset Description\n\n- Homepage: URL\n- Papers:\n 1. S. Aeberhard, D. Coomans and O. de Vel,\n Comparison of Classifiers in High Dimensional Settings,\n Tech. Rep. no. 92-02, (1992), Dept. of Computer Science and Dept. of\n Mathematics and Statistics, James Cook University of North Queensland.\n 2. S. Aeberhard, D. Coomans and O. de Vel,\n \"THE CLASSIFICATION PERFORMANCE OF RDA\"\n Tech. Rep. no. 92-01, (1992), Dept. of Computer Science and Dept. of\n Mathematics and Statistics, James Cook University of North Queensland.\n- Point of Contact: stefan'@'URL",
"### Dataset Summary\n\nThese data are the results of a chemical analysis of wines grown in the same region in Italy but derived from three different cultivars. The analysis determined the quantities of 13 constituents found in each of the three types of wines. In a classification context, this is a well posed problem with \"well behaved\" class structures. A good data set for first testing of a new classifier, but not very challenging.",
"### Supported Tasks and Leaderboards\n\nClassification (cultivar) from continuous variables (all other variables)",
"## Dataset Structure",
"### Data Instances\n\n178 wines",
"### Data Fields\n\n1. Wine category (cultivar)\n2. Alcohol\n3. Malic acid\n4. Ash\n5. Alcalinity of ash\n6. Magnesium\n7. Total phenols\n8. Flavanoids\n9. Nonflavanoid phenols\n10. Proanthocyanins\n11. Color intensity\n12. Hue\n13. OD280/OD315 of diluted wines\n14. Proline",
"### Data Splits\n\nNone",
"## Dataset Creation",
"### Source Data\n\nURL",
"#### Initial Data Collection and Normalization\n\nOriginal Owners:\n\nForina, M. et al, PARVUS -\nAn Extendible Package for Data Exploration, Classification and Correlation.\nInstitute of Pharmaceutical and Food Analysis and Technologies, Via Brigata Salerno,\n16147 Genoa, Italy.",
"## Additional Information",
"### Dataset Curators\n\nStefan Aeberhard",
"### Licensing Information\n\nNo information found on the original website"
] | [
"TAGS\n#task_categories-tabular-classification #task_ids-tabular-multi-class-classification #annotations_creators-no-annotation #language_creators-expert-generated #size_categories-n<1K #source_datasets-original #license-unknown #region-us \n",
"# Dataset Card for Wine Recognition dataset",
"## Dataset Description\n\n- Homepage: URL\n- Papers:\n 1. S. Aeberhard, D. Coomans and O. de Vel,\n Comparison of Classifiers in High Dimensional Settings,\n Tech. Rep. no. 92-02, (1992), Dept. of Computer Science and Dept. of\n Mathematics and Statistics, James Cook University of North Queensland.\n 2. S. Aeberhard, D. Coomans and O. de Vel,\n \"THE CLASSIFICATION PERFORMANCE OF RDA\"\n Tech. Rep. no. 92-01, (1992), Dept. of Computer Science and Dept. of\n Mathematics and Statistics, James Cook University of North Queensland.\n- Point of Contact: stefan'@'URL",
"### Dataset Summary\n\nThese data are the results of a chemical analysis of wines grown in the same region in Italy but derived from three different cultivars. The analysis determined the quantities of 13 constituents found in each of the three types of wines. In a classification context, this is a well posed problem with \"well behaved\" class structures. A good data set for first testing of a new classifier, but not very challenging.",
"### Supported Tasks and Leaderboards\n\nClassification (cultivar) from continuous variables (all other variables)",
"## Dataset Structure",
"### Data Instances\n\n178 wines",
"### Data Fields\n\n1. Wine category (cultivar)\n2. Alcohol\n3. Malic acid\n4. Ash\n5. Alcalinity of ash\n6. Magnesium\n7. Total phenols\n8. Flavanoids\n9. Nonflavanoid phenols\n10. Proanthocyanins\n11. Color intensity\n12. Hue\n13. OD280/OD315 of diluted wines\n14. Proline",
"### Data Splits\n\nNone",
"## Dataset Creation",
"### Source Data\n\nURL",
"#### Initial Data Collection and Normalization\n\nOriginal Owners:\n\nForina, M. et al, PARVUS -\nAn Extendible Package for Data Exploration, Classification and Correlation.\nInstitute of Pharmaceutical and Food Analysis and Technologies, Via Brigata Salerno,\n16147 Genoa, Italy.",
"## Additional Information",
"### Dataset Curators\n\nStefan Aeberhard",
"### Licensing Information\n\nNo information found on the original website"
] |
68de10d8afbe20cad6c000a2553d533209fad025 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-350m
* Dataset: mathemakitten/winobias_antistereotype_test_cot
* Config: mathemakitten--winobias_antistereotype_test_cot
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-f8e841-1882064213 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-25T16:30:28+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-350m", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-25T16:31:46+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-350m
* Dataset: mathemakitten/winobias_antistereotype_test_cot
* Config: mathemakitten--winobias_antistereotype_test_cot
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-350m\n* Dataset: mathemakitten/winobias_antistereotype_test_cot\n* Config: mathemakitten--winobias_antistereotype_test_cot\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-350m\n* Dataset: mathemakitten/winobias_antistereotype_test_cot\n* Config: mathemakitten--winobias_antistereotype_test_cot\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
7e69f670cfbb39f3508e80e451ce7b23670decad | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-66b
* Dataset: mathemakitten/winobias_antistereotype_test_cot
* Config: mathemakitten--winobias_antistereotype_test_cot
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-f8e841-1882064214 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-25T16:30:30+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-66b", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-25T18:35:25+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-66b
* Dataset: mathemakitten/winobias_antistereotype_test_cot
* Config: mathemakitten--winobias_antistereotype_test_cot
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-66b\n* Dataset: mathemakitten/winobias_antistereotype_test_cot\n* Config: mathemakitten--winobias_antistereotype_test_cot\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-66b\n* Dataset: mathemakitten/winobias_antistereotype_test_cot\n* Config: mathemakitten--winobias_antistereotype_test_cot\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
4835a4ee92aee9bac60ad7dc8154c1f53d9ab40a | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: ArthurZ/opt-125m
* Dataset: mathemakitten/winobias_antistereotype_test_cot
* Config: mathemakitten--winobias_antistereotype_test_cot
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-f8e841-1882064210 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-25T16:30:31+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot"], "eval_info": {"task": "text_zero_shot_classification", "model": "ArthurZ/opt-125m", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-25T16:31:17+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: ArthurZ/opt-125m
* Dataset: mathemakitten/winobias_antistereotype_test_cot
* Config: mathemakitten--winobias_antistereotype_test_cot
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: ArthurZ/opt-125m\n* Dataset: mathemakitten/winobias_antistereotype_test_cot\n* Config: mathemakitten--winobias_antistereotype_test_cot\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: ArthurZ/opt-125m\n* Dataset: mathemakitten/winobias_antistereotype_test_cot\n* Config: mathemakitten--winobias_antistereotype_test_cot\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
a5b40e34984ddd95bfeb302b23bcf53b95714bf7 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-30b
* Dataset: mathemakitten/winobias_antistereotype_test_cot
* Config: mathemakitten--winobias_antistereotype_test_cot
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-f8e841-1882064212 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-25T16:30:31+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-30b", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-25T17:28:08+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-30b
* Dataset: mathemakitten/winobias_antistereotype_test_cot
* Config: mathemakitten--winobias_antistereotype_test_cot
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-30b\n* Dataset: mathemakitten/winobias_antistereotype_test_cot\n* Config: mathemakitten--winobias_antistereotype_test_cot\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-30b\n* Dataset: mathemakitten/winobias_antistereotype_test_cot\n* Config: mathemakitten--winobias_antistereotype_test_cot\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
b562e2007d01f1bafc34a270b018a1269e74ed9f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-6.7b
* Dataset: mathemakitten/winobias_antistereotype_test_cot
* Config: mathemakitten--winobias_antistereotype_test_cot
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-f8e841-1882064215 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-25T16:30:33+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-6.7b", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-25T16:44:32+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-6.7b
* Dataset: mathemakitten/winobias_antistereotype_test_cot
* Config: mathemakitten--winobias_antistereotype_test_cot
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-6.7b\n* Dataset: mathemakitten/winobias_antistereotype_test_cot\n* Config: mathemakitten--winobias_antistereotype_test_cot\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-6.7b\n* Dataset: mathemakitten/winobias_antistereotype_test_cot\n* Config: mathemakitten--winobias_antistereotype_test_cot\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
399a3b63758d394fbf31111d478a13aaa3a4539d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: ArthurZ/opt-350m
* Dataset: mathemakitten/winobias_antistereotype_test_cot
* Config: mathemakitten--winobias_antistereotype_test_cot
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-f8e841-1882064209 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-25T16:30:39+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot"], "eval_info": {"task": "text_zero_shot_classification", "model": "ArthurZ/opt-350m", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-25T16:32:11+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: ArthurZ/opt-350m
* Dataset: mathemakitten/winobias_antistereotype_test_cot
* Config: mathemakitten--winobias_antistereotype_test_cot
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: ArthurZ/opt-350m\n* Dataset: mathemakitten/winobias_antistereotype_test_cot\n* Config: mathemakitten--winobias_antistereotype_test_cot\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: ArthurZ/opt-350m\n* Dataset: mathemakitten/winobias_antistereotype_test_cot\n* Config: mathemakitten--winobias_antistereotype_test_cot\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
9920e8130b63513c598a6cdde10df3e2728bccef | # Dataset Card for "financial-news-articles"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
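Pending a fuller card, here is a minimal loading sketch; the column names come from this record's `dataset_info` rather than from documented usage:

```python
from datasets import load_dataset

ds = load_dataset("ashraq/financial-news-articles", split="train")
print(ds)                # ~306k rows with title / text / url columns
print(ds[0]["title"])    # peek at one article
```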
The data was obtained from [here](https://www.kaggle.com/datasets/jeet2016/us-financial-news-articles) | ashraq/financial-news-articles | [
"region:us"
] | 2022-10-25T16:59:05+00:00 | {"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 848347009, "num_examples": 306242}], "download_size": 492243206, "dataset_size": 848347009}} | 2022-10-25T17:01:06+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "financial-news-articles"
More Information needed
The data was obtained from here | [
"# Dataset Card for \"financial-news-articles\"\n\nMore Information needed\n\nThe data was obtained from here"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"financial-news-articles\"\n\nMore Information needed\n\nThe data was obtained from here"
] |
d57e1e36be67089516b1a173bdfe1ddc74d00d12 | # Dataset Card for "code_search_data-pep8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tomekkorbak/code_search_data-pep8 | [
"region:us"
] | 2022-10-25T18:35:59+00:00 | {"dataset_info": {"features": [{"name": "repository_name", "dtype": "string"}, {"name": "func_path_in_repository", "dtype": "string"}, {"name": "func_name", "dtype": "string"}, {"name": "whole_func_string", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "func_code_string", "dtype": "string"}, {"name": "func_code_tokens", "sequence": "string"}, {"name": "func_documentation_string", "dtype": "string"}, {"name": "func_documentation_tokens", "sequence": "string"}, {"name": "split_name", "dtype": "string"}, {"name": "func_code_url", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "test", "num_bytes": 1373345211.3356366, "num_examples": 362178}, {"name": "train", "num_bytes": 189595338.66436344, "num_examples": 50000}], "download_size": 695684763, "dataset_size": 1562940550.0}} | 2022-10-25T18:44:10+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "code_search_data-pep8"
More Information needed | [
"# Dataset Card for \"code_search_data-pep8\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"code_search_data-pep8\"\n\nMore Information needed"
] |
9383a22eb926bd0335a2ad67f642b75b7f2ac33d | # Dataset Card for "codeparrot-pep8-scored"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tomekkorbak/codeparrot-pep8-scored | [
"region:us"
] | 2022-10-25T19:12:34+00:00 | {"dataset_info": {"features": [{"name": "repo_name", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "copies", "dtype": "string"}, {"name": "size", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "hash", "dtype": "int64"}, {"name": "line_mean", "dtype": "float64"}, {"name": "line_max", "dtype": "int64"}, {"name": "alpha_frac", "dtype": "float64"}, {"name": "autogenerated", "dtype": "bool"}, {"name": "ratio", "dtype": "float64"}, {"name": "config_test", "dtype": "bool"}, {"name": "has_no_keywords", "dtype": "bool"}, {"name": "few_assignments", "dtype": "bool"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "test", "num_bytes": 1556261021.25, "num_examples": 150000}, {"name": "train", "num_bytes": 518753673.75, "num_examples": 50000}], "download_size": 771399764, "dataset_size": 2075014695.0}} | 2022-10-25T19:14:40+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "codeparrot-pep8-scored"
More Information needed | [
"# Dataset Card for \"codeparrot-pep8-scored\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"codeparrot-pep8-scored\"\n\nMore Information needed"
] |
e64c6762a193e9c8b2bf95454422a560b1c5ca87 | # Dataset Card for "github-issues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | lipaoMai/github-issues | [
"region:us"
] | 2022-10-25T19:17:29+00:00 | {"dataset_info": {"features": [{"name": "patient_id", "dtype": "int64"}, {"name": "drugName", "dtype": "string"}, {"name": "condition", "dtype": "string"}, {"name": "review", "dtype": "string"}, {"name": "rating", "dtype": "float64"}, {"name": "date", "dtype": "string"}, {"name": "usefulCount", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 28367208, "num_examples": 53471}, {"name": "train", "num_bytes": 85172055, "num_examples": 160398}], "download_size": 63481104, "dataset_size": 113539263}} | 2022-10-25T19:17:38+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "github-issues"
More Information needed | [
"# Dataset Card for \"github-issues\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"github-issues\"\n\nMore Information needed"
] |
0da2571fe18ccc3748f7f202ee300a5824b33e37 | # Dataset Card for "drug_one_1dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | lipaoMai/drug_one_1dataset | [
"region:us"
] | 2022-10-25T19:27:48+00:00 | {"dataset_info": {"features": [{"name": "patient_id", "dtype": "int64"}, {"name": "drugName", "dtype": "string"}, {"name": "condition", "dtype": "string"}, {"name": "review", "dtype": "string"}, {"name": "rating", "dtype": "float64"}, {"name": "date", "dtype": "string"}, {"name": "usefulCount", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 28367208, "num_examples": 53471}, {"name": "train", "num_bytes": 85172055, "num_examples": 160398}], "download_size": 63481104, "dataset_size": 113539263}} | 2022-10-25T19:27:56+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "drug_one_1dataset"
More Information needed | [
"# Dataset Card for \"drug_one_1dataset\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"drug_one_1dataset\"\n\nMore Information needed"
] |
63f32b8f7bb300c1ac35e9146b38e7e2704c714d | This is a repreprocessed version of [P3](https://huggingface.co/datasets/bigscience/P3) with any updates that have been made to the P3 datasets since the release of the original P3. It is used for the finetuning of [bloomz-p3](https://huggingface.co/bigscience/bloomz-p3) & [mt0-xxl-p3](https://huggingface.co/bigscience/mt0-xxl-p3). The script is available [here](https://github.com/bigscience-workshop/bigscience/blob/638e66e40395dbfab9fa08a662d43b317fb2eb38/data/p3/prepare_p3.py).
| Muennighoff/P3 | [
"task_categories:other",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"language:en",
"license:apache-2.0",
"region:us"
] | 2022-10-25T19:29:10+00:00 | {"annotations_creators": ["crowdsourced", "expert-generated"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100M<n<1B"], "task_categories": ["other"], "pretty_name": "P3"} | 2022-11-03T15:15:39+00:00 | [] | [
"en"
] | TAGS
#task_categories-other #annotations_creators-crowdsourced #annotations_creators-expert-generated #multilinguality-monolingual #size_categories-100M<n<1B #language-English #license-apache-2.0 #region-us
This is a reprocessed version of P3 with any updates that have been made to the P3 datasets since the release of the original P3. It is used for the finetuning of bloomz-p3 & mt0-xxl-p3. The script is available here.
| [] | [
"TAGS\n#task_categories-other #annotations_creators-crowdsourced #annotations_creators-expert-generated #multilinguality-monolingual #size_categories-100M<n<1B #language-English #license-apache-2.0 #region-us \n"
] |
5ec4fd478a40966b89315c2ad181766210c6a9d7 | # Dataset Card for OLM May 2017 Common Crawl
Cleaned and deduplicated pretraining dataset, created with the OLM repo [here](https://github.com/huggingface/olm-datasets) from 16% of the May 2017 Common Crawl snapshot.
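A minimal loading sketch, plus the outlier trim that the note below recommends; it assumes `last_modified_timestamp` holds numeric timestamps and may be null for some rows, neither of which the card states explicitly:

```python
import numpy as np
from datasets import load_dataset

ds = load_dataset(
    "olm/olm-CC-MAIN-2017-22-sampling-ratio-0.16178770949", split="train"
)

# Drop the extreme last_modified_timestamp parses before doing statistics,
# keeping the central 99.8% of non-null values.
ts = np.array([t for t in ds["last_modified_timestamp"] if t is not None])
lo, hi = np.quantile(ts, [0.001, 0.999])
ds = ds.filter(
    lambda r: r["last_modified_timestamp"] is not None
    and lo <= r["last_modified_timestamp"] <= hi
)
```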
Note: `last_modified_timestamp` was parsed from whatever a website returned in its `Last-Modified` header; there are likely a small number of outliers that are incorrect, so we recommend removing the outliers before doing statistics with `last_modified_timestamp`. | olm/olm-CC-MAIN-2017-22-sampling-ratio-0.16178770949 | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"language:en",
"pretraining",
"language modelling",
"common crawl",
"web",
"region:us"
] | 2022-10-25T21:33:21+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["10M<n<100M"], "source_datasets": [], "task_categories": [], "task_ids": [], "pretty_name": "OLM May 2017 Common Crawl", "tags": ["pretraining", "language modelling", "common crawl", "web"]} | 2022-11-04T17:12:48+00:00 | [] | [
"en"
] | TAGS
#annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-10M<n<100M #language-English #pretraining #language modelling #common crawl #web #region-us
| # Dataset Card for OLM May 2017 Common Crawl
Cleaned and deduplicated pretraining dataset, created with the OLM repo here from 16% of the May 2017 Common Crawl snapshot.
Note: 'last_modified_timestamp' was parsed from whatever a website returned in it's 'Last-Modified' header; there are likely a small number of outliers that are incorrect, so we recommend removing the outliers before doing statistics with 'last_modified_timestamp'. | [
"# Dataset Card for OLM May 2017 Common Crawl\nCleaned and deduplicated pretraining dataset, created with the OLM repo here from 16% of the May 2017 Common Crawl snapshot.\n\nNote: 'last_modified_timestamp' was parsed from whatever a website returned in it's 'Last-Modified' header; there are likely a small number of outliers that are incorrect, so we recommend removing the outliers before doing statistics with 'last_modified_timestamp'."
] | [
"TAGS\n#annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-10M<n<100M #language-English #pretraining #language modelling #common crawl #web #region-us \n",
"# Dataset Card for OLM May 2017 Common Crawl\nCleaned and deduplicated pretraining dataset, created with the OLM repo here from 16% of the May 2017 Common Crawl snapshot.\n\nNote: 'last_modified_timestamp' was parsed from whatever a website returned in it's 'Last-Modified' header; there are likely a small number of outliers that are incorrect, so we recommend removing the outliers before doing statistics with 'last_modified_timestamp'."
] |
43e6c210364333a854e568c24324db3fd67875d8 |
# Magic Armor Embedding / Textual Inversion
## Usage
To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder.
To use it in a prompt: ```"art by magic_armor"```
If it is too strong, just add [ ] around it.
Trained for 10,000 steps.
A version trained for 7,500 steps is included in the files as well. If you want to use that version, remove the ```"-7500"``` suffix from the file name and replace the 10,000-step version in your folder.
Have fun :)
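Outside the webui, the same embedding can also be loaded from Python with the `diffusers` library. A minimal sketch, assuming the downloaded file is saved as `magic_armor.pt` in the working directory (the base checkpoint and prompt are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

# Any Stable Diffusion 1.x checkpoint should work as a base for a 1.x embedding.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Register the embedding under the trigger token used in prompts.
pipe.load_textual_inversion("./magic_armor.pt", token="magic_armor")

image = pipe("portrait of a knight, art by magic_armor").images[0]
image.save("magic_armor_sample.png")
```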
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/3O5YpWT.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/icDlRiA.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/AcrdSwB.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/hP923FH.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/RzSFggo.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | Nerfgun3/magic_armor | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"region:us"
] | 2022-10-25T22:18:48+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false} | 2022-10-25T22:27:11+00:00 | [] | [
"en"
] | TAGS
#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us
| Magic Armor Embedding / Textual Inversion
=========================================
Usage
-----
To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder.
To use it in a prompt:
If it is too strong, just add [ ] around it.
Trained for 10,000 steps.
A version trained for 7,500 steps is included in the files as well. If you want to use that version, remove the -7500 suffix from the file name and replace the 10,000-step version in your folder.
Have fun :)
Example Pictures
----------------
License
-------
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license here
| [] | [
"TAGS\n#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us \n"
] |
d7837f0e3a1e66eaa1884e7a29c7a40ad5c76e0a | <h4> Disclosure </h4>
<p> This is my first attempt at an embedding. While it is not perfect, I hope you are able to create some nice pieces with it. I am working on improvements for the next embedding, coming soon; if you have any suggestions or issues, please let me know. </p>
<h4> Usage </h4>
To use this embedding, download the file and put it into the "\stable-diffusion-webui\embeddings" folder.
To use it in a prompt add
<em style="font-weight:600">" art by crusader_knight "</em>
add <b>[ ]</b> around it to reduce its weight.
<h4> Included Files </h4>
<ul>
<li>15,000</li>
<li>10,000</li>
<li>6500</li>
</ul>
cheers
Wipeout
<h4> Example Pictures </h4>
<table>
<tbody><tr>
<td><img height="100%" width="100%" src="https://i.imgur.com/jx0F3zi.png"></td>
<td><img height="100%" width="100%" src="https://i.imgur.com/HZkt3Nx.png"></td>
<td><img height="100%" width="100%" src="https://i.imgur.com/MLKhJXL.png"></td>
</tr>
</tbody>
</table>
<h4> Licence </h4>
<p><span>This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:</span> </p>
<ol>
<li>You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content </li>
<li>The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license</li>
<li>You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
<a rel="noopener nofollow" href="https://huggingface.co/spaces/CompVis/stable-diffusion-license">Please read the full license here</a></li>
</ol> | zZWipeoutZz/crusader_knight | [
"license:creativeml-openrail-m",
"region:us"
] | 2022-10-25T22:55:38+00:00 | {"license": "creativeml-openrail-m"} | 2022-10-25T23:47:13+00:00 | [] | [] | TAGS
#license-creativeml-openrail-m #region-us
| #### Disclosure
This is my first attempt at an embedding. While it is not perfect, I hope you are able to create some nice pieces with it. I am working on improvements for the next embedding, coming soon; if you have any suggestions or issues, please let me know.
#### Usage
To use this embedding you have to download the file and put it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt add
*" art by crusader\_knight "*
add **[ ]** around it to reduce its weight.
#### Included Files
* 15,000
* 10,000
* 6500
cheers
Wipeout
#### Example Pictures
#### Licence
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
<a rel="noopener nofollow" href="URL read the full license here</a>
| [
"#### Disclosure\n\n\n this is my 1st attempt at a embedding, while its not perfect i hope that you are able to create some nice pieces with it, i am working on improving for the next embedding coming soon, if you have any suggestions or issues please let me know",
"#### Usage\n\n\nTo use this embedding you have to download the file and put it into the \"\\stable-diffusion-webui\\embeddings\" folder\nTo use it in a prompt add\n*\" art by crusader\\_knight \"*\n\n\nadd **[ ]** around it to reduce its weight.",
"#### Included Files\n\n\n* 15,000\n* 10,000\n* 6500\n\n\ncheers\nWipeout",
"#### Example Pictures",
"#### Licence\n\n\nThis embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: \n\n\n1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content\n2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license\n3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)\n<a rel=\"noopener nofollow\" href=\"URL read the full license here</a>"
] | [
"TAGS\n#license-creativeml-openrail-m #region-us \n",
"#### Disclosure\n\n\n this is my 1st attempt at a embedding, while its not perfect i hope that you are able to create some nice pieces with it, i am working on improving for the next embedding coming soon, if you have any suggestions or issues please let me know",
"#### Usage\n\n\nTo use this embedding you have to download the file and put it into the \"\\stable-diffusion-webui\\embeddings\" folder\nTo use it in a prompt add\n*\" art by crusader\\_knight \"*\n\n\nadd **[ ]** around it to reduce its weight.",
"#### Included Files\n\n\n* 15,000\n* 10,000\n* 6500\n\n\ncheers\nWipeout",
"#### Example Pictures",
"#### Licence\n\n\nThis embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: \n\n\n1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content\n2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license\n3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)\n<a rel=\"noopener nofollow\" href=\"URL read the full license here</a>"
] |
def71b74159a8460ce977fc2ace42e32947fb3fa | # Dataset Card for MoralExceptQA
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [MoralCoT](https://github.com/feradauto/MoralCoT)
- **Paper:** [When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment](https://arxiv.org/abs/2210.01478)
- **Point of Contact:** [Fernando Gonzalez](mailto:[email protected]), [Zhijing Jin](mailto:[email protected])
### Dataset Summary
A challenge set for moral-exception question answering, consisting of cases that involve potentially permissible moral exceptions. Our challenge set, MoralExceptQA, is drawn from a series of recent moral psychology studies designed to investigate the flexibility of human moral cognition – specifically, the ability of humans to figure out when it is permissible to break a previously established or well-known rule.
### Languages
The language in the dataset is English.
## Dataset Structure
### Data Instances
Each instance is a rule-breaking scenario accompanied by an average human response.
### Data Fields
- `study`: The moral psychology study. Studies were designed to investigate the ability of humans
to figure out when it is permissible to break a previously established or well-known rule.
- `context`: The context of the scenario. Different contexts within the same study are potentially governed by the same rule.
- `condition`: Condition in the scenario.
- `scenario`: Text description of the scenario.
- `human.response`: Average human response (scale 0 to 1), equivalent to the percentage of people who considered breaking the rule permissible (see the loading sketch below).
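A minimal loading sketch (the split name `train` is an assumption; the card only states that there is a single split):

```python
from datasets import load_dataset

ds = load_dataset("feradauto/MoralExceptQA", split="train")

# Count scenarios where, on average, people judged breaking the rule permissible.
permissible = sum(ex["human.response"] > 0.5 for ex in ds)
print(f"{permissible}/{len(ds)} scenarios judged permissible on average")
```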
### Data Splits
MoralExceptQA contains one split.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
Information about the data collection and annotators can be found in the appendix of [our paper](https://arxiv.org/abs/2210.01478).
### Personal and Sensitive Information
The MoralExceptQA dataset does not have privacy concerns.
## Considerations for Using the Data
### Social Impact of Dataset
The intended use of this work is to contribute to AI safety research. We do not intend this work to be developed as a tool to automate moral decision-making on behalf of humans, but instead as a way of mitigating risks caused by LLMs’ misunderstanding of human values. The MoralExceptQA dataset does not have privacy concerns or offensive content.
### Discussion of Biases
Our subjects are U.S. residents, and therefore our conclusions are limited to this population.
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The MoralExceptQA dataset is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/).
### Citation Information
```
@misc{https://doi.org/10.48550/arxiv.2210.01478,
doi = {10.48550/ARXIV.2210.01478},
url = {https://arxiv.org/abs/2210.01478},
author = {Jin, Zhijing and Levine, Sydney and Gonzalez, Fernando and Kamal, Ojasv and Sap, Maarten and Sachan, Mrinmaya and Mihalcea, Rada and Tenenbaum, Josh and Schölkopf, Bernhard},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), Computers and Society (cs.CY), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution Share Alike 4.0 International}
}
``` | feradauto/MoralExceptQA | [
"task_categories:text-classification",
"arxiv:2210.01478",
"region:us"
] | 2022-10-25T23:26:07+00:00 | {"task_categories": ["text-classification"], "pretty_name": "MoralExceptQA"} | 2022-10-27T14:42:04+00:00 | [
"2210.01478"
] | [] | TAGS
#task_categories-text-classification #arxiv-2210.01478 #region-us
| # Dataset Card for MoralExceptQA
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Repository: MoralCoT
- Paper: When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment
- Point of Contact: Fernando Gonzalez , Zhijing Jin
### Dataset Summary
A challenge set for moral-exception question answering, consisting of cases that involve potentially permissible moral exceptions. Our challenge set, MoralExceptQA, is drawn from a series of recent moral psychology studies designed to investigate the flexibility of human moral cognition – specifically, the ability of humans to figure out when it is permissible to break a previously established or well-known rule.
### Languages
The language in the dataset is English.
## Dataset Structure
### Data Instances
Each instance is a rule-breaking scenario accompanied by an average human response.
### Data Fields
- 'study': The moral psychology study. Studies were designed to investigate the ability of humans
to figure out when it is permissible to break a previously established or well-known rule.
- 'context': The context of the scenario. Different contexts within the same study are potentially governed by the same rule.
- 'condition': Condition in the scenario.
- 'scenario': Text description of the scenario.
- 'human.response': Average human response (scale 0 to 1), equivalent to the percentage of people who considered breaking the rule permissible.
### Data Splits
MoralExceptQA contains one split.
## Dataset Creation
### Curation Rationale
### Source Data
Information about the data collection and annotators can be found in the appendix of our paper.
### Personal and Sensitive Information
The MoralExceptQA dataset does not have privacy concerns.
## Considerations for Using the Data
### Social Impact of Dataset
The intended use of this work is to contribute to AI safety research. We do not intend this work to be developed as a tool to automate moral decision-making on behalf of humans, but instead as a way of mitigating risks caused by LLMs’ misunderstanding of human values. The MoralExceptQA dataset does not have privacy concerns or offensive content.
### Discussion of Biases
Our subjects are U.S. residents, and therefore our conclusions are limited to this population.
## Additional Information
### Dataset Curators
### Licensing Information
The MoralExceptQA dataset is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
| [
"# Dataset Card for MoralExceptQA",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Repository: MoralCoT\n- Paper: When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment\n- Point of Contact: Fernando Gonzalez , Zhijing Jin",
"### Dataset Summary\n\nChallenge set consisting of moral exception question answering of cases that involve potentially permissible moral exceptions. Our challenge set, MoralExceptQA, is drawn from a series of recent moral psychology studies designed to investigate the flexibility of human moral cognition – specifically, the ability of humans to figure out when it is permissible to break a previously established or well-known rule.",
"### Languages\n\nThe language in the dataset is English.",
"## Dataset Structure",
"### Data Instances\n\nEach instance is a rule-breaking scenario acompanied by an average human response.",
"### Data Fields\n\n- 'study': The moral psychology study. Studies were designed to investigate the ability of humans\nto figure out when it is permissible to break a previously established or well-known rule.\n- 'context': The context of the scenario. Different context within the same study are potentially governed by the same rule.\n- 'condition': Condition in the scenario.\n- 'scenario': Text description of the scenario.\n- 'human.response': Average human response (scale 0 to 1) equivalent to the % of people that considered that breaking the rule is permissible.",
"### Data Splits\n\nMoralExceptQA contains one split.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data\n\nInformation about the data collection and annotators can be found in the appendix of our paper.",
"### Personal and Sensitive Information\n\n The MoralExceptQA dataset does not have privacy concerns.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThe intended use of this work is to contribute to AI safety research. We do not intend this work to be developed as a tool to automate moral decision-making on behalf of humans, but instead as a way of mitigating risks caused by LLMs’ misunderstanding of human values. The MoralExceptQA dataset does not have privacy concerns or offensive content.",
"### Discussion of Biases\n\nOur subjects are U.S. residents, and therefore our conclusions are limited to this population.",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nThe MoralExceptQA dataset is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License."
] | [
"TAGS\n#task_categories-text-classification #arxiv-2210.01478 #region-us \n",
"# Dataset Card for MoralExceptQA",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Repository: MoralCoT\n- Paper: When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment\n- Point of Contact: Fernando Gonzalez , Zhijing Jin",
"### Dataset Summary\n\nChallenge set consisting of moral exception question answering of cases that involve potentially permissible moral exceptions. Our challenge set, MoralExceptQA, is drawn from a series of recent moral psychology studies designed to investigate the flexibility of human moral cognition – specifically, the ability of humans to figure out when it is permissible to break a previously established or well-known rule.",
"### Languages\n\nThe language in the dataset is English.",
"## Dataset Structure",
"### Data Instances\n\nEach instance is a rule-breaking scenario acompanied by an average human response.",
"### Data Fields\n\n- 'study': The moral psychology study. Studies were designed to investigate the ability of humans\nto figure out when it is permissible to break a previously established or well-known rule.\n- 'context': The context of the scenario. Different context within the same study are potentially governed by the same rule.\n- 'condition': Condition in the scenario.\n- 'scenario': Text description of the scenario.\n- 'human.response': Average human response (scale 0 to 1) equivalent to the % of people that considered that breaking the rule is permissible.",
"### Data Splits\n\nMoralExceptQA contains one split.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data\n\nInformation about the data collection and annotators can be found in the appendix of our paper.",
"### Personal and Sensitive Information\n\n The MoralExceptQA dataset does not have privacy concerns.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThe intended use of this work is to contribute to AI safety research. We do not intend this work to be developed as a tool to automate moral decision-making on behalf of humans, but instead as a way of mitigating risks caused by LLMs’ misunderstanding of human values. The MoralExceptQA dataset does not have privacy concerns or offensive content.",
"### Discussion of Biases\n\nOur subjects are U.S. residents, and therefore our conclusions are limited to this population.",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nThe MoralExceptQA dataset is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License."
] |
d017d05d7a9a805bb6cdb2a58abcf1561437011c | # Dataset Card for "Romance-cleaned-1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | MarkGG/Romance-cleaned-1 | [
"region:us"
] | 2022-10-26T02:33:21+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5388007.848468044, "num_examples": 6491}, {"name": "validation", "num_bytes": 599313.1515319562, "num_examples": 722}], "download_size": 3844960, "dataset_size": 5987321.0}} | 2022-10-26T02:33:28+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "Romance-cleaned-1"
More Information needed | [
"# Dataset Card for \"Romance-cleaned-1\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"Romance-cleaned-1\"\n\nMore Information needed"
] |
2f6f064d3cb82533354f710c230caf18bb7c521c | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-1.3b
* Dataset: mathemakitten/winobias_antistereotype_test_cot
* Config: mathemakitten--winobias_antistereotype_test_cot
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-acb860-1886064279 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-26T03:12:25+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-1.3b", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-26T03:15:41+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-1.3b
* Dataset: mathemakitten/winobias_antistereotype_test_cot
* Config: mathemakitten--winobias_antistereotype_test_cot
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-1.3b\n* Dataset: mathemakitten/winobias_antistereotype_test_cot\n* Config: mathemakitten--winobias_antistereotype_test_cot\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-1.3b\n* Dataset: mathemakitten/winobias_antistereotype_test_cot\n* Config: mathemakitten--winobias_antistereotype_test_cot\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
465bad23e3af0249144d4497248a2812d90ccc7d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-2.7b
* Dataset: mathemakitten/winobias_antistereotype_test_cot
* Config: mathemakitten--winobias_antistereotype_test_cot
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-acb860-1886064280 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-26T03:12:25+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-2.7b", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-26T03:17:02+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-2.7b
* Dataset: mathemakitten/winobias_antistereotype_test_cot
* Config: mathemakitten--winobias_antistereotype_test_cot
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-2.7b\n* Dataset: mathemakitten/winobias_antistereotype_test_cot\n* Config: mathemakitten--winobias_antistereotype_test_cot\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-2.7b\n* Dataset: mathemakitten/winobias_antistereotype_test_cot\n* Config: mathemakitten--winobias_antistereotype_test_cot\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
692c8e1dcabbe24e337357e5624f1ccb2bae92cc | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-13b
* Dataset: mathemakitten/winobias_antistereotype_test_cot
* Config: mathemakitten--winobias_antistereotype_test_cot
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot-mathema-acb860-1886064281 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-26T03:12:26+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-13b", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-10-26T03:38:02+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-13b
* Dataset: mathemakitten/winobias_antistereotype_test_cot
* Config: mathemakitten--winobias_antistereotype_test_cot
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-13b\n* Dataset: mathemakitten/winobias_antistereotype_test_cot\n* Config: mathemakitten--winobias_antistereotype_test_cot\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-13b\n* Dataset: mathemakitten/winobias_antistereotype_test_cot\n* Config: mathemakitten--winobias_antistereotype_test_cot\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
25c4f65bb2c90a1c5ea0f5990287fce9529f3ae2 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-base-finetuned-lener_br-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. | autoevaluate/autoeval-eval-lener_br-lener_br-14b0f6-1886164287 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-26T03:39:00+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "Luciano/xlm-roberta-base-finetuned-lener_br-finetuned-lener-br", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "train", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}} | 2022-10-26T03:42:02+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-base-finetuned-lener_br-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: train
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @Luciano for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Luciano/xlm-roberta-base-finetuned-lener_br-finetuned-lener-br\n* Dataset: lener_br\n* Config: lener_br\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Luciano/xlm-roberta-base-finetuned-lener_br-finetuned-lener-br\n* Dataset: lener_br\n* Config: lener_br\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] |
0eaa9942f56bc4171844477deb35cb3fa3f7585d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-large-finetuned-lener_br-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. | autoevaluate/autoeval-eval-lener_br-lener_br-14b0f6-1886164288 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-26T03:39:05+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "Luciano/xlm-roberta-large-finetuned-lener_br-finetuned-lener-br", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "train", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}} | 2022-10-26T03:43:01+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-large-finetuned-lener_br-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: train
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @Luciano for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Luciano/xlm-roberta-large-finetuned-lener_br-finetuned-lener-br\n* Dataset: lener_br\n* Config: lener_br\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Luciano/xlm-roberta-large-finetuned-lener_br-finetuned-lener-br\n* Dataset: lener_br\n* Config: lener_br\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] |
a582213b5f1d8c2c0a507ed7fea78a7863351bdc | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-base-finetuned-lener_br-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. | autoevaluate/autoeval-eval-lener_br-lener_br-d57983-1886264289 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-26T03:39:11+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "Luciano/xlm-roberta-base-finetuned-lener_br-finetuned-lener-br", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "validation", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}} | 2022-10-26T03:40:07+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-base-finetuned-lener_br-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @Luciano for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Luciano/xlm-roberta-base-finetuned-lener_br-finetuned-lener-br\n* Dataset: lener_br\n* Config: lener_br\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Luciano/xlm-roberta-base-finetuned-lener_br-finetuned-lener-br\n* Dataset: lener_br\n* Config: lener_br\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] |
5c5bc05f38b66ceb8f0ef48249ea8f70eeaf6489 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-large-finetuned-lener_br-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. | autoevaluate/autoeval-eval-lener_br-lener_br-d57983-1886264290 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-26T03:39:17+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "Luciano/xlm-roberta-large-finetuned-lener_br-finetuned-lener-br", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "validation", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}} | 2022-10-26T03:40:35+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-large-finetuned-lener_br-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @Luciano for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Luciano/xlm-roberta-large-finetuned-lener_br-finetuned-lener-br\n* Dataset: lener_br\n* Config: lener_br\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Luciano/xlm-roberta-large-finetuned-lener_br-finetuned-lener-br\n* Dataset: lener_br\n* Config: lener_br\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] |
a3b7a1c5b7d2ee5dea4f1016816d4b0a21608ab2 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-base-finetuned-lener_br-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. | autoevaluate/autoeval-eval-lener_br-lener_br-bd0c63-1886364291 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-26T03:39:24+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "Luciano/xlm-roberta-base-finetuned-lener_br-finetuned-lener-br", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}} | 2022-10-26T03:40:21+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-base-finetuned-lener_br-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @Luciano for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Luciano/xlm-roberta-base-finetuned-lener_br-finetuned-lener-br\n* Dataset: lener_br\n* Config: lener_br\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Luciano/xlm-roberta-base-finetuned-lener_br-finetuned-lener-br\n* Dataset: lener_br\n* Config: lener_br\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] |
40cc1ba923431846d9c2a83a5b70843f3fcfaf7a | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-large-finetuned-lener_br-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model. | autoevaluate/autoeval-eval-lener_br-lener_br-bd0c63-1886364292 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-10-26T03:39:29+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "Luciano/xlm-roberta-large-finetuned-lener_br-finetuned-lener-br", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}} | 2022-10-26T03:40:50+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Token Classification
* Model: Luciano/xlm-roberta-large-finetuned-lener_br-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @Luciano for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Luciano/xlm-roberta-large-finetuned-lener_br-finetuned-lener-br\n* Dataset: lener_br\n* Config: lener_br\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Luciano/xlm-roberta-large-finetuned-lener_br-finetuned-lener-br\n* Dataset: lener_br\n* Config: lener_br\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Luciano for evaluating this model."
] |
c93949f7140beef4adc404e7b54841e957f81c54 | # Dataset Card for sberdevices_golos_100h_farfield
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Golos ASR corpus](https://www.openslr.org/114)
- **Repository:** [Golos dataset](https://github.com/sberdevices/golos)
- **Paper:** [Golos: Russian Dataset for Speech Research](https://arxiv.org/pdf/2106.10161.pdf)
- **Leaderboard:** [The 🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
- **Point of Contact:** [Nikolay Karpov](mailto:[email protected])
### Dataset Summary
Sberdevices Golos is a corpus of approximately 1200 hours of 16kHz Russian speech from the crowd (reading speech) and farfield (communication with smart devices) domains, prepared by the SberDevices Team (Alexander Denisenko, Angelina Kovalenko, Fedor Minkin, and Nikolay Karpov). The data is derived from a crowd-sourcing platform and has been manually annotated.
The authors divide the dataset into train and test subsets. The training subset includes approximately 1000 hours. For experiments with a limited number of records, the authors identified training subsets of shorter length: 100 hours, 10 hours, 1 hour, and 10 minutes.
This dataset is a simpler version of the above-mentioned Golos:
- it includes the farfield domain only (without any sound from the crowd domain);
- validation split is built on the 10-hour training subset;
- training split corresponds to the 100-hour training subset without sounds from the 10-hour training subset;
- test split is a full original test split.
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active Hugging Face leaderboard which can be found at https://huggingface.co/spaces/huggingface/hf-speech-bench. The leaderboard ranks models uploaded to the Hub based on their WER.
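Since WER is the headline metric here, a minimal scoring sketch with the `evaluate` library (the prediction/reference strings are illustrative):

```python
import evaluate

wer = evaluate.load("wer")
score = wer.compute(
    predictions=["джои источники истории турции"],
    references=["джой источники истории турции"],
)
print(score)  # fraction of word-level edits needed, e.g. 0.25 here
```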
### Languages
The audio is in Russian.
## Dataset Structure
### Data Instances
A typical data point comprises the audio data, usually called `audio`, and its transcription, called `transcription`. No additional information about the speaker or the passage containing the transcription is provided.
```
{'audio': {'path': None,
'array': array([ 1.22070312e-04, 1.22070312e-04, 9.15527344e-05, ...,
        6.10351562e-05, 6.10351562e-05, 3.05175781e-05], dtype=float64),
'sampling_rate': 16000},
'transcription': 'джой источники истории турции'}
```
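A data point like the one above can be obtained with a minimal loading sketch (default configuration assumed):

```python
from datasets import load_dataset

ds = load_dataset("bond005/sberdevices_golos_100h_farfield", split="train")
sample = ds[0]
print(sample["transcription"])
print(sample["audio"]["sampling_rate"])  # 16000
```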
### Data Fields
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- transcription: the transcription of the audio file.
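Continuing the sketch above, the preferred access order and on-the-fly resampling look like this (the 8 kHz target rate is purely illustrative):

```python
from datasets import Audio

# Query the row first, then the "audio" key: only one file gets decoded.
first_audio = ds[0]["audio"]
# ds["audio"][0] would decode every audio file first; avoid it.

# Resample lazily by casting the column to a different sampling rate.
ds_8k = ds.cast_column("audio", Audio(sampling_rate=8_000))
```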
### Data Splits
This dataset is a simpler version of the original Golos:
- it includes the farfield domain only (without any sound from the crowd domain);
- validation split is built on the 10-hour training subset;
- training split corresponds to the 100-hour training subset without sounds from the 10-hour training subset;
- test split is a full original test split.
| | Train | Validation | Test |
| ----- | ------ | ---------- | ----- |
| examples | 9570 | 933 | 1916 |
| hours | 10.3h | 1.0h | 1.4h |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
All recorded audio files were manually annotated on the crowd-sourcing platform.
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of recordings of people who have donated their voices. You agree not to attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The dataset was initially created by Alexander Denisenko, Angelina Kovalenko, Fedor Minkin, and Nikolay Karpov.
### Licensing Information
[Public license with attribution and conditions reserved](https://github.com/sberdevices/golos/blob/master/license/en_us.pdf)
### Citation Information
```
@misc{karpov2021golos,
author = {Karpov, Nikolay and Denisenko, Alexander and Minkin, Fedor},
title = {Golos: Russian Dataset for Speech Research},
publisher = {arXiv},
year = {2021},
url = {https://arxiv.org/abs/2106.10161}
}
```
### Contributions
Thanks to [@bond005](https://github.com/bond005) for adding this dataset.
| bond005/sberdevices_golos_100h_farfield | [
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100k",
"source_datasets:extended",
"language:ru",
"license:other",
"arxiv:2106.10161",
"region:us"
] | 2022-10-26T04:04:50+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced", "expert-generated"], "language": ["ru"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100k"], "source_datasets": ["extended"], "task_categories": ["automatic-speech-recognition", "audio-classification"], "paperswithcode_id": "golos", "pretty_name": "Golos"} | 2022-10-27T03:23:04+00:00 | [
"2106.10161"
] | [
"ru"
] | TAGS
#task_categories-automatic-speech-recognition #task_categories-audio-classification #annotations_creators-expert-generated #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100k #source_datasets-extended #language-Russian #license-other #arxiv-2106.10161 #region-us
| # Dataset Card for sberdevices_golos_100h_farfield
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: Golos ASR corpus
- Repository: Golos dataset
- Paper: Golos: Russian Dataset for Speech Research
- Leaderboard: The Speech Bench
- Point of Contact: Nikolay Karpov
### Dataset Summary
Sberdevices Golos is a corpus of approximately 1200 hours of 16kHz Russian speech from the crowd (read speech) and farfield (communication with smart devices) domains, prepared by the SberDevices team (Alexander Denisenko, Angelina Kovalenko, Fedor Minkin, and Nikolay Karpov). The data was collected on a crowd-sourcing platform and has been manually annotated.
The authors divide the full dataset into train and test subsets. The training subset includes approximately 1000 hours. For experiments with a limited number of records, the authors also identified shorter training subsets: 100 hours, 10 hours, 1 hour, and 10 minutes.
This dataset is a simpler version of the above mentioned Golos:
- it includes the farfield domain only (without any sound from the crowd domain);
- validation split is built on the 10-hour training subset;
- training split corresponds to the 100-hour training subset without sounds from the 10-hour training subset;
- test split is a full original test split.
### Supported Tasks and Leaderboards
- 'automatic-speech-recognition': The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active Hugging Face leaderboard which can be found at URL The leaderboard ranks models uploaded to the Hub based on their WER.
### Languages
The audio is in Russian.
## Dataset Structure
### Data Instances
A typical data point comprises the audio data, usually called 'audio' and its transcription, called 'transcription'. Any additional information about the speaker and the passage which contains the transcription is not provided.
### Data Fields
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0]["audio"]' the audio file is automatically decoded and resampled to 'dataset.features["audio"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '"audio"' column, *i.e.* 'dataset[0]["audio"]' should always be preferred over 'dataset["audio"][0]'.
- transcription: the transcription of the audio file.
### Data Splits
This dataset is a simpler version of the original Golos:
- it includes the farfield domain only (without any sound from the crowd domain);
- validation split is built on the 10-hour training subset;
- training split corresponds to the 100-hour training subset without sounds from the 10-hour training subset;
- test split is a full original test split.
| | Train | Validation | Test |
| ----- | ------ | ---------- | ----- |
| examples | 9570 | 933 | 1916 |
| hours | 10.3h | 1.0h | 1.4h |
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
All recorded audio files were manually annotated on the crowd-sourcing platform.
#### Who are the annotators?
### Personal and Sensitive Information
The dataset consists of people who have donated their voice. You agree to not attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
The dataset was initially created by Alexander Denisenko, Angelina Kovalenko, Fedor Minkin, and Nikolay Karpov.
### Licensing Information
Public license with attribution and conditions reserved
### Contributions
Thanks to @bond005 for adding this dataset.
| [
"# Dataset Card for sberdevices_golos_100h_farfield",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n- Homepage: Golos ASR corpus\n- Repository: Golos dataset\n- Paper: Golos: Russian Dataset for Speech Research\n- Leaderboard: The Speech Bench\n- Point of Contact: Nikolay Karpov",
"### Dataset Summary\nSberdevices Golos is a corpus of approximately 1200 hours of 16kHz Russian speech from crowd (reading speech) and farfield (communication with smart devices) domains, prepared by SberDevices Team (Alexander Denisenko, Angelina Kovalenko, Fedor Minkin, and Nikolay Karpov). The data is derived from the crowd-sourcing platform, and has been manually annotated.\nAuthors divide all dataset into train and test subsets. The training subset includes approximately 1000 hours. For experiments with a limited number of records, authors identified training subsets of shorter length: 100 hours, 10 hours, 1 hour, 10 minutes.\nThis dataset is a simpler version of the above mentioned Golos:\n- it includes the farfield domain only (without any sound from the crowd domain);\n- validation split is built on the 10-hour training subset;\n- training split corresponds to the 100-hour training subset without sounds from the 10-hour training subset;\n- test split is a full original test split.",
"### Supported Tasks and Leaderboards\n- 'automatic-speech-recognition': The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active Hugging Face leaderboard which can be found at URL The leaderboard ranks models uploaded to the Hub based on their WER.",
"### Languages\nThe audio is in Russian.",
"## Dataset Structure",
"### Data Instances\nA typical data point comprises the audio data, usually called 'audio' and its transcription, called 'transcription'. Any additional information about the speaker and the passage which contains the transcription is not provided.",
"### Data Fields\n- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n- transcription: the transcription of the audio file.",
"### Data Splits\nThis dataset is a simpler version of the original Golos:\n- it includes the farfield domain only (without any sound from the crowd domain);\n- validation split is built on the 10-hour training subset;\n- training split corresponds to the 100-hour training subset without sounds from the 10-hour training subset;\n- test split is a full original test split.\n| | Train | Validation | Test |\n| ----- | ------ | ---------- | ----- |\n| examples | 9570 | 933 | 1916 |\n| hours | 10.3h | 1.0h | 1.4h |",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process\nAll recorded audio files were manually annotated on the crowd-sourcing platform.",
"#### Who are the annotators?",
"### Personal and Sensitive Information\nThe dataset consists of people who have donated their voice. You agree to not attempt to determine the identity of speakers in this dataset.",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\nThe dataset was initially created by Alexander Denisenko, Angelina Kovalenko, Fedor Minkin, and Nikolay Karpov.",
"### Licensing Information\nPublic license with attribution and conditions reserved",
"### Contributions\nThanks to @bond005 for adding this dataset."
] | [
"TAGS\n#task_categories-automatic-speech-recognition #task_categories-audio-classification #annotations_creators-expert-generated #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100k #source_datasets-extended #language-Russian #license-other #arxiv-2106.10161 #region-us \n",
"# Dataset Card for sberdevices_golos_100h_farfield",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n- Homepage: Golos ASR corpus\n- Repository: Golos dataset\n- Paper: Golos: Russian Dataset for Speech Research\n- Leaderboard: The Speech Bench\n- Point of Contact: Nikolay Karpov",
"### Dataset Summary\nSberdevices Golos is a corpus of approximately 1200 hours of 16kHz Russian speech from crowd (reading speech) and farfield (communication with smart devices) domains, prepared by SberDevices Team (Alexander Denisenko, Angelina Kovalenko, Fedor Minkin, and Nikolay Karpov). The data is derived from the crowd-sourcing platform, and has been manually annotated.\nAuthors divide all dataset into train and test subsets. The training subset includes approximately 1000 hours. For experiments with a limited number of records, authors identified training subsets of shorter length: 100 hours, 10 hours, 1 hour, 10 minutes.\nThis dataset is a simpler version of the above mentioned Golos:\n- it includes the farfield domain only (without any sound from the crowd domain);\n- validation split is built on the 10-hour training subset;\n- training split corresponds to the 100-hour training subset without sounds from the 10-hour training subset;\n- test split is a full original test split.",
"### Supported Tasks and Leaderboards\n- 'automatic-speech-recognition': The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active Hugging Face leaderboard which can be found at URL The leaderboard ranks models uploaded to the Hub based on their WER.",
"### Languages\nThe audio is in Russian.",
"## Dataset Structure",
"### Data Instances\nA typical data point comprises the audio data, usually called 'audio' and its transcription, called 'transcription'. Any additional information about the speaker and the passage which contains the transcription is not provided.",
"### Data Fields\n- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n- transcription: the transcription of the audio file.",
"### Data Splits\nThis dataset is a simpler version of the original Golos:\n- it includes the farfield domain only (without any sound from the crowd domain);\n- validation split is built on the 10-hour training subset;\n- training split corresponds to the 100-hour training subset without sounds from the 10-hour training subset;\n- test split is a full original test split.\n| | Train | Validation | Test |\n| ----- | ------ | ---------- | ----- |\n| examples | 9570 | 933 | 1916 |\n| hours | 10.3h | 1.0h | 1.4h |",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process\nAll recorded audio files were manually annotated on the crowd-sourcing platform.",
"#### Who are the annotators?",
"### Personal and Sensitive Information\nThe dataset consists of people who have donated their voice. You agree to not attempt to determine the identity of speakers in this dataset.",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\nThe dataset was initially created by Alexander Denisenko, Angelina Kovalenko, Fedor Minkin, and Nikolay Karpov.",
"### Licensing Information\nPublic license with attribution and conditions reserved",
"### Contributions\nThanks to @bond005 for adding this dataset."
] |
a8395938b476a1cf89b6db79853110ee22616fcc | ## Dataset Description
The dataset is a subset of RCV1, a corpus that has already been used in author-identification experiments. The top 50 authors (with respect to total size of articles) were selected, considering only texts labeled with at least one subtopic of the class CCAT (corporate/industrial). That way, the topic factor in distinguishing among the texts is minimized. The training corpus consists of 2,500 texts (50 per author) and the test corpus includes another 2,500 texts (50 per author), non-overlapping with the training texts.
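For illustration, here is a hedged sketch of an authorship-identification baseline on this corpus. The repository id comes from this page, but the `train`/`test` split names and the `text`/`author` column names are assumptions rather than documented fields.
```python
from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical split and column names -- adjust to the actual schema.
c50 = load_dataset("yeeb/C50")
train, test = c50["train"], c50["test"]   # 2,500 texts each, 50 per author

vec = TfidfVectorizer(max_features=20000)
clf = LogisticRegression(max_iter=1000)
clf.fit(vec.fit_transform(train["text"]), train["author"])
print(clf.score(vec.transform(test["text"]), test["author"]))
```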
- **Homepage:** https://archive.ics.uci.edu/ml/datasets/Reuter_50_50
- **Repository:** https://archive.ics.uci.edu/ml/datasets/Reuter_50_50
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** | yeeb/C50 | [
"license:openrail",
"region:us"
] | 2022-10-26T04:49:50+00:00 | {"license": "openrail"} | 2022-10-26T04:55:06+00:00 | [] | [] | TAGS
#license-openrail #region-us
| ## Dataset Description
The dataset is a subset of RCV1, a corpus that has already been used in author-identification experiments. The top 50 authors (with respect to total size of articles) were selected, considering only texts labeled with at least one subtopic of the class CCAT (corporate/industrial). That way, the topic factor in distinguishing among the texts is minimized. The training corpus consists of 2,500 texts (50 per author) and the test corpus includes another 2,500 texts (50 per author), non-overlapping with the training texts.
- Homepage: URL
- Repository: URL
- Paper:
- Leaderboard:
- Point of Contact: | [
"## Dataset Description\nThe dataset is the subset of RCV1. These corpus has already been used in author identification experiments. In the top 50 authors (with respect to total size of articles) were selected. 50 authors of texts labeled with at least one subtopic of the class CCAT(corporate/industrial) were selected.That way, it is attempted to minimize the topic factor in distinguishing among the texts. The training corpus consists of 2,500 texts (50 per author) and the test corpus includes other 2,500 texts (50 per author) non-overlapping with the training texts.\n\n- Homepage: URL\n- Repository: URL\n- Paper:\n- Leaderboard:\n- Point of Contact:"
] | [
"TAGS\n#license-openrail #region-us \n",
"## Dataset Description\nThe dataset is the subset of RCV1. These corpus has already been used in author identification experiments. In the top 50 authors (with respect to total size of articles) were selected. 50 authors of texts labeled with at least one subtopic of the class CCAT(corporate/industrial) were selected.That way, it is attempted to minimize the topic factor in distinguishing among the texts. The training corpus consists of 2,500 texts (50 per author) and the test corpus includes other 2,500 texts (50 per author) non-overlapping with the training texts.\n\n- Homepage: URL\n- Repository: URL\n- Paper:\n- Leaderboard:\n- Point of Contact:"
] |
9f2b30fed6f314b8774d02e290843ecf086b0031 | Relevant Paper - `https://github.com/Hritikbansal/entigen_emnlp`
language of prompts - English | hbXNov/entigen | [
"region:us"
] | 2022-10-26T04:55:43+00:00 | {} | 2022-10-26T06:20:22+00:00 | [] | [] | TAGS
#region-us
| Relevant Paper - 'URL
language of prompts - English | [] | [
"TAGS\n#region-us \n"
] |
f25e9b73b1ff9fa992e8b07dc68a6e5d09fa70fe | # C4 200M
# Dataset Summary
C4 200M sample dataset, adapted from https://huggingface.co/datasets/liweili/c4_200m
C4_200m is a collection of 185 million sentence pairs generated from the cleaned English dataset from C4. This dataset can be used in grammatical error correction (GEC) tasks.
The corruption edits and scripts used to synthesize this dataset are referenced from: [C4_200M Synthetic Dataset](https://github.com/google-research-datasets/C4_200M-synthetic-dataset-for-grammatical-error-correction)
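The pairs can be loaded directly with the `datasets` library. The `train`/`test` split names below are inferred from this repository's name and should be treated as an assumption.
```python
from datasets import load_dataset

gec = load_dataset("leslyarun/c4_200m_gec_train100k_test25k")
pair = gec["train"][0]
print(pair["input"])   # corrupted sentence
print(pair["output"])  # corrected reference
```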
# Description
As discussed above, the full C4_200M corpus contains 185 million sentence pairs. Each example has two attributes: `input` and `output`. Here is a sample from the dataset:
```
{
"input": "Bitcoin is for $7,094 this morning, which CoinDesk says."
"output": "Bitcoin goes for $7,094 this morning, according to CoinDesk."
}
```
| leslyarun/c4_200m_gec_train100k_test25k | [
"task_categories:text-generation",
"source_datasets:allenai/c4",
"language:en",
"grammatical-error-correction",
"region:us"
] | 2022-10-26T06:21:21+00:00 | {"language": ["en"], "source_datasets": ["allenai/c4"], "task_categories": ["text-generation"], "pretty_name": "C4 200M Grammatical Error Correction Dataset", "tags": ["grammatical-error-correction"]} | 2022-10-26T06:59:31+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-generation #source_datasets-allenai/c4 #language-English #grammatical-error-correction #region-us
| # C4 200M
# Dataset Summary
C4 200M sample dataset, adapted from URL
C4_200m is a collection of 185 million sentence pairs generated from the cleaned English dataset from C4. This dataset can be used in grammatical error correction (GEC) tasks.
The corruption edits and scripts used to synthesize this dataset are referenced from: C4_200M Synthetic Dataset
# Description
As discussed above, the full C4_200M corpus contains 185 million sentence pairs. Each example has two attributes: 'input' and 'output'. Here is a sample from the dataset:
| [
"# C4 200M",
"# Dataset Summary\n\n\nC4 200M Sample Dataset adopted from URL \n\nC4_200m is a collection of 185 million sentence pairs generated from the cleaned English dataset from C4. This dataset can be used in grammatical error correction (GEC) tasks.\n\nThe corruption edits and scripts used to synthesize this dataset is referenced from: C4_200M Synthetic Dataset",
"# Description\nAs discussed before, this dataset contains 185 million sentence pairs. Each article has these two attributes: 'input' and 'output'. Here is a sample of dataset:"
] | [
"TAGS\n#task_categories-text-generation #source_datasets-allenai/c4 #language-English #grammatical-error-correction #region-us \n",
"# C4 200M",
"# Dataset Summary\n\n\nC4 200M Sample Dataset adopted from URL \n\nC4_200m is a collection of 185 million sentence pairs generated from the cleaned English dataset from C4. This dataset can be used in grammatical error correction (GEC) tasks.\n\nThe corruption edits and scripts used to synthesize this dataset is referenced from: C4_200M Synthetic Dataset",
"# Description\nAs discussed before, this dataset contains 185 million sentence pairs. Each article has these two attributes: 'input' and 'output'. Here is a sample of dataset:"
] |
41c51d1746fa0bd24992037a8a00d68abd21aa76 | # Dataset Card for "food102"
This is based on the [food101](https://huggingface.co/datasets/food101) dataset with an extra class generated with a Stable Diffusion model.
A detailed walk-through is available on [YouTube](https://youtu.be/sIe0eo3fYQ4).
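A minimal sketch of loading the dataset and locating the added class; per the class mapping in this repository's metadata, label index 8 (`boeuf_bourguignon`) is the one class not present in the original food101.
```python
from datasets import load_dataset

food = load_dataset("juliensimon/food102", split="train")
labels = food.features["label"].names
print(len(labels))   # 102
print(labels[8])     # 'boeuf_bourguignon', the Stable Diffusion-generated class
```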
| juliensimon/food102 | [
"region:us"
] | 2022-10-26T07:44:52+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "apple_pie", "1": "baby_back_ribs", "2": "baklava", "3": "beef_carpaccio", "4": "beef_tartare", "5": "beet_salad", "6": "beignets", "7": "bibimbap", "8": "boeuf_bourguignon", "9": "bread_pudding", "10": "breakfast_burrito", "11": "bruschetta", "12": "caesar_salad", "13": "cannoli", "14": "caprese_salad", "15": "carrot_cake", "16": "ceviche", "17": "cheese_plate", "18": "cheesecake", "19": "chicken_curry", "20": "chicken_quesadilla", "21": "chicken_wings", "22": "chocolate_cake", "23": "chocolate_mousse", "24": "churros", "25": "clam_chowder", "26": "club_sandwich", "27": "crab_cakes", "28": "creme_brulee", "29": "croque_madame", "30": "cup_cakes", "31": "deviled_eggs", "32": "donuts", "33": "dumplings", "34": "edamame", "35": "eggs_benedict", "36": "escargots", "37": "falafel", "38": "filet_mignon", "39": "fish_and_chips", "40": "foie_gras", "41": "french_fries", "42": "french_onion_soup", "43": "french_toast", "44": "fried_calamari", "45": "fried_rice", "46": "frozen_yogurt", "47": "garlic_bread", "48": "gnocchi", "49": "greek_salad", "50": "grilled_cheese_sandwich", "51": "grilled_salmon", "52": "guacamole", "53": "gyoza", "54": "hamburger", "55": "hot_and_sour_soup", "56": "hot_dog", "57": "huevos_rancheros", "58": "hummus", "59": "ice_cream", "60": "lasagna", "61": "lobster_bisque", "62": "lobster_roll_sandwich", "63": "macaroni_and_cheese", "64": "macarons", "65": "miso_soup", "66": "mussels", "67": "nachos", "68": "omelette", "69": "onion_rings", "70": "oysters", "71": "pad_thai", "72": "paella", "73": "pancakes", "74": "panna_cotta", "75": "peking_duck", "76": "pho", "77": "pizza", "78": "pork_chop", "79": "poutine", "80": "prime_rib", "81": "pulled_pork_sandwich", "82": "ramen", "83": "ravioli", "84": "red_velvet_cake", "85": "risotto", "86": "samosa", "87": "sashimi", "88": "scallops", "89": "seaweed_salad", "90": "shrimp_and_grits", "91": "spaghetti_bolognese", "92": "spaghetti_carbonara", "93": "spring_rolls", "94": "steak", "95": "strawberry_shortcake", "96": "sushi", "97": "tacos", "98": "takoyaki", "99": "tiramisu", "100": "tuna_tartare", "101": "waffles"}}}}], "splits": [{"name": "test", "num_bytes": 1461368965.25, "num_examples": 25500}, {"name": "train", "num_bytes": 4285789478.25, "num_examples": 76500}], "download_size": 5534173074, "dataset_size": 5747158443.5}} | 2022-10-26T18:43:21+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "food102"
This is based on the food101 dataset with an extra class generated with a Stable Diffusion model.
A detailed walk-through is available on YouTube.
| [
"# Dataset Card for \"food102\"\n\nThis is based on the food101 dataset with an extra class generated with a Stable Diffusion model. \n\nA detailed walk-through is available on YouTube."
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"food102\"\n\nThis is based on the food101 dataset with an extra class generated with a Stable Diffusion model. \n\nA detailed walk-through is available on YouTube."
] |
d3c241cacb6532a7f6d1de771d2ac8827f6bad25 | ## Dataset Description
- **Dataset authors:** [Suno.ai](https://www.suno.ai)
- **Point of contact:** [email protected]
As part of the ESB benchmark, we provide a small, 8-hour diagnostic dataset of in-domain validation data with newly annotated transcriptions. The audio data is sampled from each of the ESB validation sets, giving a range of different domains and speaking styles. The transcriptions are annotated according to a consistent style guide with two formats: normalised and un-normalised. The dataset is structured in the same way as the ESB dataset, by grouping audio-transcription samples according to the dataset from which they were taken. We encourage participants to use this dataset when evaluating their systems to quickly assess performance on a range of different speech recognition conditions.
The diagnostic dataset can be downloaded and prepared in much the same way as the ESB datasets:
```python
from datasets import load_dataset
esb_diagnostic_ami = load_dataset("esb/diagnostic-dataset", "ami")
```
### Data Selection
#### Audio
To provide an adequate representation of all ESB datasets, we chose to use at least 1 hour of audio from the validation sets of each of the 8 constituent ESB datasets. Following the convention of LibriSpeech, we then used a public ASR model to further split each dataset into `clean`/`other` based on WER (note that for LibriSpeech we kept the existing `clean`/`other` splits). The `clean` subset represents the ‘easier’ 50% of samples, and the `other` subset the more difficult 50%.
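The exact selection script is not published here, but the median split described above can be sketched as follows. The `asr` callable stands in for the public ASR model (a hypothetical placeholder), and `jiwer` is one common WER implementation.
```python
import jiwer

def split_clean_other(samples, asr):
    """Rank samples by per-utterance WER under an ASR model, then
    split at the median: easier half -> clean, harder half -> other."""
    scored = sorted(
        samples,
        key=lambda s: jiwer.wer(s["norm_transcript"], asr(s["audio"])),
    )
    mid = len(scored) // 2
    return scored[:mid], scored[mid:]   # (clean, other)
```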
To obtain the `clean` diagnostic-subset of AMI, either "slice" the `clean`/`other` split:
```python
ami_diagnostic_clean = esb_diagnostic_ami["clean"]
```
Or download the `clean` subset standalone:
```python
ami_diagnostic_clean = load_dataset("esb/diagnostic-dataset", "ami", split="clean")
```
#### Transcriptions
Firstly, the transcriptions were generated by a human _without_ the bias of the original transcript. The transcriptions follow a strict orthographic and verbatim style guide, where every word, disfluency and partial word is transcribed. Punctuation and formatting follow standard English print orthography (e.g. ‘July 10th in 2021.’). Breaks in thought and partial words are indicated via ‘--’. In addition to the **orthographic** transcriptions, a **normalised** format was produced, with all punctuation removed and non-standard words such as dates, currencies and abbreviations verbalised in the exact way they are spoken (e.g. ‘july tenth in twenty twenty one’).
Although great care was taken in the standardisation of orthography, some ambiguity in transcription remains, especially around the use of commas and the choice of introducing sentence breaks for utterances starting with ‘And’. Each sample was then checked by a second human with access to both the original ground truth and the independently produced style-consistent transcript. The two versions were merged to produce new high-quality ground truths in both the normalised and orthographic text formats.
## Dataset Information
A data point can be accessed by indexing the dataset object loaded through `load_dataset`:
```python
print(ami_diagnostic_clean[0])
```
A typical data point comprises the path to the audio file and its transcription. Also included is information of the dataset from which the sample derives and a unique identifier name:
```python
{
'audio': {'path': None,
'array': array([ 7.01904297e-04, 7.32421875e-04, 7.32421875e-04, ...,
-2.74658203e-04, -1.83105469e-04, -3.05175781e-05]),
'sampling_rate': 16000},
'ortho_transcript': 'So, I guess we have to reflect on our experiences with remote controls to decide what, um, we would like to see in a convenient practical',
'norm_transcript': 'so i guess we have to reflect on our experiences with remote controls to decide what um we would like to see in a convenient practical',
'id': 'AMI_ES2011a_H00_FEE041_0062835_0064005',
'dataset': 'ami',
}
```
### Data Fields
- `audio`: a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.
- `ortho_transcript`: the **orthographic** transcription of the audio file.
- `norm_transcript`: the **normalised** transcription of the audio file.
- `id`: unique id of the data sample.
- `dataset`: string name of a dataset the sample belongs to.
We encourage participants to train their ASR system on the [AMI dataset](https://huggingface.co/datasets/esb/datasets#ami), the smallest of the 8 ESB datasets, and then evaluate their system on the `ortho_transcript` for **all** of the datasets in the diagnostic dataset. This gives a representation of how the system is likely to fare on other audio domains. The predictions can then be _normalised_ by removing casing and punctuation, converting numbers to spelled-out form and expanding abbreviations, and then assessed against the `norm_transcript`. This gives a representation of the effect of orthography on system performance.
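As one possible workflow, the sketch below applies a simplified normalisation pass (it lower-cases and strips punctuation, but unlike the full style guide it does not verbalise numbers or expand abbreviations) and scores against normalised references with the `evaluate` library. The toy strings stand in for real system outputs.
```python
import re
import evaluate

wer = evaluate.load("wer")

def normalise(text: str) -> str:
    # Simplified pass: lower-case, drop punctuation, collapse whitespace.
    text = re.sub(r"[^\w\s']", " ", text.lower())
    return re.sub(r"\s+", " ", text).strip()

# Toy stand-ins; in practice use your ASR outputs and the diagnostic references.
predictions = ["So, I guess we have to reflect -- on our experiences."]
references = ["so i guess we have to reflect on our experiences"]

print(wer.compute(predictions=[normalise(p) for p in predictions],
                  references=references))
```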
### Access
All eight of the datasets in ESB are accessible and licensing is freely available. Three of the ESB datasets have specific terms of usage that must be agreed to before using the data. To do so, fill in the access forms on the specific datasets' pages:
* Common Voice: https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0
* GigaSpeech: https://huggingface.co/datasets/speechcolab/gigaspeech
* SPGISpeech: https://huggingface.co/datasets/kensho/spgispeech
### Contributions
We show our greatest appreciation to Georg Kucsko, Keenan Freyberg and Michael Shulman from [Suno.ai](https://www.suno.ai) for creating and annotating the diagnostic dataset.
| esb/diagnostic-dataset | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:1M<n<10M",
"source_datasets:original",
"source_datasets:extended|librispeech_asr",
"source_datasets:extended|common_voice",
"language:en",
"license:cc-by-4.0",
"license:apache-2.0",
"license:cc0-1.0",
"license:cc-by-nc-3.0",
"license:other",
"asr",
"benchmark",
"speech",
"esc",
"region:us"
] | 2022-10-26T09:25:33+00:00 | {"annotations_creators": ["expert-generated", "crowdsourced", "machine-generated"], "language_creators": ["crowdsourced", "expert-generated"], "language": ["en"], "license": ["cc-by-4.0", "apache-2.0", "cc0-1.0", "cc-by-nc-3.0", "other"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M", "1M<n<10M"], "source_datasets": ["original", "extended|librispeech_asr", "extended|common_voice"], "task_categories": ["automatic-speech-recognition"], "task_ids": [], "pretty_name": "ESB Diagnostic Dataset", "tags": ["asr", "benchmark", "speech", "esc"], "extra_gated_prompt": "Three of the ESB datasets have specific terms of usage that must be agreed to before using the data. \nTo do so, fill in the access forms on the specific datasets' pages:\n * Common Voice: https://huggingface.co/datasets/mozilla-foundation/common_voice_9_0\n * GigaSpeech: https://huggingface.co/datasets/speechcolab/gigaspeech\n * SPGISpeech: https://huggingface.co/datasets/kensho/spgispeech", "extra_gated_fields": {"I hereby confirm that I have registered on the original Common Voice page and agree to not attempt to determine the identity of speakers in the Common Voice dataset": "checkbox", "I hereby confirm that I have accepted the terms of usages on GigaSpeech page": "checkbox", "I hereby confirm that I have accepted the terms of usages on SPGISpeech page": "checkbox"}} | 2022-10-26T15:42:41+00:00 | [] | [
"en"
] | TAGS
#task_categories-automatic-speech-recognition #annotations_creators-expert-generated #annotations_creators-crowdsourced #annotations_creators-machine-generated #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #size_categories-1M<n<10M #source_datasets-original #source_datasets-extended|librispeech_asr #source_datasets-extended|common_voice #language-English #license-cc-by-4.0 #license-apache-2.0 #license-cc0-1.0 #license-cc-by-nc-3.0 #license-other #asr #benchmark #speech #esc #region-us
| ## Dataset Description
- Dataset authors: URL
- Point of contact: sanchit@URL
As part of the ESB benchmark, we provide a small, 8-hour diagnostic dataset of in-domain validation data with newly annotated transcriptions. The audio data is sampled from each of the ESB validation sets, giving a range of different domains and speaking styles. The transcriptions are annotated according to a consistent style guide with two formats: normalised and un-normalised. The dataset is structured in the same way as the ESB dataset, by grouping audio-transcription samples according to the dataset from which they were taken. We encourage participants to use this dataset when evaluating their systems to quickly assess performance on a range of different speech recognition conditions.
The diagnostic dataset can be downloaded and prepared in much the same way as the ESB datasets:
### Data Selection
#### Audio
To provide an adequate representation of all ESB datasets, we chose to use at least 1 hour of audio from the validation sets of each of the 8 constituent ESB datasets. Following the convention of LibriSpeech, we then used a public ASR model to further split each dataset into 'clean'/'other' based on WER. (Note that for LibriSpeech we kept the existing 'clean'/'other' splits.). The 'clean' subset represents the 'easier' 50% of samples, and the 'other' subset the more difficult 50%.
To obtain the 'clean' diagnostic-subset of AMI, either "slice" the 'clean'/'other' split:
Or download the 'clean' subset standalone:
#### Transcriptions
Firstly, the transcriptions were generated by a human _without_ the bias of the original transcript. The transcriptions follow a strict orthographic and verbatim style guide, where every word, disfluency and partial word is transcribed. Punctuation and formatting follows standard English print orthography (eg. ‘July 10th in 2021.’). Breaks in thought and partial words are indicated via ‘--’. In addition to the orthographic transcriptions, a normalised format was produced, with all punctuation removed and non-standard-words such as dates, currencies and abbreviations verbalised in the exact way they are spoken (eg. ’july tenth in twenty twenty one’).
Although great care was taken in standardisation of orthography, a remaining amount of ambiguity in transcription exists, especially around the use of commas and the choice of introducing sentence breaks for utterances starting with ‘And’. Each sample was then checked by a second human with access to both the original ground truth as well as the independently produced style-consistent transcript. Both versions were merged to produce new high quality ground truths in both the normalised and orthographic text format.
## Dataset Information
A data point can be accessed by indexing the dataset object loaded through 'load_dataset':
A typical data point comprises the path to the audio file and its transcription. Also included is information of the dataset from which the sample derives and a unique identifier name:
### Data Fields
- 'audio': a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.
- 'ortho_transcript': the orthographic transcription of the audio file.
- 'norm_transcript': the normalised transcription of the audio file.
- 'id': unique id of the data sample.
- 'dataset': string name of a dataset the sample belongs to.
We encourage participants to train their ASR system on the AMI dataset, the smallest of the 8 ESB datasets, and then evaluate their system on the 'ortho_transcript' for all of the datasets in the diagnostic dataset. This gives a representation of how the system is likely to fare on other audio domains. The predictions can then be _normalised_ by removing casing and punctuation, converting numbers to spelled-out form and expanding abbreviations, and then assessed against the 'norm_transcript'. This gives a representation of the effect of orthography for system performance.
### Access
All eight of the datasets in ESB are accessible and licensing is freely available. Three of the ESB datasets have specific terms of usage that must be agreed to before using the data. To do so, fill in the access forms on the specific datasets' pages:
* Common Voice: URL
* GigaSpeech: URL
* SPGISpeech: URL
### Contributions
We show our greatest appreciation to Georg Kucsko, Keenan Freyberg and Michael Shulman from URL for creating and annotating the diagnostic dataset.
| [
"## Dataset Description\n- Dataset authors: URL\n- Point of contact: sanchit@URL\n\nAs a part of ESB benchmark, we provide a small, 8h diagnostic dataset of in-domain validation data with newly annotated transcriptions. The audio data is sampled from each of the ESB validation sets, giving a range of different domains and speaking styles. The transcriptions are annotated according to a consistent style guide with two formats: normalised and un-normalised. The dataset is structured in the same way as the ESB dataset, by grouping audio-transcription samples according to the dataset from which they were taken. We encourage participants to use this dataset when evaluating their systems to quickly assess performance on a range of different speech recognition conditions.\n\nThe diagnostic dataset can be downloaded and prepared in much the same way as the ESB datasets:",
"### Data Selection",
"#### Audio\nTo provide an adequate representation of all ESB datasets, we chose to use at least 1 hour of audio from the validation sets of each of the 8 constituent ESB datasets. Following the convention of LibriSpeech, we then used a public ASR model to further split each dataset into 'clean'/'other' based on WER. (Note that for LibriSpeech we kept the existing 'clean'/'other' splits.). The 'clean' subset represents the 'easier' 50% of samples, and the 'other' subset the more difficult 50%.\n\nTo obtain the 'clean' diagnostic-subset of AMI, either \"slice\" the 'clean'/'other' split:\n\n\n\nOr download the 'clean' subset standalone:",
"#### Transcriptions\nFirstly, the transcriptions were generated by a human _without_ the bias of the original transcript. The transcriptions follow a strict orthographic and verbatim style guide, where every word, disfluency and partial word is transcribed. Punctuation and formatting follows standard English print orthography (eg. ‘July 10th in 2021.’). Breaks in thought and partial words are indicated via ‘--’. In addition to the orthographic transcriptions, a normalised format was produced, with all punctuation removed and non-standard-words such as dates, currencies and abbreviations verbalised in the exact way they are spoken (eg. ’july tenth in twenty twenty one’). \n\nAlthough great care was taken in standardisation of orthography, a remaining amount of ambiguity in transcription exists, especially around the use of commas and the choice of introducing sentence breaks for utterances starting with ‘And’. Each sample was then checked by a second human with access to both the original ground truth as well as the independently produced style-consistent transcript. Both versions were merged to produce new high quality ground truths in both the normalised and orthographic text format.",
"## Dataset Information\n\nA data point can be accessed by indexing the dataset object loaded through 'load_dataset':\n\n\n\nA typical data point comprises the path to the audio file and its transcription. Also included is information of the dataset from which the sample derives and a unique identifier name:",
"### Data Fields\n\n- 'audio': a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.\n\n- 'ortho_transcript': the orthographic transcription of the audio file.\n\n- 'norm_transcript': the normalised transcription of the audio file.\n\n- 'id': unique id of the data sample.\n\n- 'dataset': string name of a dataset the sample belongs to.\n\nWe encourage participants to train their ASR system on the AMI dataset, the smallest of the 8 ESB datasets, and then evaluate their system on the 'ortho_transcript' for all of the datasets in the diagnostic dataset. This gives a representation of how the system is likely to fare on other audio domains. The predictions can then be _normalised_ by removing casing and punctuation, converting numbers to spelled-out form and expanding abbreviations, and then assessed against the 'norm_transcript'. This gives a representation of the effect of orthography for system performance.",
"### Access\nAll eight of the datasets in ESB are accessible and licensing is freely available. Three of the ESB datasets have specific terms of usage that must be agreed to before using the data. To do so, fill in the access forms on the specific datasets' pages:\n* Common Voice: URL\n* GigaSpeech: URL\n* SPGISpeech: URL",
"### Contributions\nWe show our greatest appreciation to Georg Kucsko, Keenan Freyberg and Michael Shulman from URL for creating and annotating the diagnostic dataset."
] | [
"TAGS\n#task_categories-automatic-speech-recognition #annotations_creators-expert-generated #annotations_creators-crowdsourced #annotations_creators-machine-generated #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #size_categories-1M<n<10M #source_datasets-original #source_datasets-extended|librispeech_asr #source_datasets-extended|common_voice #language-English #license-cc-by-4.0 #license-apache-2.0 #license-cc0-1.0 #license-cc-by-nc-3.0 #license-other #asr #benchmark #speech #esc #region-us \n",
"## Dataset Description\n- Dataset authors: URL\n- Point of contact: sanchit@URL\n\nAs a part of ESB benchmark, we provide a small, 8h diagnostic dataset of in-domain validation data with newly annotated transcriptions. The audio data is sampled from each of the ESB validation sets, giving a range of different domains and speaking styles. The transcriptions are annotated according to a consistent style guide with two formats: normalised and un-normalised. The dataset is structured in the same way as the ESB dataset, by grouping audio-transcription samples according to the dataset from which they were taken. We encourage participants to use this dataset when evaluating their systems to quickly assess performance on a range of different speech recognition conditions.\n\nThe diagnostic dataset can be downloaded and prepared in much the same way as the ESB datasets:",
"### Data Selection",
"#### Audio\nTo provide an adequate representation of all ESB datasets, we chose to use at least 1 hour of audio from the validation sets of each of the 8 constituent ESB datasets. Following the convention of LibriSpeech, we then used a public ASR model to further split each dataset into 'clean'/'other' based on WER. (Note that for LibriSpeech we kept the existing 'clean'/'other' splits.). The 'clean' subset represents the 'easier' 50% of samples, and the 'other' subset the more difficult 50%.\n\nTo obtain the 'clean' diagnostic-subset of AMI, either \"slice\" the 'clean'/'other' split:\n\n\n\nOr download the 'clean' subset standalone:",
"#### Transcriptions\nFirstly, the transcriptions were generated by a human _without_ the bias of the original transcript. The transcriptions follow a strict orthographic and verbatim style guide, where every word, disfluency and partial word is transcribed. Punctuation and formatting follows standard English print orthography (eg. ‘July 10th in 2021.’). Breaks in thought and partial words are indicated via ‘--’. In addition to the orthographic transcriptions, a normalised format was produced, with all punctuation removed and non-standard-words such as dates, currencies and abbreviations verbalised in the exact way they are spoken (eg. ’july tenth in twenty twenty one’). \n\nAlthough great care was taken in standardisation of orthography, a remaining amount of ambiguity in transcription exists, especially around the use of commas and the choice of introducing sentence breaks for utterances starting with ‘And’. Each sample was then checked by a second human with access to both the original ground truth as well as the independently produced style-consistent transcript. Both versions were merged to produce new high quality ground truths in both the normalised and orthographic text format.",
"## Dataset Information\n\nA data point can be accessed by indexing the dataset object loaded through 'load_dataset':\n\n\n\nA typical data point comprises the path to the audio file and its transcription. Also included is information of the dataset from which the sample derives and a unique identifier name:",
"### Data Fields\n\n- 'audio': a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate.\n\n- 'ortho_transcript': the orthographic transcription of the audio file.\n\n- 'norm_transcript': the normalised transcription of the audio file.\n\n- 'id': unique id of the data sample.\n\n- 'dataset': string name of a dataset the sample belongs to.\n\nWe encourage participants to train their ASR system on the AMI dataset, the smallest of the 8 ESB datasets, and then evaluate their system on the 'ortho_transcript' for all of the datasets in the diagnostic dataset. This gives a representation of how the system is likely to fare on other audio domains. The predictions can then be _normalised_ by removing casing and punctuation, converting numbers to spelled-out form and expanding abbreviations, and then assessed against the 'norm_transcript'. This gives a representation of the effect of orthography for system performance.",
"### Access\nAll eight of the datasets in ESB are accessible and licensing is freely available. Three of the ESB datasets have specific terms of usage that must be agreed to before using the data. To do so, fill in the access forms on the specific datasets' pages:\n* Common Voice: URL\n* GigaSpeech: URL\n* SPGISpeech: URL",
"### Contributions\nWe show our greatest appreciation to Georg Kucsko, Keenan Freyberg and Michael Shulman from URL for creating and annotating the diagnostic dataset."
] |
e634b6b810e4d30c81b4c6d8262379fe8b9f708c |
# Dataset Card for sberdevices_golos_10h_crowd
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Golos ASR corpus](https://www.openslr.org/114)
- **Repository:** [Golos dataset](https://github.com/sberdevices/golos)
- **Paper:** [Golos: Russian Dataset for Speech Research](https://arxiv.org/pdf/2106.10161.pdf)
- **Leaderboard:** [The 🤗 Speech Bench](https://huggingface.co/spaces/huggingface/hf-speech-bench)
- **Point of Contact:** [Nikolay Karpov](mailto:[email protected])
### Dataset Summary
Sberdevices Golos is a corpus of approximately 1200 hours of 16kHz Russian speech from the crowd (read speech) and farfield (communication with smart devices) domains, prepared by the SberDevices team (Alexander Denisenko, Angelina Kovalenko, Fedor Minkin, and Nikolay Karpov). The data was collected on a crowd-sourcing platform and has been manually annotated.
The authors divide the full dataset into train and test subsets. The training subset includes approximately 1000 hours. For experiments with a limited number of records, the authors also identified shorter training subsets: 100 hours, 10 hours, 1 hour, and 10 minutes.
This dataset is a simpler version of the above mentioned Golos:
- it includes the crowd domain only (without any sound from the farfield domain);
- validation split is built on the 1-hour training subset;
- training split corresponds to the 10-hour training subset without sounds from the 1-hour training subset;
- test split is a full original test split.
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active Hugging Face leaderboard which can be found at https://huggingface.co/spaces/huggingface/hf-speech-bench. The leaderboard ranks models uploaded to the Hub based on their WER.
### Languages
The audio is in Russian.
## Dataset Structure
### Data Instances
A typical data point comprises the audio data, usually called `audio`, and its transcription, called `transcription`. Any additional information about the speaker and the passage which contains the transcription is not provided.
```
{'audio': {'path': None,
'array': array([ 3.05175781e-05, 3.05175781e-05, 0.00000000e+00, ...,
-1.09863281e-03, -7.93457031e-04, -1.52587891e-04]), dtype=float64),
'sampling_rate': 16000},
'transcription': 'шестнадцатая часть сезона пять сериала лемони сникет тридцать три несчастья'}
```
### Data Fields
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- transcription: the transcription of the audio file.
### Data Splits
This dataset is a simpler version of the original Golos:
- it includes the crowd domain only (without any sound from the farfield domain);
- validation split is built on the 1-hour training subset;
- training split corresponds to the 10-hour training subset without sounds from the 1-hour training subset;
- test split is a full original test split.
| | Train | Validation | Test |
| ----- | ------ | ---------- | ----- |
| examples | 7993 | 793 | 9994 |
| hours | 8.9h | 0.9h | 11.2h |
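A minimal sketch of loading these splits with the `datasets` library, assuming only this repository's Hub id:
```python
from datasets import load_dataset

golos_10h = load_dataset("bond005/sberdevices_golos_10h_crowd")
print(golos_10h)   # DatasetDict with train / validation / test splits

# Or stream a single split to avoid downloading everything up front.
test = load_dataset("bond005/sberdevices_golos_10h_crowd", split="test", streaming=True)
print(next(iter(test))["transcription"])
```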
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
All recorded audio files were manually annotated on the crowd-sourcing platform.
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of recordings from people who have donated their voices. By using it, you agree not to attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The dataset was initially created by Alexander Denisenko, Angelina Kovalenko, Fedor Minkin, and Nikolay Karpov.
### Licensing Information
[Public license with attribution and conditions reserved](https://github.com/sberdevices/golos/blob/master/license/en_us.pdf)
### Citation Information
```
@misc{karpov2021golos,
author = {Karpov, Nikolay and Denisenko, Alexander and Minkin, Fedor},
title = {Golos: Russian Dataset for Speech Research},
publisher = {arXiv},
year = {2021},
url = {https://arxiv.org/abs/2106.10161}
}
```
### Contributions
Thanks to [@bond005](https://github.com/bond005) for adding this dataset.
| bond005/sberdevices_golos_10h_crowd | [
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100k",
"source_datasets:extended",
"language:ru",
"license:other",
"arxiv:2106.10161",
"region:us"
] | 2022-10-26T10:12:15+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced", "expert-generated"], "language": ["ru"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100k"], "source_datasets": ["extended"], "task_categories": ["automatic-speech-recognition", "audio-classification"], "paperswithcode_id": "golos", "pretty_name": "Golos"} | 2022-10-27T03:42:07+00:00 | [
"2106.10161"
] | [
"ru"
] | TAGS
#task_categories-automatic-speech-recognition #task_categories-audio-classification #annotations_creators-expert-generated #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100k #source_datasets-extended #language-Russian #license-other #arxiv-2106.10161 #region-us
| Dataset Card for sberdevices\_golos\_10h\_crowd
===============================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: Golos ASR corpus
* Repository: Golos dataset
* Paper: Golos: Russian Dataset for Speech Research
* Leaderboard: The Speech Bench
* Point of Contact: Nikolay Karpov
### Dataset Summary
Sberdevices Golos is a corpus of approximately 1200 hours of 16kHz Russian speech from the crowd (read speech) and farfield (communication with smart devices) domains, prepared by the SberDevices team (Alexander Denisenko, Angelina Kovalenko, Fedor Minkin, and Nikolay Karpov). The data was collected on a crowd-sourcing platform and has been manually annotated.
The authors divide the full dataset into train and test subsets. The training subset includes approximately 1000 hours. For experiments with a limited number of records, the authors also identified shorter training subsets: 100 hours, 10 hours, 1 hour, and 10 minutes.
This dataset is a simpler version of the above mentioned Golos:
* it includes the crowd domain only (without any sound from the farfield domain);
* validation split is built on the 1-hour training subset;
* training split corresponds to the 10-hour training subset without sounds from the 1-hour training subset;
* test split is a full original test split.
### Supported Tasks and Leaderboards
* 'automatic-speech-recognition': The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active Hugging Face leaderboard which can be found at URL The leaderboard ranks models uploaded to the Hub based on their WER.
### Languages
The audio is in Russian.
Dataset Structure
-----------------
### Data Instances
A typical data point comprises the audio data, usually called 'audio' and its transcription, called 'transcription'. Any additional information about the speaker and the passage which contains the transcription is not provided.
### Data Fields
* audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0]["audio"]' the audio file is automatically decoded and resampled to 'dataset.features["audio"].sampling\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '"audio"' column, *i.e.* 'dataset[0]["audio"]' should always be preferred over 'dataset["audio"][0]'.
* transcription: the transcription of the audio file.
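As a minimal loading sketch (the repo id, split, and field names are taken from this card; adjust if they differ in your environment):
```python
from datasets import load_dataset

# Repo id, split and field names as described in this card
dataset = load_dataset("bond005/sberdevices_golos_10h_crowd", split="test")

sample = dataset[0]             # query the sample index first, then the "audio" column
audio = sample["audio"]         # decoded on access: {"path", "array", "sampling_rate"}
print(audio["sampling_rate"])   # 16 kHz for Golos
print(sample["transcription"])  # reference transcription of the utterance
```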
### Data Splits
This dataset is a simpler version of the original Golos:
* it includes the crowd domain only (without any sound from the farfield domain);
* validation split is built on the 1-hour training subset;
* training split corresponds to the 10-hour training subset without sounds from the 1-hour training subset;
* test split is a full original test split.
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
All recorded audio files were manually annotated on the crowd-sourcing platform.
#### Who are the annotators?
### Personal and Sensitive Information
The dataset consists of recordings from people who have donated their voices. You agree not to attempt to determine the identity of speakers in this dataset.
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
The dataset was initially created by Alexander Denisenko, Angelina Kovalenko, Fedor Minkin, and Nikolay Karpov.
### Licensing Information
Public license with attribution and conditions reserved
### Contributions
Thanks to @bond005 for adding this dataset.
| [
"### Dataset Summary\n\n\nSberdevices Golos is a corpus of approximately 1200 hours of 16kHz Russian speech from crowd (reading speech) and farfield (communication with smart devices) domains, prepared by SberDevices Team (Alexander Denisenko, Angelina Kovalenko, Fedor Minkin, and Nikolay Karpov). The data is derived from the crowd-sourcing platform, and has been manually annotated.\n\n\nAuthors divide all dataset into train and test subsets. The training subset includes approximately 1000 hours. For experiments with a limited number of records, authors identified training subsets of shorter length: 100 hours, 10 hours, 1 hour, 10 minutes.\n\n\nThis dataset is a simpler version of the above mentioned Golos:\n\n\n* it includes the crowd domain only (without any sound from the farfield domain);\n* validation split is built on the 1-hour training subset;\n* training split corresponds to the 10-hour training subset without sounds from the 1-hour training subset;\n* test split is a full original test split.",
"### Supported Tasks and Leaderboards\n\n\n* 'automatic-speech-recognition': The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active Hugging Face leaderboard which can be found at URL The leaderboard ranks models uploaded to the Hub based on their WER.",
"### Languages\n\n\nThe audio is in Russian.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA typical data point comprises the audio data, usually called 'audio' and its transcription, called 'transcription'. Any additional information about the speaker and the passage which contains the transcription is not provided.",
"### Data Fields\n\n\n* audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling\\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n* transcription: the transcription of the audio file.",
"### Data Splits\n\n\nThis dataset is a simpler version of the original Golos:\n\n\n* it includes the crowd domain only (without any sound from the farfield domain);\n* validation split is built on the 1-hour training subset;\n* training split corresponds to the 10-hour training subset without sounds from the 1-hour training subset;\n* test split is a full original test split.\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process\n\n\nAll recorded audio files were manually annotated on the crowd-sourcing platform.",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nThe dataset consists of people who have donated their voice. You agree to not attempt to determine the identity of speakers in this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nThe dataset was initially created by Alexander Denisenko, Angelina Kovalenko, Fedor Minkin, and Nikolay Karpov.",
"### Licensing Information\n\n\nPublic license with attribution and conditions reserved",
"### Contributions\n\n\nThanks to @bond005 for adding this dataset."
] | [
"TAGS\n#task_categories-automatic-speech-recognition #task_categories-audio-classification #annotations_creators-expert-generated #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100k #source_datasets-extended #language-Russian #license-other #arxiv-2106.10161 #region-us \n",
"### Dataset Summary\n\n\nSberdevices Golos is a corpus of approximately 1200 hours of 16kHz Russian speech from crowd (reading speech) and farfield (communication with smart devices) domains, prepared by SberDevices Team (Alexander Denisenko, Angelina Kovalenko, Fedor Minkin, and Nikolay Karpov). The data is derived from the crowd-sourcing platform, and has been manually annotated.\n\n\nAuthors divide all dataset into train and test subsets. The training subset includes approximately 1000 hours. For experiments with a limited number of records, authors identified training subsets of shorter length: 100 hours, 10 hours, 1 hour, 10 minutes.\n\n\nThis dataset is a simpler version of the above mentioned Golos:\n\n\n* it includes the crowd domain only (without any sound from the farfield domain);\n* validation split is built on the 1-hour training subset;\n* training split corresponds to the 10-hour training subset without sounds from the 1-hour training subset;\n* test split is a full original test split.",
"### Supported Tasks and Leaderboards\n\n\n* 'automatic-speech-recognition': The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active Hugging Face leaderboard which can be found at URL The leaderboard ranks models uploaded to the Hub based on their WER.",
"### Languages\n\n\nThe audio is in Russian.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA typical data point comprises the audio data, usually called 'audio' and its transcription, called 'transcription'. Any additional information about the speaker and the passage which contains the transcription is not provided.",
"### Data Fields\n\n\n* audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling\\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n* transcription: the transcription of the audio file.",
"### Data Splits\n\n\nThis dataset is a simpler version of the original Golos:\n\n\n* it includes the crowd domain only (without any sound from the farfield domain);\n* validation split is built on the 1-hour training subset;\n* training split corresponds to the 10-hour training subset without sounds from the 1-hour training subset;\n* test split is a full original test split.\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process\n\n\nAll recorded audio files were manually annotated on the crowd-sourcing platform.",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nThe dataset consists of people who have donated their voice. You agree to not attempt to determine the identity of speakers in this dataset.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nThe dataset was initially created by Alexander Denisenko, Angelina Kovalenko, Fedor Minkin, and Nikolay Karpov.",
"### Licensing Information\n\n\nPublic license with attribution and conditions reserved",
"### Contributions\n\n\nThanks to @bond005 for adding this dataset."
] |
fd04a127b3d6801afbe4ba38b66c98d0de647e01 |
# Winter Style Embedding / Textual Inversion
## Usage
To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"art by winter_style"```
If it is too strong, just add [] around it.
Trained until 10000 steps
I added a version trained for 7.5k steps in the files as well. If you want to use that version, remove the ```"-7500"``` from the file name and replace the 10k-steps version in your folder
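If you prefer the diffusers library over the webui, a rough equivalent is sketched below; the base model, file name, and token are assumptions — match them to your setup and a recent diffusers version:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the downloaded embedding; the token must match the word used in prompts
# (file name and token here are assumptions, not shipped defaults)
pipe.load_textual_inversion("./winter_style.pt", token="winter_style")

image = pipe("art by winter_style, a snowy mountain village").images[0]
image.save("winter_style_sample.png")
```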
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/oVqfSZ2.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/p0cslGJ.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/LJmGvsc.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/T4I0gFQ.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/hzfmsA8.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | Nerfgun3/winter_style | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"region:us"
] | 2022-10-26T10:28:44+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false} | 2022-10-26T19:45:11+00:00 | [] | [
"en"
] | TAGS
#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us
| Winter Style Embedding / Textual Inversion
==========================================
Usage
-----
To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt:
If it is too strong, just add [] around it.
Trained until 10000 steps
I added a version trained for 7.5k steps in the files as well. If you want to use that version, remove the "-7500" from the file name and replace the 10k-steps version in your folder
Have fun :)
Example Pictures
----------------
License
-------
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license here
| [] | [
"TAGS\n#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us \n"
] |
209818b23654f0057dbac7bb86b6bba4c95d82d1 |
# KPBiomed, A Large-Scale Dataset for keyphrase generation
## About
This dataset is made of 5.6 million abstracts with author assigned keyphrases.
Details about the dataset can be found in the original paper:
Maël Houbre, Florian Boudin and Béatrice Daille. 2022. [A Large-Scale Dataset for Biomedical Keyphrase Generation](https://arxiv.org/abs/2211.12124). In Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI 2022).
Reference (author-assigned) keyphrases are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in the following paper:
- Florian Boudin and Ygor Gallina. 2021.
[Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness](https://aclanthology.org/2021.naacl-main.330/).
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics.
Text pre-processing (tokenization) is carried out using spacy (en_core_web_sm model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token). Stemming (Porter's stemmer implementation provided in nltk) is applied before reference keyphrases are matched against the source text.
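As a rough sketch of how a keyphrase can be tested for the Present category under this matching scheme (simplified here with regex tokenization instead of spacy, but using the same Porter stemming from nltk):
```python
import re
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()

def stems(text):
    # Simplified tokenizer; the dataset itself uses spacy (en_core_web_sm)
    # with a rule that keeps hyphenated words like "graph-based" whole.
    return [stemmer.stem(tok) for tok in re.findall(r"\w[\w-]*", text.lower())]

def is_present(keyphrase, source_text):
    # "Present" means the stemmed keyphrase occurs contiguously in the
    # stemmed source text (title + abstract).
    kp, src = stems(keyphrase), stems(source_text)
    return any(src[i:i + len(kp)] == kp for i in range(len(src) - len(kp) + 1))

print(is_present("keyphrase generation", "A large-scale dataset for keyphrase generation."))  # True
```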
## Content
The details of the dataset are in the table below:
| Split | # documents | # keyphrases by document (average) | % Present | % Reordered | % Mixed | % Unseen |
| :----------- | ----------: | ---------------------------------: | --------: | ----------: | ------: | -------: |
| Train small | 500k | 5.24 | 66.31 | 7.16 | 12.60 | 13.93 |
| Train medium | 2M | 5.24 | 66.30 | 7.18 | 12.57 | 13.95 |
| Train large | 5.6M | 5.23 | 66.32 | 7.18 | 12.55 | 13.95 |
| Validation | 20k | 5.25 | 66.44 | 7.07 | 12.45 | 14.05 |
| Test | 20k | 5.22 | 66.59 | 7.22 | 12.44 | 13.75 |
The following data fields are available:
- **id**: unique identifier of the document.
- **title**: title of the document.
- **abstract**: abstract of the document.
- **keyphrases**: list of reference keyphrases.
- **mesh terms**: list of indexer-assigned MeSH terms if available (around 68% of the articles)
- **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for reference keyphrases.
- **authors**: list of the article's authors
- **year**: publication year
**NB**: The present keyphrases (represented by the "P" label in the PRMU column) are sorted by their order of appearance in the text (title + abstract).
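A minimal loading sketch; `"small"` is an assumed config name for the 500k training subset — check the repository's configs if it differs:
```python
from datasets import load_dataset

# "small" assumed to select the 500k training subset described above
dataset = load_dataset("taln-ls2n/kpbiomed", "small")

doc = dataset["train"][0]
print(doc["title"])
print(doc["keyphrases"])  # reference keyphrases
print(doc["prmu"])        # one P/R/M/U label per reference keyphrase
```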
| taln-ls2n/kpbiomed | [
"task_categories:text-generation",
"annotations_creators:unknown",
"language_creators:unknown",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-nc-4.0",
"arxiv:2211.12124",
"region:us"
] | 2022-10-26T12:41:01+00:00 | {"annotations_creators": ["unknown"], "language_creators": ["unknown"], "language": ["en"], "license": ["cc-by-nc-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "task_categories": ["text-mining", "text-generation"], "task_ids": ["keyphrase-generation", "keyphrase-extraction"], "pretty_name": "KP-Biomed"} | 2022-12-01T10:52:09+00:00 | [
"2211.12124"
] | [
"en"
] | TAGS
#task_categories-text-generation #annotations_creators-unknown #language_creators-unknown #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-cc-by-nc-4.0 #arxiv-2211.12124 #region-us
| KPBiomed, A Large-Scale Dataset for keyphrase generation
========================================================
About
-----
This dataset is made of 5.6 million abstracts with author assigned keyphrases.
Details about the dataset can be found in the original paper:
Maël Houbre, Florian Boudin and Béatrice Daille. 2022. A Large-Scale Dataset for Biomedical Keyphrase Generation. In Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI 2022).
Reference (author-assigned) keyphrases are also categorized under the PRMU (Present-Reordered-Mixed-Unseen) scheme as proposed in the following paper:
* Florian Boudin and Ygor Gallina. 2021.
Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness.
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics.
Text pre-processing (tokenization) is carried out using spacy (en\_core\_web\_sm model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token). Stemming (Porter's stemmer implementation provided in nltk) is applied before reference keyphrases are matched against the source text.
Content
-------
The details of the dataset are in the table below:
The following data fields are available:
* id: unique identifier of the document.
* title: title of the document.
* abstract: abstract of the document.
* keyphrases: list of reference keyphrases.
* mesh terms: list of indexer assigned MeSH terms if available (around 68% of the articles)
* prmu: list of Present-Reordered-Mixed-Unseen categories for reference keyphrases.
* authors: list of the article's authors
* year: publication year
NB: The present keyphrases (represented by the "P" label in the PRMU column) are sorted by their order of appearance in the text (title + abstract).
| [] | [
"TAGS\n#task_categories-text-generation #annotations_creators-unknown #language_creators-unknown #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-cc-by-nc-4.0 #arxiv-2211.12124 #region-us \n"
] |
13f26365766f8f61eea21bf45d65936aaaa70db8 |
# Brush Style Embedding / Textual Inversion
## Usage
To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"art by brush_style"```
If it is too strong, just add [] around it.
Trained until 10000 steps
I added a version trained for 7.5k steps in the files as well. If you want to use that version, remove the ```"-7500"``` from the file name and replace the 10k-steps version in your folder
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/Mp2F6GR.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/a2Cmqb4.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/YwSafu4.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/fCFSIs5.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/S8v6sXG.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | Nerfgun3/brush_style | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"region:us"
] | 2022-10-26T15:36:36+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false} | 2022-10-29T09:50:13+00:00 | [] | [
"en"
] | TAGS
#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us
| Brush Style Embedding / Textual Inversion
=========================================
Usage
-----
To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt:
If it is too strong, just add [] around it.
Trained until 10000 steps
I added a version trained for 7.5k steps in the files as well. If you want to use that version, remove the "-7500" from the file name and replace the 10k-steps version in your folder
Have fun :)
Example Pictures
----------------
License
-------
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license here
| [] | [
"TAGS\n#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us \n"
] |
4d83c979660e9d000bcd08a9b91093e8dca3eff5 | # Dataset Card for "img-256-shinkai-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | woctordho/img-256-shinkai-2 | [
"region:us"
] | 2022-10-26T16:40:48+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 11515086349.93, "num_examples": 811410}], "download_size": 11660877157, "dataset_size": 11515086349.93}} | 2022-11-19T23:35:01+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "img-256-shinkai-2"
More Information needed | [
"# Dataset Card for \"img-256-shinkai-2\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"img-256-shinkai-2\"\n\nMore Information needed"
] |
61f4efc23daf87b98918ca90c359e9bb8f92a900 | How to claim damages after a power outage: the compensation the company must pay after cutting off the supply | Aserehe6546545/Ghgfgg | [
"region:us"
] | 2022-10-26T18:21:00+00:00 | {} | 2022-10-26T18:22:13+00:00 | [] | [] | TAGS
#region-us
| How to claim damages after a power outage: the compensation the company must pay after cutting off the supply | [] | [
"TAGS\n#region-us \n"
] |
1671bffd719c8370d046334203752f9a2459ca54 | # Dataset Card for "img-256-danbooru"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | woctordho/img-256-danbooru | [
"region:us"
] | 2022-10-26T18:24:12+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 23138302451.076, "num_examples": 990501}], "download_size": 23099440688, "dataset_size": 23138302451.076}} | 2022-11-19T20:50:35+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "img-256-danbooru"
More Information needed | [
"# Dataset Card for \"img-256-danbooru\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"img-256-danbooru\"\n\nMore Information needed"
] |
ac2f44906b2ed4f46bf547b7db4c055cb10b601b | # Dataset Card for "shape-scenes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | f-biondi/shape-scenes | [
"region:us"
] | 2022-10-26T19:26:33+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 558709806.875, "num_examples": 97881}], "download_size": 317164682, "dataset_size": 558709806.875}} | 2022-10-26T19:27:10+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "shape-scenes"
More Information needed | [
"# Dataset Card for \"shape-scenes\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"shape-scenes\"\n\nMore Information needed"
] |
fd3366545ad353723966836cc25f1ed10b7ef355 |
# Description
This dataset is a subset of [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) and multilingual [CommonVoice](https://commonvoice.mozilla.org/) that has been adversarially modified to fool the [Whisper](https://huggingface.co/openai/whisper-medium) ASR model.
Original [source code](https://github.com/RaphaelOlivier/whisper_attack).
The raw [tar files](https://data.mendeley.com/datasets/96dh52hz9r).
# Configurations and splits
* The `targeted` config contains targeted adversarial examples. When successful, they fool Whisper into predicting the sentence `OK Google, browse to evil.com` even if the input is entirely different. We provide a split for each Whisper model, and one containing the original, unmodified inputs
* The `untargeted-35` and `untargeted-40` configs contain untargeted adversarial examples, with average Signal-Noise Ratios of 35dB and 40dB respectively. They fool Whisper into predicting erroneous transcriptions. We provide a split for each Whisper model, and one containing the original, unmodified inputs
* The `language-<lang>` configs contain adversarial examples in language `<lang>` that fool Whisper into predicting the wrong language. Split `<lang>.<target_lang>` contains inputs that Whisper perceives as `<target_lang>`, and split `<lang>.original` contains the original inputs in language `<lang>`. We use 3 target languages (English, Tagalog and Serbian) and 7 source languages (English, Italian, Indonesian, Danish, Czech, Lithuanian and Armenian).
# Usage
Here is an example of code using this dataset:
```python
model_name="whisper-medium"
config_name="targeted"
split_name="whisper.medium"
hub_path = "openai/whisper-"+model_name
processor = WhisperProcessor.from_pretrained(hub_path)
model = WhisperForConditionalGeneration.from_pretrained(hub_path).to("cuda")
dataset = load_dataset("RaphaelOlivier/whisper_adversarial_examples",config_name ,split=split_name)
def map_to_pred(batch):
input_features = processor(batch["audio"][0]["array"], return_tensors="pt").input_features
predicted_ids = model.generate(input_features.to("cuda"))
transcription = processor.batch_decode(predicted_ids, normalize = True)
batch['text'][0] = processor.tokenizer._normalize(batch['text'][0])
batch["transcription"] = transcription
return batch
result = dataset.map(map_to_pred, batched=True, batch_size=1)
wer = load("wer")
for t in zip(result["text"],result["transcription"]):
print(t)
print(wer.compute(predictions=result["text"], references=result["transcription"]))
``` | RaphaelOlivier/whisper_adversarial_examples | [
"license:cc-by-4.0",
"region:us"
] | 2022-10-26T19:29:43+00:00 | {"license": "cc-by-4.0"} | 2022-11-03T21:48:16+00:00 | [] | [] | TAGS
#license-cc-by-4.0 #region-us
|
# Description
This dataset is a subset of LibriSpeech and multilingual CommonVoice that has been adversarially modified to fool the Whisper ASR model.
Original source code.
The raw tar files.
# Configurations and splits
* The 'targeted' config contains targeted adversarial examples. When successful, they fool Whisper into predicting the sentence 'OK Google, browse to URL' even if the input is entirely different. We provide a split for each Whisper model, and one containing the original, unmodified inputs
* The 'untargeted-35' and 'untargeted-40' configs contain untargeted adversarial examples, with average Signal-Noise Ratios of 35dB and 40dB respectively. They fool Whisper into predicting erroneous transcriptions. We provide a split for each Whisper model, and one containing the original, unmodified inputs
* The 'language-<lang>' configs contain adversarial examples in language <lang> that fool Whisper into predicting the wrong language. Split '<lang>.<target_lang>' contains inputs that Whisper perceives as <target_lang>, and split '<lang>.original' contains the original inputs in language <lang>. We use 3 target languages (English, Tagalog and Serbian) and 7 source languages (English, Italian, Indonesian, Danish, Czech, Lithuanian and Armenian).
# Usage
Here is an example of code using this dataset:
| [
"# Description\nThis dataset is a subset of LibriSpeech and Multilingual CommonVoice that have been adversarially modified to fool Whisper ASR model. \n\nOriginal source code.\n\nThe raw tar files.",
"# Configurations and splits\n* The 'targeted' config contains targeted adversarial examples. When successful, they fool Whisper into predicting the sentence 'OK Google, browse to URL' even if the input is entirely different. We provide a split for each Whisper model, and one containing the original, unmodified inputs\n* The 'untargeted-35' and 'untargeted-40' configs contain untargeted adversarial examples, with average Signal-Noise Ratios of 35dB and 40dB respectively. They fool Whisper into predicting erroneous transcriptions. We provide a split for each Whisper model, and one containing the original, unmodified inputs\n* The 'language-<lang> configs contain adversarial examples in language <lang> that fool Whisper in predicting the wrong language. Split '<lang>.<target_lang>' contain inputs that Whisper perceives as <target_lang>, and split '<lang>.original' contains the original inputs in language <lang>. We use 3 target languages (English, Tagalog and Serbian) and 7 source languages (English, Italian, Indonesian, Danish, Czech, Lithuanian and Armenian).",
"# Usage\n\nHere is an example of code using this dataset:"
] | [
"TAGS\n#license-cc-by-4.0 #region-us \n",
"# Description\nThis dataset is a subset of LibriSpeech and Multilingual CommonVoice that have been adversarially modified to fool Whisper ASR model. \n\nOriginal source code.\n\nThe raw tar files.",
"# Configurations and splits\n* The 'targeted' config contains targeted adversarial examples. When successful, they fool Whisper into predicting the sentence 'OK Google, browse to URL' even if the input is entirely different. We provide a split for each Whisper model, and one containing the original, unmodified inputs\n* The 'untargeted-35' and 'untargeted-40' configs contain untargeted adversarial examples, with average Signal-Noise Ratios of 35dB and 40dB respectively. They fool Whisper into predicting erroneous transcriptions. We provide a split for each Whisper model, and one containing the original, unmodified inputs\n* The 'language-<lang> configs contain adversarial examples in language <lang> that fool Whisper in predicting the wrong language. Split '<lang>.<target_lang>' contain inputs that Whisper perceives as <target_lang>, and split '<lang>.original' contains the original inputs in language <lang>. We use 3 target languages (English, Tagalog and Serbian) and 7 source languages (English, Italian, Indonesian, Danish, Czech, Lithuanian and Armenian).",
"# Usage\n\nHere is an example of code using this dataset:"
] |
3446dd8617356de7b1980ebfc0a50b946eb21de3 | # Dataset Card for "img-256-photo-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | woctordho/img-256-photo-2 | [
"region:us"
] | 2022-10-26T20:08:43+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 12133208417.44, "num_examples": 996698}], "download_size": 11930597168, "dataset_size": 12133208417.44}} | 2022-11-20T02:56:16+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "img-256-photo-2"
More Information needed | [
"# Dataset Card for \"img-256-photo-2\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"img-256-photo-2\"\n\nMore Information needed"
] |
ce79dcfb8e000cbac80111f73c64d368997230ad | # Dataset Card for "codeparrot-valid-more-filtering-debug"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | kejian/codeparrot-valid-more-filtering-debug | [
"region:us"
] | 2022-10-26T20:21:58+00:00 | {"dataset_info": {"features": [{"name": "repo_name", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "copies", "dtype": "string"}, {"name": "size", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "hash", "dtype": "int64"}, {"name": "line_mean", "dtype": "float64"}, {"name": "line_max", "dtype": "int64"}, {"name": "alpha_frac", "dtype": "float64"}, {"name": "autogenerated", "dtype": "bool"}, {"name": "ratio", "dtype": "float64"}, {"name": "config_test", "dtype": "bool"}, {"name": "has_no_keywords", "dtype": "bool"}, {"name": "few_assignments", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 957026, "num_examples": 100}], "download_size": 357047, "dataset_size": 957026}} | 2022-10-26T20:22:00+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "codeparrot-valid-more-filtering-debug"
More Information needed | [
"# Dataset Card for \"codeparrot-valid-more-filtering-debug\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"codeparrot-valid-more-filtering-debug\"\n\nMore Information needed"
] |
7cdae06c98ca54f8892daf6a80efb4a9d8a2abd0 |
# MiCRO: Multi-interest Candidate Retrieval Online
[](http://makeapullrequest.com)
[](https://arxiv.org/abs/2210.16271)
This repo contains the TwitterFaveGraph dataset from our paper [MiCRO: Multi-interest Candidate Retrieval Online](https://arxiv.org/abs/2210.16271). <br />
[[PDF]](https://arxiv.org/pdf/2210.16271.pdf)
[[HuggingFace Datasets]](https://huggingface.co/Twitter)
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.
## TwitterFaveGraph
TwitterFaveGraph is a bipartite directed graph of user nodes to Tweet nodes where an edge represents a "fave" engagement. Each edge is binned into predetermined time chunks which are assigned as ordinals. These ordinals are contiguous and respect time ordering. In total TwitterFaveGraph has 6.7M user nodes, 13M Tweet nodes, and 283M edges. The maximum degree for users is 100 and the minimum degree for users is 1. The maximum
degree for Tweets is 280k and the minimum degree for Tweets is 5.
The data format is displayed below.
| user_index | tweet_index | time_chunk |
| ------------- | ------------- | ---- |
| 1 | 2 | 1 |
| 2 | 1 | 1 |
| 3 | 3 | 2 |
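A small sketch of working with this edge list; the file name and delimiter are assumptions, while the column names follow the table above:
```python
import pandas as pd

# File name/delimiter are assumptions; columns follow the table above
edges = pd.read_csv("twitter_fave_graph.tsv", sep="\t",
                    names=["user_index", "tweet_index", "time_chunk"])

# Time chunk ordinals respect time ordering, so a temporal holdout is a filter:
last = edges["time_chunk"].max()
train, test = edges[edges.time_chunk < last], edges[edges.time_chunk == last]
print(len(train), len(test))
```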
## Citation
If you use TwitterFaveGraph in your work, please cite the following:
```bib
@article{portman2022micro,
title={MiCRO: Multi-interest Candidate Retrieval Online},
author={Portman, Frank and Ragain, Stephen and El-Kishky, Ahmed},
journal={arXiv preprint arXiv:2210.16271},
year={2022}
}
``` | Twitter/TwitterFaveGraph | [
"license:cc-by-4.0",
"arxiv:2210.16271",
"region:us"
] | 2022-10-26T23:44:43+00:00 | {"license": "cc-by-4.0"} | 2022-10-31T23:58:49+00:00 | [
"2210.16271"
] | [] | TAGS
#license-cc-by-4.0 #arxiv-2210.16271 #region-us
| MiCRO: Multi-interest Candidate Retrieval Online
================================================

](http://makeapullrequest.com)
[](https://arxiv.org/pdf/2205.06205.pdf)
This repo contains the TwitterFollowGraph dataset from our paper [kNN-Embed: Locally Smoothed Embedding Mixtures For Multi-interest Candidate Retrieval](https://arxiv.org/pdf/2205.06205.pdf). <br />
[[PDF]](https://arxiv.org/pdf/2205.06205.pdf)
[[HuggingFace Datasets]](https://huggingface.co/Twitter)
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.
## TwitterFollowGraph
TwitterFollowGraph is a bipartite directed graph of user (consumer) nodes to author (producer) nodes where an edge represents a user "following" an author engagement. Each edge is binned into predetermined time chunks which are denoted with ordinals. These ordinals are contiguous and respect time ordering of engagements. In total TwitterFollowGraph has 261M edges and 15.5M vertices, with a max-degree of 900K and a min-degree of 5.
The data format is displayed below.
| user_index | author_index | time_chunk |
| ------------- | ------------- | ---- |
| 1 | 2 | 1 |
| 2 | 1 | 2 |
| 3 | 3 | 2 |
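As with TwitterFaveGraph, the edge list is straightforward to inspect with pandas; the file name and delimiter are assumptions, and the columns follow the table above:
```python
import pandas as pd

# Assumed file name/format; columns follow the table above
edges = pd.read_csv("twitter_follow_graph.tsv", sep="\t",
                    names=["user_index", "author_index", "time_chunk"])

# Sanity-check the quoted bounds (min-degree 5, max-degree 900K)
degrees = edges.groupby("author_index").size()
print(degrees.min(), degrees.max())
```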
## Citation
If you use TwitterFollowGraph in your work, please cite the following:
```bib
@article{el2022knn,
title={kNN-Embed: Locally Smoothed Embedding Mixtures For Multi-interest Candidate Retrieval},
author={El-Kishky, Ahmed and Markovich, Thomas and Leung, Kenny and Portman, Frank and Haghighi, Aria and Xiao, Ying},
journal={arXiv preprint arXiv:2205.06205},
year={2022}
}
``` | Twitter/TwitterFollowGraph | [
"license:cc-by-4.0",
"arxiv:2205.06205",
"region:us"
] | 2022-10-27T00:01:25+00:00 | {"license": "cc-by-4.0"} | 2022-10-31T23:55:05+00:00 | [
"2205.06205"
] | [] | TAGS
#license-cc-by-4.0 #arxiv-2205.06205 #region-us
| kNN-Embed: Locally Smoothed Embedding Mixtures For Multi-interest Candidate Retrieval
=====================================================================================

 nodes to author (producer) nodes where an edge represents a user "following" an author engagement. Each edge is binned into predetermined time chunks which are denoted with ordinals. These ordinals are contiguous and respect time ordering of engagements. In total TwitterFollowGraph has 261𝑀 edges and 15.5𝑀 vertices, with a max-degree of 900𝐾 and a min-degree of 5.
The data format is displayed below.
user\_index: 1, author\_index: 2, time\_chunk: 1
user\_index: 2, author\_index: 1, time\_chunk: 2
user\_index: 3, author\_index: 3, time\_chunk: 2
If you use TwitterFollowGraph in your work, please cite the following:
| [] | [
"TAGS\n#license-cc-by-4.0 #arxiv-2205.06205 #region-us \n"
] |
5b1dd4215db57c070673a560981545a3310ed9ee | # Overview
This is a dataset I am using for my thesis project Myaamia Translator.
<p style="color: darkred">This is not meant to be used for production yet</p>
<i>I just want to try out a few things.</i> | bishalbaaniya/myaamia_english | [
"region:us"
] | 2022-10-27T00:32:57+00:00 | {} | 2022-10-27T00:54:46+00:00 | [] | [] | TAGS
#region-us
| # Overview
This is a dataset I am using for my thesis project Myaamia Translator.
<p style="color: darkred">This is not meant to be used for production yet</p>
<i>I just want to try out a few things.</i> | [] | [
"TAGS\n#region-us \n"
] |
98c3bf49ac85d8b9fd593a22a414322cbd9ecb36 |
# League Style Embedding / Textual Inversion
## Usage
To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"art by league_style-1000-[number of steps for the version you chose]"```
For example, if you chose the 11.5k-steps version, it would be ```"art by league_style-1000-11500"```
If it is too strong, just add [] around it.
The version I generally recommend is the one trained for 11.5k steps; however, I added versions trained for 4k and 12k steps in the files as well. The 4k-steps version tends towards making nice glasses, and the 12k-steps version seems to be better at poses rather than closeups.
If you'd like to support the amazing artists whose artwork contributed to this embedding's training, I'd highly recommend you check out [Alex Flores](https://www.artstation.com/alexflores), [Chengwei Pan](https://www.artstation.com/pan), [Horace Hsu](https://www.artstation.com/hozure), [Jem Flores](https://www.artstation.com/jemflores), [SIXMOREVODKA STUDIO](https://www.artstation.com/sixmorevodka), and [West Studio](https://www.artstation.com/weststudio).
Have fun :)
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/CP3dcox.png width=100% height=100%/></td>
</tr>
<tr>
<td><img src=https://i.imgur.com/3uJpYO9.png width=100% height=100%/></td>
</tr>
<tr>
<td><img src=https://i.imgur.com/3mi25aA.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | grullborg/league_style | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"region:us"
] | 2022-10-27T00:53:50+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false} | 2022-10-27T01:27:20+00:00 | [] | [
"en"
] | TAGS
#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us
| League Style Embedding / Textual Inversion
==========================================
Usage
-----
To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt:
For example, if you chose the 11.5k-steps version, it would be
If it is too strong, just add [] around it.
The version I generally recommend is the one trained for 11.5k steps; however, I added versions trained for 4k and 12k steps in the files as well. The 4k-steps version tends towards making nice glasses, and the 12k-steps version seems to be better at poses rather than closeups.
If you'd like to support the amazing artists whose artwork contributed to this embedding's training, I'd highly recommend you check out Alex Flores, Chengwei Pan, Horace Hsu, Jem Flores, SIXMOREVODKA STUDIO, and West Studio.
Have fun :)
Example Pictures
----------------
License
-------
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license here
| [] | [
"TAGS\n#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us \n"
] |
52d78f738b103421956771b5ae8a3c5fd506a8c5 |  | filevich/t1k22 | [
"region:us"
] | 2022-10-27T01:20:22+00:00 | {} | 2024-02-01T22:08:15+00:00 | [] | [] | TAGS
#region-us
| !Screenshot from 2022-06-20 URL | [] | [
"TAGS\n#region-us \n"
] |