The records below come from a table whose column summary (name, type, min–max value length) is:

| Column | Type | Length range |
|---|---|---|
| sha | string | 40–40 |
| text | string | 1–13.4M |
| id | string | 2–117 |
| tags | sequence | 1–7.91k |
| created_at | string | 25–25 |
| metadata | string | 2–875k |
| last_modified | string | 25–25 |
| arxiv | sequence | 0–25 |
| languages | sequence | 0–7.91k |
| tags_str | string | 17–159k |
| text_str | string | 1–447k |
| text_lists | sequence | 0–352 |
| processed_texts | sequence | 1–353 |
sha: 5b631720ed23ce3367f2326eee0e4663e4274929
id: autoevaluate/autoeval-eval-wmt19-de-en-04c9e1-2082967144
tags: autotrain, evaluation, region:us
created_at: 2022-11-14T09:37:55+00:00
last_modified: 2022-11-14T09:40:55+00:00
arxiv: []
languages: []
metadata: {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["wmt19"], "eval_info": {"task": "translation", "model": "facebook/wmt19-en-de", "metrics": [], "dataset_name": "wmt19", "dataset_config": "de-en", "dataset_split": "validation", "col_mapping": {"source": "translation.en", "target": "translation.de"}}}

# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Translation
* Model: facebook/wmt19-en-de
* Dataset: wmt19
* Config: de-en
* Split: validation

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@WillHeld](https://huggingface.co/WillHeld) for evaluating this model.
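Each record's `metadata` column stores the card's front matter as a JSON string. A minimal sketch of recovering the evaluation settings from one such string — the value below is copied from the record above, and the `eval_info` keys are taken as they appear in this dataset:

```python
import json

# The `metadata` value of the first record, verbatim (a JSON string).
raw = (
    '{"type": "predictions", "tags": ["autotrain", "evaluation"], '
    '"datasets": ["wmt19"], "eval_info": {"task": "translation", '
    '"model": "facebook/wmt19-en-de", "metrics": [], '
    '"dataset_name": "wmt19", "dataset_config": "de-en", '
    '"dataset_split": "validation", '
    '"col_mapping": {"source": "translation.en", "target": "translation.de"}}}'
)

meta = json.loads(raw)          # parse the JSON string into a dict
info = meta["eval_info"]        # the nested evaluation configuration

print(info["task"])                   # translation
print(info["model"])                  # facebook/wmt19-en-de
print(info["col_mapping"]["source"])  # translation.en
```

The same pattern applies to every record here; only the `eval_info` values (task, model, dataset, split, column mapping) differ.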
---

sha: f32c7211c0ac30a750b3fc382a8a3bf880efd44c
id: autoevaluate/autoeval-staging-eval-project-02148524-0081-4ca2-963d-7e44c726ec75-1311
tags: autotrain, evaluation, region:us
created_at: 2022-11-14T09:40:00+00:00
last_modified: 2022-11-14T09:40:38+00:00
arxiv: []
languages: []
metadata: {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "autoevaluate/binary-classification", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}}

# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
---

sha: 748f9dc5044e188c60bbe9aadd91b61b9e032c30
id: autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_v5-mathemak-b6a817-2053667122
tags: autotrain, evaluation, region:us
created_at: 2022-11-14T09:42:17+00:00
last_modified: 2022-11-14T09:46:08+00:00
arxiv: []
languages: []
metadata: {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_v5"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-125m_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_v5", "dataset_config": "mathemakitten--winobias_antistereotype_test_v5", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}

# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-125m_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model.
---

sha: 40c04a6a5193bca2029e35a7a50e945e69a55aea
id: autoevaluate/autoeval-staging-eval-project-0d414f0c-bce8-44f6-9c83-f356bfaf679d-1412
tags: autotrain, evaluation, region:us
created_at: 2022-11-14T09:42:43+00:00
last_modified: 2022-11-14T09:43:19+00:00
arxiv: []
languages: []
metadata: {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "autoevaluate/binary-classification", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}}

# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
---

sha: 07aecb1e8d8a44720b52a7c8a6cf1e905ad2acce
id: autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_v5-mathemak-b6a817-2053667123
tags: autotrain, evaluation, region:us
created_at: 2022-11-14T09:54:01+00:00
last_modified: 2022-11-14T10:06:28+00:00
arxiv: []
languages: []
metadata: {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_v5"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-1.3b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_v5", "dataset_config": "mathemakitten--winobias_antistereotype_test_v5", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}

# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-1.3b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model.
---

sha: d828e884d8d6d9c8e33da4b2e66c852a38df67a2
id: autoevaluate/autoeval-staging-eval-project-273d91c9-dc40-4345-bb99-8afa33082ce8-1513
tags: autotrain, evaluation, region:us
created_at: 2022-11-14T09:54:07+00:00
last_modified: 2022-11-14T09:54:45+00:00
arxiv: []
languages: []
metadata: {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "autoevaluate/binary-classification", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}}

# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
---

sha: f81fd31a5fae77bb6fee6de66ccc0db474c2049f
id: autoevaluate/autoeval-eval-WillHeld__stereoset_zero-WillHeld__stereoset_zero-7a6673-2074067131
tags: autotrain, evaluation, region:us
created_at: 2022-11-14T09:57:48+00:00
last_modified: 2022-11-14T10:21:26+00:00
arxiv: []
languages: []
metadata: {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["WillHeld/stereoset_zero"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-560m", "metrics": [], "dataset_name": "WillHeld/stereoset_zero", "dataset_config": "WillHeld--stereoset_zero", "dataset_split": "train", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}

# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-560m
* Dataset: WillHeld/stereoset_zero
* Config: WillHeld--stereoset_zero
* Split: train

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@WillHeld](https://huggingface.co/WillHeld) for evaluating this model.
---

sha: 9d7c55692e372e87fe5a7d291e244bab84ff5a9e
id: autoevaluate/autoeval-staging-eval-project-add2aed1-25d6-4cd6-9646-ff8855a9d1a4-1614
tags: autotrain, evaluation, region:us
created_at: 2022-11-14T09:59:50+00:00
last_modified: 2022-11-14T10:00:28+00:00
arxiv: []
languages: []
metadata: {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "autoevaluate/binary-classification", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}}

# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
---

sha: 8dd7772dfee471c60cc36decc221d4b5b507091c
id: autoevaluate/autoeval-eval-WillHeld__stereoset_zero-WillHeld__stereoset_zero-7a6673-2074067132
tags: autotrain, evaluation, region:us
created_at: 2022-11-14T10:00:20+00:00
last_modified: 2022-11-14T11:59:20+00:00
arxiv: []
languages: []
metadata: {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["WillHeld/stereoset_zero"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-7b1", "metrics": [], "dataset_name": "WillHeld/stereoset_zero", "dataset_config": "WillHeld--stereoset_zero", "dataset_split": "train", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}

# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-7b1
* Dataset: WillHeld/stereoset_zero
* Config: WillHeld--stereoset_zero
* Split: train

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@WillHeld](https://huggingface.co/WillHeld) for evaluating this model.
---

sha: c55684216bee9eac1c9150f30d9926eb3825b0e6
id: autoevaluate/autoeval-eval-WillHeld__stereoset_zero-WillHeld__stereoset_zero-7a6673-2074067133
tags: autotrain, evaluation, region:us
created_at: 2022-11-14T10:15:23+00:00
last_modified: 2022-11-14T10:55:00+00:00
arxiv: []
languages: []
metadata: {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["WillHeld/stereoset_zero"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-3b", "metrics": [], "dataset_name": "WillHeld/stereoset_zero", "dataset_config": "WillHeld--stereoset_zero", "dataset_split": "train", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}

# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-3b
* Dataset: WillHeld/stereoset_zero
* Config: WillHeld--stereoset_zero
* Split: train

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@WillHeld](https://huggingface.co/WillHeld) for evaluating this model.
---

sha: 8334e7723b14d0e56beac90446aa22960af5a0c9
id: autoevaluate/autoeval-staging-eval-project-9a279865-5267-44c3-8be5-f8885af614f3-1715
tags: autotrain, evaluation, region:us
created_at: 2022-11-14T10:19:01+00:00
last_modified: 2022-11-14T10:19:38+00:00
arxiv: []
languages: []
metadata: {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "autoevaluate/binary-classification", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}}

# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
---

sha: 100d881c253a7d035636b6de0297248093f088df
id: autoevaluate/autoeval-eval-WillHeld__stereoset_zero-WillHeld__stereoset_zero-7a6673-2074067134
tags: autotrain, evaluation, region:us
created_at: 2022-11-14T10:26:25+00:00
last_modified: 2022-11-14T10:59:32+00:00
arxiv: []
languages: []
metadata: {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["WillHeld/stereoset_zero"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-1b7", "metrics": [], "dataset_name": "WillHeld/stereoset_zero", "dataset_config": "WillHeld--stereoset_zero", "dataset_split": "train", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}

# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b7
* Dataset: WillHeld/stereoset_zero
* Config: WillHeld--stereoset_zero
* Split: train

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@WillHeld](https://huggingface.co/WillHeld) for evaluating this model.
75f16c33ac974de771fb2bed632b0b098a1bc5a0 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b1
* Dataset: WillHeld/stereoset_zero
* Config: WillHeld--stereoset_zero
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@WillHeld](https://huggingface.co/WillHeld) for evaluating this model. | autoevaluate/autoeval-eval-WillHeld__stereoset_zero-WillHeld__stereoset_zero-7a6673-2074067135 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T10:29:41+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["WillHeld/stereoset_zero"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-1b1", "metrics": [], "dataset_name": "WillHeld/stereoset_zero", "dataset_config": "WillHeld--stereoset_zero", "dataset_split": "train", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T10:49:54+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b1
* Dataset: WillHeld/stereoset_zero
* Config: WillHeld--stereoset_zero
* Split: train
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @WillHeld for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-1b1\n* Dataset: WillHeld/stereoset_zero\n* Config: WillHeld--stereoset_zero\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @WillHeld for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-1b1\n* Dataset: WillHeld/stereoset_zero\n* Config: WillHeld--stereoset_zero\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @WillHeld for evaluating this model."
] |
5fe45167e722e5a3ebf13d083c49080e3edd65e8 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-7b1
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en-78963b-2087067145 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T10:31:38+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-7b1", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T13:41:15+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-7b1
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-7b1\n* Dataset: futin/guess\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-7b1\n* Dataset: futin/guess\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
1b8248affdb664ba0aa8e9d21ddcc61443f85f62 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-3b
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en-78963b-2087067146 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T10:31:38+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-3b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T11:51:01+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-3b
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-3b\n* Dataset: futin/guess\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-3b\n* Dataset: futin/guess\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
7b2a00c90ab7898d94ecca9300987379b1636fa7 |
DIODE dataset: https://diode-dataset.org/
Code to prepare the archive: TBA | sayakpaul/diode-subset-train | [
"license:mit",
"depth-estimation",
"region:us"
] | 2022-11-14T10:36:21+00:00 | {"license": "mit", "tags": ["depth-estimation"]} | 2022-11-15T06:32:49+00:00 | [] | [] | TAGS
#license-mit #depth-estimation #region-us
|
DIODE dataset: URL
Code to prepare the archive: TBA | [] | [
"TAGS\n#license-mit #depth-estimation #region-us \n"
] |
ebeea77810c9218ff8bde4129a4dec6173b82e13 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b7
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en-78963b-2087067147 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T10:56:27+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-1b7", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T12:01:39+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b7
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-1b7\n* Dataset: futin/guess\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-1b7\n* Dataset: futin/guess\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
4f2420692f3798d8a47133bed141fcc78fe491ee | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b1
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en-78963b-2087067148 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T10:56:29+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-1b1", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T11:44:55+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b1
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-1b1\n* Dataset: futin/guess\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-1b1\n* Dataset: futin/guess\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
297593418be2802a97602d976e5e0838c6271235 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-560m
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en-78963b-2087067149 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T11:01:20+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-560m", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T12:01:52+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-560m
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-560m\n* Dataset: futin/guess\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-560m\n* Dataset: futin/guess\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
a831b9c804f632f7ca8edcbebd7c4196efb84365 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-7b1
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi-d44dbe-2087167150 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T11:01:20+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-7b1", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T14:20:24+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-7b1
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-7b1\n* Dataset: futin/guess\n* Config: vi\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-7b1\n* Dataset: futin/guess\n* Config: vi\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
3cb1fc8a34aaee20357c43a99310bf991caa9aeb | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-3b
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi-d44dbe-2087167151 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T11:04:32+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-3b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T12:31:57+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-3b
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-3b\n* Dataset: futin/guess\n* Config: vi\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-3b\n* Dataset: futin/guess\n* Config: vi\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
5ac34ce5a4b774a7e2411dba7d1eee9e7dae6ea1 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b7
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi-d44dbe-2087167152 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T11:05:27+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-1b7", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T12:16:37+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b7
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-1b7\n* Dataset: futin/guess\n* Config: vi\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-1b7\n* Dataset: futin/guess\n* Config: vi\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
dc7a264f0737e94de2d896f96eb5a0cdbdd475f9 |
# Dataset Card for BnL Newspapers 1841-1879
## Table of Contents
- [Dataset Card for bnl_newspapers1841-1879](#dataset-card-for-bnl_newspapers1841-1879)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [size of dataset](#size-of-dataset)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://data.bnl.lu](https://data.bnl.lu)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** opendata at bnl.etat.lu
### Dataset Summary
630.709 articles from historical newspapers (1841-1879) along with metadata and the full text.
- 21 newspaper titles
- 24.415 newspaper issues
- 99.957 scanned pages
- Transcribed using a variety of OCR engines and corrected using [https://github.com/natliblux/nautilusocr](https://github.com/natliblux/nautilusocr) (95% threshold)
- Public Domain, CC0 (See copyright notice)
The newspapers used are:
- Der Arbeiter (1878)
- L'Arlequin (1848-1848)
- L'Avenir (1868-1871)
- Courrier du Grand-Duché de Luxembourg (1844-1868)
- Cäcilia (1863-1871)
- Diekircher Wochenblatt (1841-1848)
- Le Gratis luxembourgeois (1857-1858)
- L'Indépendance luxembourgeoise (1871-1879)
- Kirchlicher Anzeiger für die Diözese Luxemburg (1871-1879)
- La Gazette du Grand-Duché de Luxembourg (1878)
- Luxemburger Anzeiger (1856)
- Luxemburger Bauernzeitung (1857)
- Luxemburger Volks-Freund (1869-1876)
- Luxemburger Wort (1848-1879)
- Luxemburger Zeitung (1844-1845)
- Luxemburger Zeitung = Journal de Luxembourg (1858-1859)
- L'Union (1860-1871)
- Das Vaterland (1869-1870)
- Der Volksfreund (1848-1849)
- Der Wächter an der Sauer (1849-1869)
- D'Wäschfra (1868-1879)
### Supported Tasks and Leaderboards
### Languages
German, French, Luxembourgish
## Dataset Structure
JSONL file zipped.
### Data Instances
### Data Fields
- `identifier` : unique and persistent identifier using ARK for the Article.
- `date` : publishing date of the document, e.g. "1848-12-15".
- `metsType` : set to "newspaper".
- `newpaperTitle` : title of the newspaper. It is transcribed as in the masthead of the individual issue and can thus change.
- `paperID` : local identifier for the newspaper title. It remains the same, even for short-term title changes.
- `publisher` : publisher of the document e.g. "Verl. der St-Paulus-Druckerei".
- `title` : main title of the article, section, advertisement, etc.
- `text` : full text of the entire article, section, advertisement etc. It includes any titles and subtitles as well. The content does not contain layout information, such as headings, paragraphs or lines.
- `creator` : author of the article, section, advertisement etc. Most articles do not have an associated author.
- `type` : type of the exported data e.g. ARTICLE, SECTION, ADVERTISEMENT, ...
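Given the fields above, a record from the zipped JSONL export can be parsed and filtered with the standard library alone. The sketch below is illustrative: the sample record and its identifier values are made up for demonstration and are not taken from the corpus.

```python
import json

# One illustrative JSONL line mimicking the schema above; values are
# invented for demonstration, not real records from the dataset.
jsonl_lines = [
    json.dumps({
        "identifier": "ark:/example/article-1",   # hypothetical identifier
        "date": "1848-12-15",
        "metsType": "newspaper",
        "newpaperTitle": "Luxemburger Wort",      # field name spelled as in the schema
        "paperID": "luxwort",                     # hypothetical local ID
        "publisher": "Verl. der St-Paulus-Druckerei",
        "title": "Example title",
        "text": "Example full text ...",
        "creator": "",
        "type": "ARTICLE",
    }),
]

# Keep only ARTICLE records from a given title published in 1848.
articles_1848 = [
    rec
    for rec in map(json.loads, jsonl_lines)
    if rec["type"] == "ARTICLE"
    and rec["newpaperTitle"] == "Luxemburger Wort"
    and rec["date"].startswith("1848")
]
print(len(articles_1848))  # 1
```

The same filter logic applies line by line when streaming the full export, so the 1.6 GB file never needs to be held in memory at once.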
## Dataset Creation
The dataset was created by the National Library of Luxembourg with the output of its newspaper digitisation program.
### Curation Rationale
The selection of newspapers represent the current state of digitisation of the Luxembourg legal deposit collection of newspapers that are in the public domain. That means all newspapers printed in Luxembourg before and including 1879.
### Source Data
Printed historical newspapers.
#### Initial Data Collection and Normalization
The data was created through digitisation. The full digitisation specifications are available at [https://data.bnl.lu/data/historical-newspapers/](https://data.bnl.lu/data/historical-newspapers/)
### Annotations
#### Annotation process
During the digitisation process, newspaper pages were semi-automatically zoned into articles. This was done by external suppliers to the library according to the digitisation specifications.
#### Who are the annotators?
Staff at the external suppliers.
### Personal and Sensitive Information
The dataset contains only data that was published in a newspaper. Since it contains only articles from 1879 or earlier, no living person is expected to be included.
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
The biases in the text reflect the biases of the newspaper editors and journalists at the time of publication (1841-1879).
### Other Known Limitations
The OCR transcription is not perfect. It is estimated that the quality is 95% or better.
## Additional Information
### size of dataset
500MB-2GB
### Dataset Curators
This dataset is curated by the National Library of Luxembourg (opendata at bnl.etat.lu).
### Licensing Information
Creative Commons Public Domain Dedication and Certification
### Citation Information
```
@misc{bnl_newspapers,
  title={Historical Newspapers},
  url={https://data.bnl.lu/data/historical-newspapers/},
  author={Bibliothèque nationale du Luxembourg}
}
```
### Contributions
Thanks to [@ymaurer](https://github.com/ymaurer) for adding this dataset. | biglam/bnl_newspapers1841-1879 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:de",
"language:fr",
"language:lb",
"language:nl",
"language:la",
"language:en",
"license:cc0-1.0",
"newspapers",
"1800-1900",
"lam",
"region:us"
] | 2022-11-14T11:37:16+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["expert-generated"], "language": ["de", "fr", "lb", "nl", "la", "en"], "license": ["cc0-1.0"], "multilinguality": ["multilingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "pretty_name": "BnL Newspapers 1841-1879", "tags": ["newspapers", "1800-1900", "lam"], "dataset_info": {"features": [{"name": "publisher", "dtype": "string"}, {"name": "paperID", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "newpaperTitle", "dtype": "string"}, {"name": "date", "dtype": "timestamp[ns]"}, {"name": "metsType", "dtype": "string"}, {"name": "identifier", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "creator", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1605420260, "num_examples": 630709}], "download_size": 1027493424, "dataset_size": 1605420260}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-01-30T13:23:29+00:00 | [] | [
"de",
"fr",
"lb",
"nl",
"la",
"en"
] | TAGS
#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-expert-generated #multilinguality-multilingual #size_categories-100K<n<1M #source_datasets-original #language-German #language-French #language-Luxembourgish #language-Dutch #language-Latin #language-English #license-cc0-1.0 #newspapers #1800-1900 #lam #region-us
|
# Dataset Card for BnL Newspapers 1841-1879
## Table of Contents
- Dataset Card for bnl_newspapers1841-1879
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Dataset Creation
- Curation Rationale
- Source Data
- Initial Data Collection and Normalization
- Annotations
- Annotation process
- Who are the annotators?
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- size of dataset
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Repository:
- Paper:
- Leaderboard:
- Point of Contact: opendata at URL
### Dataset Summary
630.709 articles from historical newspapers (1841-1879) along with metadata and the full text.
- 21 newspaper titles
- 24.415 newspaper issues
- 99.957 scanned pages
- Transcribed using a variety of OCR engines and corrected using URL (95% threshold)
- Public Domain, CC0 (See copyright notice)
The newspapers used are:
- Der Arbeiter (1878)
- L'Arlequin (1848-1848)
- L'Avenir (1868-1871)
- Courrier du Grand-Duché de Luxembourg (1844-1868)
- Cäcilia (1863-1871)
- Diekircher Wochenblatt (1841-1848)
- Le Gratis luxembourgeois (1857-1858)
- L'Indépendance luxembourgeoise (1871-1879)
- Kirchlicher Anzeiger für die Diözese Luxemburg (1871-1879)
- La Gazette du Grand-Duché de Luxembourg (1878)
- Luxemburger Anzeiger (1856)
- Luxemburger Bauernzeitung (1857)
- Luxemburger Volks-Freund (1869-1876)
- Luxemburger Wort (1848-1879)
- Luxemburger Zeitung (1844-1845)
- Luxemburger Zeitung = Journal de Luxembourg (1858-1859)
- L'Union (1860-1871)
- Das Vaterland (1869-1870)
- Der Volksfreund (1848-1849)
- Der Wächter an der Sauer (1849-1869)
- D'Wäschfra (1868-1879)
### Supported Tasks and Leaderboards
### Languages
German, French, Luxembourgish
## Dataset Structure
The dataset is distributed as a zipped JSONL file.
### Data Instances
### Data Fields
- 'identifier' : unique and persistent identifier using ARK for the Article.
- 'date' : publishing date of the document e.g. "1848-12-15".
- 'metsType' : set to "newspaper".
- 'newpaperTitle' : title of the newspaper. It is transcribed as in the masthead of the individual issue and can thus change.
- 'paperID' : local identifier for the newspaper title. It remains the same, even for short-term title changes.
- 'publisher' : publisher of the document e.g. "Verl. der St-Paulus-Druckerei".
- 'title' : main title of the article, section, advertisement, etc.
- 'text' : full text of the entire article, section, advertisement etc. It includes any titles and subtitles as well. The content does not contain layout information, such as headings, paragraphs or lines.
- 'creator' : author of the article, section, advertisement etc. Most articles do not have an associated author.
- 'type' : type of the exported data e.g. ARTICLE, SECTION, ADVERTISEMENT, ...
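
The fields above can be read straight out of the zipped JSONL export with the standard library. This is a minimal sketch only: the ARK path, the `paperID` value, and the field contents below are illustrative placeholders, not actual records from the dataset.

```python
import json

# Hypothetical single record illustrating the fields described above;
# the real export is a zipped JSONL file with one article per line.
sample_line = (
    '{"identifier": "ark:/70795/example", '
    '"date": "1848-12-15", "metsType": "newspaper", '
    '"newpaperTitle": "Luxemburger Wort", "paperID": "luxwort", '
    '"publisher": "Verl. der St-Paulus-Druckerei", '
    '"title": "Example article", "text": "Full text of the article ...", '
    '"creator": null, "type": "ARTICLE"}'
)

record = json.loads(sample_line)

# Each line is an independent JSON object, so the export can be streamed
# line by line without loading all 630.709 articles into memory.
print(record["newpaperTitle"], record["date"], record["type"])
```

Note that articles without an author carry `null` in the 'creator' field, so downstream code should expect missing values there.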
## Dataset Creation
The dataset was created by the National Library of Luxembourg with the output of its newspaper digitisation program.
### Curation Rationale
The selection of newspapers represents the current state of digitisation of the Luxembourg legal deposit collection of newspapers that are in the public domain. That means all newspapers printed in Luxembourg up to and including 1879.
### Source Data
Printed historical newspapers.
#### Initial Data Collection and Normalization
The data was created through digitisation. The full digitisation specifications are available at URL
### Annotations
#### Annotation process
During the digitisation process, newspaper pages were semi-automatically zoned into articles. This was done by external suppliers to the library according to the digitisation specifications.
#### Who are the annotators?
Staff at the external suppliers.
### Personal and Sensitive Information
The dataset contains only data that was published in a newspaper. Since it contains only articles published in or before 1879, no living person is expected to be included.
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
The biases in the text reflect the biases of the newspaper editors and journalists at the time of publication.
### Other Known Limitations
The OCR transcription is not perfect. It is estimated that the quality is 95% or better.
## Additional Information
### size of dataset
500MB-2GB
### Dataset Curators
This dataset is curated by the National Library of Luxembourg (opendata at URL).
### Licensing Information
Creative Commons Public Domain Dedication and Certification
### Contributions
Thanks to @ymaurer for adding this dataset. | [
"# Dataset Card for BnL Newspapers 1841-1879",
"## Table of Contents\n- Dataset Card for bnl_newspapers1841-1879\n - Table of Contents\n - Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields\n - Dataset Creation\n - Curation Rationale\n - Source Data\n - Initial Data Collection and Normalization\n - Annotations\n - Annotation process\n - Who are the annotators?\n - Personal and Sensitive Information\n - Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n - Additional Information\n - size of dataset\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper:\n- Leaderboard:\n- Point of Contact: opendata at URL",
"### Dataset Summary\n\n630.709 articles from historical newspapers (1841-1879) along with metadata and the full text.\n\n21 newspaper titles\n24.415 newspaper issues\n99.957 scanned pages\nTranscribed using a variety of OCR engines and corrected using URL (95% threshold)\nPublic Domain, CC0 (See copyright notice)\n\nThe newspapers used are:\n- Der Arbeiter (1878)\n- L'Arlequin (1848-1848)\n- L'Avenir (1868-1871)\n- Courrier du Grand-Duché de Luxembourg (1844-1868)\n- Cäcilia (1863-1871)\n- Diekircher Wochenblatt (1841-1848)\n- Le Gratis luxembourgeois (1857-1858)\n- L'Indépendance luxembourgeoise (1871-1879)\n- Kirchlicher Anzeiger für die Diözese Luxemburg (1871-1879)\n- La Gazette du Grand-Duché de Luxembourg (1878)\n- Luxemburger Anzeiger (1856)\n- Luxemburger Bauernzeitung (1857)\n- Luxemburger Volks-Freund (1869-1876)\n- Luxemburger Wort (1848-1879)\n- Luxemburger Zeitung (1844-1845)\n- Luxemburger Zeitung = Journal de Luxembourg (1858-1859)\n- L'Union (1860-1871)\n- Das Vaterland (1869-1870)\n- Der Volksfreund (1848-1849)\n- Der Wächter an der Sauer (1849-1869)\n- D'Wäschfra (1868-1879)",
"### Supported Tasks and Leaderboards",
"### Languages\n\nGerman, French, Luxembourgish",
"## Dataset Structure\n\nJSONL file zipped.",
"### Data Instances",
"### Data Fields\n\n- 'identifier' : unique and persistent identifier using ARK for the Article.\n- 'date' : publishing date of the document e.g \"1848-12-15\".\n- 'metsType' : set to \"newspaper\".\n- 'newpaperTitle' : title of the newspaper. It is transcribed as in the masthead of the individual issue and can thus change.\n- 'paperID' : local identifier for the newspaper title. It remains the same, even for short-term title changes.\n- 'publisher' : publisher of the document e.g. \"Verl. der St-Paulus-Druckerei\".\n- 'title' : main title of the article, section, advertisement, etc.\n- 'text' : full text of the entire article, section, advertisement etc. It includes any titles and subtitles as well. The content does not contain layout information, such as headings, paragraphs or lines.\n- 'creator' : author of the article, section, advertisement etc. Most articles do not have an associated author.\n- 'type' : type of the exported data e.g. ARTICLE, SECTION, ADVERTISEMENT, ...",
"## Dataset Creation\n\nThe dataset was created by the National library of Luxembourg with the output of its newspaper digitisation program.",
"### Curation Rationale\n\nThe selection of newspapers represent the current state of digitisation of the Luxembourg legal deposit collection of newspapers that are in the public domain. That means all newspapers printed in Luxembourg before and including 1879.",
"### Source Data\n\nPrinted historical newspapers.",
"#### Initial Data Collection and Normalization\n\nThe data was created through digitisation. The full digitisation specifications are available at URL",
"### Annotations",
"#### Annotation process\n\nDuring the digitisation process, newspaper pages were semi-automatically zoned into articles. This was done by external suppliers to the library according to the digitisation specifications.",
"#### Who are the annotators?\n\nStaff at the external suppliers.",
"### Personal and Sensitive Information\n\nThe dataset contains only data that was published in a newspaper. Since it contains only articles before 1879, no living person is expected to be included.",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases\n\nThe biases in the text represent the biases from newspaper editors and journalists at the time of the publication. In particular during the period from 1940/05/10 to 1944/09/10 the Nazi occupier controlled all information published.",
"### Other Known Limitations\n\nThe OCR transcription is not perfect. It is estimated that the quality is 95% or better.",
"## Additional Information",
"### size of dataset\n\n500MB-2GB",
"### Dataset Curators\n\nThis dataset is curated by the national library of Luxembourg (opendata at URL).",
"### Licensing Information\n\nCreative Commons Public Domain Dedication and Certification",
"### Contributions\n\nThanks to @ymaurer for adding this dataset."
] | [
"TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-expert-generated #multilinguality-multilingual #size_categories-100K<n<1M #source_datasets-original #language-German #language-French #language-Luxembourgish #language-Dutch #language-Latin #language-English #license-cc0-1.0 #newspapers #1800-1900 #lam #region-us \n",
"# Dataset Card for BnL Newspapers 1841-1879",
"## Table of Contents\n- Dataset Card for bnl_newspapers1841-1879\n - Table of Contents\n - Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields\n - Dataset Creation\n - Curation Rationale\n - Source Data\n - Initial Data Collection and Normalization\n - Annotations\n - Annotation process\n - Who are the annotators?\n - Personal and Sensitive Information\n - Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n - Additional Information\n - size of dataset\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper:\n- Leaderboard:\n- Point of Contact: opendata at URL",
"### Dataset Summary\n\n630.709 articles from historical newspapers (1841-1879) along with metadata and the full text.\n\n21 newspaper titles\n24.415 newspaper issues\n99.957 scanned pages\nTranscribed using a variety of OCR engines and corrected using URL (95% threshold)\nPublic Domain, CC0 (See copyright notice)\n\nThe newspapers used are:\n- Der Arbeiter (1878)\n- L'Arlequin (1848-1848)\n- L'Avenir (1868-1871)\n- Courrier du Grand-Duché de Luxembourg (1844-1868)\n- Cäcilia (1863-1871)\n- Diekircher Wochenblatt (1841-1848)\n- Le Gratis luxembourgeois (1857-1858)\n- L'Indépendance luxembourgeoise (1871-1879)\n- Kirchlicher Anzeiger für die Diözese Luxemburg (1871-1879)\n- La Gazette du Grand-Duché de Luxembourg (1878)\n- Luxemburger Anzeiger (1856)\n- Luxemburger Bauernzeitung (1857)\n- Luxemburger Volks-Freund (1869-1876)\n- Luxemburger Wort (1848-1879)\n- Luxemburger Zeitung (1844-1845)\n- Luxemburger Zeitung = Journal de Luxembourg (1858-1859)\n- L'Union (1860-1871)\n- Das Vaterland (1869-1870)\n- Der Volksfreund (1848-1849)\n- Der Wächter an der Sauer (1849-1869)\n- D'Wäschfra (1868-1879)",
"### Supported Tasks and Leaderboards",
"### Languages\n\nGerman, French, Luxembourgish",
"## Dataset Structure\n\nJSONL file zipped.",
"### Data Instances",
"### Data Fields\n\n- 'identifier' : unique and persistent identifier using ARK for the Article.\n- 'date' : publishing date of the document e.g \"1848-12-15\".\n- 'metsType' : set to \"newspaper\".\n- 'newpaperTitle' : title of the newspaper. It is transcribed as in the masthead of the individual issue and can thus change.\n- 'paperID' : local identifier for the newspaper title. It remains the same, even for short-term title changes.\n- 'publisher' : publisher of the document e.g. \"Verl. der St-Paulus-Druckerei\".\n- 'title' : main title of the article, section, advertisement, etc.\n- 'text' : full text of the entire article, section, advertisement etc. It includes any titles and subtitles as well. The content does not contain layout information, such as headings, paragraphs or lines.\n- 'creator' : author of the article, section, advertisement etc. Most articles do not have an associated author.\n- 'type' : type of the exported data e.g. ARTICLE, SECTION, ADVERTISEMENT, ...",
"## Dataset Creation\n\nThe dataset was created by the National library of Luxembourg with the output of its newspaper digitisation program.",
"### Curation Rationale\n\nThe selection of newspapers represent the current state of digitisation of the Luxembourg legal deposit collection of newspapers that are in the public domain. That means all newspapers printed in Luxembourg before and including 1879.",
"### Source Data\n\nPrinted historical newspapers.",
"#### Initial Data Collection and Normalization\n\nThe data was created through digitisation. The full digitisation specifications are available at URL",
"### Annotations",
"#### Annotation process\n\nDuring the digitisation process, newspaper pages were semi-automatically zoned into articles. This was done by external suppliers to the library according to the digitisation specifications.",
"#### Who are the annotators?\n\nStaff at the external suppliers.",
"### Personal and Sensitive Information\n\nThe dataset contains only data that was published in a newspaper. Since it contains only articles before 1879, no living person is expected to be included.",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases\n\nThe biases in the text represent the biases from newspaper editors and journalists at the time of the publication. In particular during the period from 1940/05/10 to 1944/09/10 the Nazi occupier controlled all information published.",
"### Other Known Limitations\n\nThe OCR transcription is not perfect. It is estimated that the quality is 95% or better.",
"## Additional Information",
"### size of dataset\n\n500MB-2GB",
"### Dataset Curators\n\nThis dataset is curated by the national library of Luxembourg (opendata at URL).",
"### Licensing Information\n\nCreative Commons Public Domain Dedication and Certification",
"### Contributions\n\nThanks to @ymaurer for adding this dataset."
] |
2abeb0ec1afe29f11c420554dde89a03f2037936 | # Dataset Card for "ai4lam-demo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | davanstrien/ai4lam-demo | [
"region:us"
] | 2022-11-14T11:46:07+00:00 | {"dataset_info": {"features": [{"name": "record_id", "dtype": "string"}, {"name": "date", "dtype": "timestamp[ns]"}, {"name": "raw_date", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "place", "dtype": "string"}, {"name": "empty_pg", "dtype": "bool"}, {"name": "text", "dtype": "string"}, {"name": "pg", "dtype": "int64"}, {"name": "mean_wc_ocr", "dtype": "float64"}, {"name": "std_wc_ocr", "dtype": "float64"}, {"name": "name", "dtype": "string"}, {"name": "all_names", "dtype": "string"}, {"name": "Publisher", "dtype": "string"}, {"name": "Country of publication 1", "dtype": "string"}, {"name": "all Countries of publication", "dtype": "string"}, {"name": "Physical description", "dtype": "string"}, {"name": "Language_1", "dtype": "string"}, {"name": "Language_2", "dtype": "string"}, {"name": "Language_3", "dtype": "null"}, {"name": "Language_4", "dtype": "null"}, {"name": "multi_language", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 5300866, "num_examples": 4148}], "download_size": 2857751, "dataset_size": 5300866}} | 2022-11-14T11:46:11+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "ai4lam-demo"
More Information needed | [
"# Dataset Card for \"ai4lam-demo\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"ai4lam-demo\"\n\nMore Information needed"
] |
45bc9f6cda53745ebdd539d6ed810b66c42165d9 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b1
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi-d44dbe-2087167153 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T11:52:23+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-1b1", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T12:47:02+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b1
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-1b1\n* Dataset: futin/guess\n* Config: vi\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-1b1\n* Dataset: futin/guess\n* Config: vi\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
9bba4fd751b4566df79da6793336964beb507e00 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-560m
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi-d44dbe-2087167154 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T11:58:18+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-560m", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T12:47:25+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-560m
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-560m\n* Dataset: futin/guess\n* Config: vi\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-560m\n* Dataset: futin/guess\n* Config: vi\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
a0ad81e432b6319cdd49a8f28564f1464692f23f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-7b1
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi_3-74fd83-2087367155 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T12:07:36+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-7b1", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T13:55:17+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-7b1
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-7b1\n* Dataset: futin/guess\n* Config: vi_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-7b1\n* Dataset: futin/guess\n* Config: vi_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
25466151aac0b74dd009692a1391eb52ef75fc79 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-3b
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi_3-74fd83-2087367156 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T12:09:02+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-3b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T12:59:52+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-3b
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-3b\n* Dataset: futin/guess\n* Config: vi_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-3b\n* Dataset: futin/guess\n* Config: vi_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
6ebfa982030a4cb00f92ecefd2f655de5f376384 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b7
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi_3-74fd83-2087367157 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T12:09:02+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-1b7", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T12:50:40+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b7
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-1b7\n* Dataset: futin/guess\n* Config: vi_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-1b7\n* Dataset: futin/guess\n* Config: vi_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
38275c46bc34226523d3e9ce88c94fa0890c5330 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b1
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi_3-74fd83-2087367158 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T12:24:48+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-1b1", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T12:55:53+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b1
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-1b1\n* Dataset: futin/guess\n* Config: vi_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-1b1\n* Dataset: futin/guess\n* Config: vi_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
81f2880e9d800f47d4a1f5c428ee5509857e41be | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-560m
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi_3-74fd83-2087367159 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T12:38:39+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-560m", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T13:08:09+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-560m
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-560m\n* Dataset: futin/guess\n* Config: vi_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-560m\n* Dataset: futin/guess\n* Config: vi_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
5764cf7f85429f1e92d690faeb7e3e91dc320599 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-3b
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en-6ca7d2-2087467160 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T12:38:52+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-3b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T13:58:21+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-3b
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-3b\n* Dataset: futin/guess\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-3b\n* Dataset: futin/guess\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
2c473b1bc8367bec6f322ba6e13886b1ff720e1d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-7b1
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en-6ca7d2-2087467161 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T12:46:22+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-7b1", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T15:50:03+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-7b1
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-7b1\n* Dataset: futin/guess\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-7b1\n* Dataset: futin/guess\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
0e970267d885a714a51cae1fb47a23e9843c8725 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-1b1
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en-6ca7d2-2087467163 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T12:54:29+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-1b1", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T13:42:54+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-1b1
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-1b1\n* Dataset: futin/guess\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-1b1\n* Dataset: futin/guess\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
e0501c693987c44b4bc07a2a623409dfd75d10f8 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-1b7
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en-6ca7d2-2087467162 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T12:54:29+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-1b7", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T14:00:19+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-1b7
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-1b7\n* Dataset: futin/guess\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-1b7\n* Dataset: futin/guess\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
b4fea7a9cae9818a6888ffb5d6bee7aea261c2e9 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-560m
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en-6ca7d2-2087467164 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T12:57:21+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-560m", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T13:53:52+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-560m
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-560m\n* Dataset: futin/guess\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-560m\n* Dataset: futin/guess\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
6425559679455cda6d175477a09f29f79150da39 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-7b1
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi-f50546-2087567166 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T13:01:49+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-7b1", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T16:15:40+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-7b1
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-7b1\n* Dataset: futin/guess\n* Config: vi\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-7b1\n* Dataset: futin/guess\n* Config: vi\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
091d92c0662b9f94efe5c878bec7d0d9fc82044f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-3b
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi-f50546-2087567165 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T13:02:02+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-3b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T14:30:09+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-3b
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-3b\n* Dataset: futin/guess\n* Config: vi\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-3b\n* Dataset: futin/guess\n* Config: vi\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
5defd285a8b45b469e45a55cf9f44e2eb674145d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-1b7
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi-f50546-2087567167 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T13:06:27+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-1b7", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T14:19:13+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-1b7
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-1b7\n* Dataset: futin/guess\n* Config: vi\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-1b7\n* Dataset: futin/guess\n* Config: vi\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
85144f20752c11d2de5a8b9c177c8dd74a725f7f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-1b1
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi-f50546-2087567168 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T13:16:13+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-1b1", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T14:10:44+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-1b1
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-1b1\n* Dataset: futin/guess\n* Config: vi\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-1b1\n* Dataset: futin/guess\n* Config: vi\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
78e0c338882c9e66c4da23e79bfb234a4a47455b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-2bc32ae8-3118-4561-b552-cc3a89a73cd5-1816 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T13:35:29+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "autoevaluate/binary-classification", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}} | 2022-11-14T13:36:07+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @lewtun for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: autoevaluate/binary-classification\n* Dataset: glue\n* Config: sst2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: autoevaluate/binary-classification\n* Dataset: glue\n* Config: sst2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] |
0440fc0e9b596f8fd685fc1d8ae401a1edb88586 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-3b
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en_3-8ea950-2087767170 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T13:49:16+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-3b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T14:34:17+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-3b
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-3b\n* Dataset: futin/guess\n* Config: en_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-3b\n* Dataset: futin/guess\n* Config: en_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
83fbfa7765df01f861324eefeb0dc9d1368fb173 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-560m
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi-f50546-2087567169 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T13:49:18+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-560m", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T14:39:30+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-560m
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-560m\n* Dataset: futin/guess\n* Config: vi\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-560m\n* Dataset: futin/guess\n* Config: vi\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
b84205d203d9ffd71582d446d70b061da198fac4 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-1b7
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en_3-8ea950-2087767172 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T14:01:36+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-1b7", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T14:39:34+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-1b7
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-1b7\n* Dataset: futin/guess\n* Config: en_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-1b7\n* Dataset: futin/guess\n* Config: en_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
ec147a7d3e74acb9e6a3566a6ee23518b00a459b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-7b1
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en_3-8ea950-2087767171 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T14:01:37+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-7b1", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T15:49:38+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-7b1
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-7b1\n* Dataset: futin/guess\n* Config: en_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-7b1\n* Dataset: futin/guess\n* Config: en_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
85725852540d745dc6930f1f530e3c09869101fa | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-1b1
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en_3-8ea950-2087767173 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T14:06:17+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-1b1", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T14:34:00+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-1b1
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-1b1\n* Dataset: futin/guess\n* Config: en_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-1b1\n* Dataset: futin/guess\n* Config: en_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
586f5f8c735bf3b169a6ae7825ee7839bf79fb5d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-560m
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en_3-8ea950-2087767174 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T14:07:37+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-560m", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T14:39:48+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-560m
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-560m\n* Dataset: futin/guess\n* Config: en_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-560m\n* Dataset: futin/guess\n* Config: en_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
28060bdc8ecb737cc611bb83ee8b7106f026b9ec | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-b7ccdeae-8bc5-40c1-85ae-3aef82a8e55e-1917 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T14:12:40+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "autoevaluate/binary-classification", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}} | 2022-11-14T14:13:17+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @lewtun for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: autoevaluate/binary-classification\n* Dataset: glue\n* Config: sst2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: autoevaluate/binary-classification\n* Dataset: glue\n* Config: sst2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] |
1ac7bd886bb4366607691e745f086352a6ed6786 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-3b
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi_3-3e6f1a-2087867175 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T14:18:37+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-3b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T15:08:03+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-3b
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-3b\n* Dataset: futin/guess\n* Config: vi_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-3b\n* Dataset: futin/guess\n* Config: vi_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
55440ebb06a099e48327b072cf6ad10f03d92246 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-7b1
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi_3-3e6f1a-2087867176 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T14:27:14+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-7b1", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T16:17:07+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-7b1
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-7b1\n* Dataset: futin/guess\n* Config: vi_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-7b1\n* Dataset: futin/guess\n* Config: vi_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
6542676a795bf74578ab344bc6b8c6eae5271515 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-1b7
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi_3-3e6f1a-2087867177 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T14:27:17+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-1b7", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T15:08:14+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-1b7
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-1b7\n* Dataset: futin/guess\n* Config: vi_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-1b7\n* Dataset: futin/guess\n* Config: vi_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
f6c320ac42466cba4f5cc6ca2d683da10b2c5115 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-1b1
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi_3-3e6f1a-2087867178 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T14:38:27+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-1b1", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T15:08:51+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-1b1
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-1b1\n* Dataset: futin/guess\n* Config: vi_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-1b1\n* Dataset: futin/guess\n* Config: vi_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
3ef35fbcd887afa52e2356bdbe3fc93343fed5fb | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-560m
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi_3-3e6f1a-2087867179 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T14:42:21+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloomz-560m", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T15:10:30+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloomz-560m
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-560m\n* Dataset: futin/guess\n* Config: vi_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloomz-560m\n* Dataset: futin/guess\n* Config: vi_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
3d5d1c7d1719bded4b7a1f96ce77b589a17e801d | # Telegram News (Farsi - Persian)
## Updated 24 OCT 2022
bbc.pickle Total News: 139,275 Timespan: 2015-10-13 - 2022-10-24
fars.pickle Total News: 241,346 Timespan: 2015-09-26 - 2022-10-24
farsivoa.pickle Total News: 134,023 Timespan: 2015-10-07 - 2022-10-24
iranint.pickle Total News: 137,459 Timespan: 2017-05-16 - 2022-10-24
irna.pickle Total News: 178,395 Timespan: 2016-07-05 - 2022-10-24
khabar.pickle Total News: 384,922 Timespan: 2016-09-22 - 2022-10-24
Tabnak.pickle Total News: 102,122 Timespan: 2017-05-22 - 2022-10-24
### Helper functions
```py
import re

def getTxt(msg):
    txt = ''
    if msg.text:
        txt += msg.text + ' '
    if msg.caption:
        txt += msg.caption + ' '
    if msg.web_page is not None:
        try:
            txt += msg.web_page.title + ' '
            txt += msg.web_page.description
        except (TypeError, AttributeError):
            pass  # title/description may be missing or None
    # normalize: drop zero-width non-joiners, newlines, emoji, and non-breaking spaces
    txt = txt.lower().replace(u'\u200c', '').replace('\n', '').replace('📸', '').replace('\xa0', '')
    txt = re.sub(r'http\S+', '', txt)    # strip URLs
    txt = re.sub(r'[a-z]', '', txt)      # strip Latin letters, keeping the Persian text
    txt = re.sub(r'[^\w\s\d]', '', txt)  # strip punctuation
    return txt.strip()
```
```py
def getDocs(m):
    txt = getTxt(m)
    if len(txt) > 10:
        return {'text': txt, 'date': m.date}
    return ['']  # sentinel for messages whose cleaned text is too short
```
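Since `getDocs` mixes return types (a dict for usable messages, a placeholder list otherwise), any consumer has to filter its output. A minimal sketch of that filtering, using a hypothetical stand-in for the real message objects:

```python
# hypothetical stand-in mirroring getDocs' mixed return types
def get_doc_demo(txt):
    if len(txt) > 10:
        return {'text': txt}
    return ['']  # sentinel for messages that are too short

messages = ['یک تیتر خبری بلند برای نمونه', 'کوتاه']
# keep only the dict entries, dropping the sentinels
docs = [d for d in map(get_doc_demo, messages) if isinstance(d, dict)]
```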
```py
def getDate(news):
    return news.date
```
### Read the Files
```py
import pickle

with open('bbc.pickle', 'rb') as handle:
    news = pickle.load(handle)

newsText = list(map(getTxt, news))
newsDate = list(map(getDate, news))
```
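As a hedged sketch (the exact message type depends on the Telegram client library used to collect the data), the per-channel timespans listed above can be recomputed from the loaded dates:

```python
# hypothetical sketch: recover a channel's timespan from its message dates
from datetime import datetime

newsDate = [datetime(2015, 10, 13), datetime(2022, 10, 24), datetime(2018, 1, 1)]
timespan = (min(newsDate), max(newsDate))
print(timespan[0].date(), '-', timespan[1].date())  # 2015-10-13 - 2022-10-24
```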
| qhnprof/Telegram_News | [
"license:afl-3.0",
"region:us"
] | 2022-11-14T14:44:45+00:00 | {"license": "afl-3.0"} | 2022-11-14T15:13:47+00:00 | [] | [] | TAGS
#license-afl-3.0 #region-us
| # Telegram News (Farsi - Persian)
## Updated 24 OCT 2022
URL Total News: 139,275 Timespan: 2015-10-13 - 2022-10-24
URL Total News: 241,346 Timespan: 2015-09-26 - 2022-10-24
URL Total News: 134,023 Timespan: 2015-10-07 - 2022-10-24
URL Total News: 137,459 Timespan: 2017-05-16 - 2022-10-24
URL Total News: 178,395 Timespan: 2016-07-05 - 2022-10-24
URL Total News: 384,922 Timespan: 2016-09-22 - 2022-10-24
URL Total News: 102,122 Timespan: 2017-05-22 - 2022-10-24
### Helper functions
### Read the Files
| [
"# Telegram News (Farsi - Persian)",
"## Updated 24 OCT 2022\n\n URL Total News: 139,275 Timespan: 2015-10-13 - 2022-10-24\n \n URL Total News: 241,346 Timespan: 2015-09-26 - 2022-10-24\n \n URL Total News: 134,023 Timespan: 2015-10-07 - 2022-10-24\n \n URL Total News: 137,459 Timespan: 2017-05-16 - 2022-10-24\n \n URL Total News: 178,395 Timespan: 2016-07-05 - 2022-10-24\n \n URL Total News: 384,922 Timespan: 2016-09-22 - 2022-10-24\n \n URL Total News: 102,122 Timespan: 2017-05-22 - 2022-10-24",
"### Helper functions",
"### Read the Files"
] | [
"TAGS\n#license-afl-3.0 #region-us \n",
"# Telegram News (Farsi - Persian)",
"## Updated 24 OCT 2022\n\n URL Total News: 139,275 Timespan: 2015-10-13 - 2022-10-24\n \n URL Total News: 241,346 Timespan: 2015-09-26 - 2022-10-24\n \n URL Total News: 134,023 Timespan: 2015-10-07 - 2022-10-24\n \n URL Total News: 137,459 Timespan: 2017-05-16 - 2022-10-24\n \n URL Total News: 178,395 Timespan: 2016-07-05 - 2022-10-24\n \n URL Total News: 384,922 Timespan: 2016-09-22 - 2022-10-24\n \n URL Total News: 102,122 Timespan: 2017-05-22 - 2022-10-24",
"### Helper functions",
"### Read the Files"
] |
a08f776e773971ecb42f1efd8a47b9dc1bdd9c36 | # Dataset Card for "legaltokenized1024"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | vegeta/legaltokenized1024 | [
"region:us"
] | 2022-11-14T16:28:38+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 27016370584, "num_examples": 5268403}, {"name": "validation", "num_bytes": 2947948744, "num_examples": 574873}], "download_size": 7022414209, "dataset_size": 29964319328}} | 2022-11-17T12:33:35+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "legaltokenized1024"
More Information needed | [
"# Dataset Card for \"legaltokenized1024\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"legaltokenized1024\"\n\nMore Information needed"
] |
e862f017cec09267fa4645afa9d010fb1e99408e | # Dataset Card for "mapsnlsloaded"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | davanstrien/mapsnlsloaded | [
"region:us"
] | 2022-11-14T17:06:16+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "no building or railspace", "1": "railspace", "2": "building", "3": "railspace and non railspace building"}}}}, {"name": "map_sheet", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 323743326.376, "num_examples": 12404}, {"name": "train", "num_bytes": 957911247.448, "num_examples": 37212}, {"name": "validation", "num_bytes": 316304202.708, "num_examples": 12404}], "download_size": 1599110547, "dataset_size": 1597958776.5319998}} | 2022-11-14T17:09:41+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "mapsnlsloaded"
More Information needed | [
"# Dataset Card for \"mapsnlsloaded\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"mapsnlsloaded\"\n\nMore Information needed"
] |
be2e86928be852df4c47cec9708430c143999c33 | # Dataset Card for "legaltokenized256"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | vegeta/legaltokenized256 | [
"region:us"
] | 2022-11-14T17:30:42+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 27564311544, "num_examples": 21400863}, {"name": "validation", "num_bytes": 3008263104, "num_examples": 2335608}], "download_size": 7092165713, "dataset_size": 30572574648}} | 2022-11-17T11:22:51+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "legaltokenized256"
More Information needed | [
"# Dataset Card for \"legaltokenized256\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"legaltokenized256\"\n\nMore Information needed"
] |
a99a936bfa227ce73e1175cad73095a1d285ba1e | # Dataset Card for "wmt19-valid-only"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | WillHeld/wmt19-valid-only | [
"region:us"
] | 2022-11-14T18:52:45+00:00 | {"dataset_info": {"features": [{"name": "translation", "dtype": {"translation": {"languages": ["zh", "en"]}}}], "splits": [{"name": "validation", "num_bytes": 1107522, "num_examples": 3981}], "download_size": 719471, "dataset_size": 1107522}} | 2022-11-14T18:56:06+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "wmt19-valid-only"
More Information needed | [
"# Dataset Card for \"wmt19-valid-only\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"wmt19-valid-only\"\n\nMore Information needed"
] |
0d6ed75757797a41d42f318992a2d3ded0dad095 | Your mother
nobody is going to see this probably
I saw | datasciencemmw/current-data | [
"license:openrail",
"doi:10.57967/hf/0155",
"region:us"
] | 2022-11-14T18:57:23+00:00 | {"license": "openrail"} | 2022-12-01T19:08:36+00:00 | [] | [] | TAGS
#license-openrail #doi-10.57967/hf/0155 #region-us
| Your mother
nobody is going to see this probably
I saw | [] | [
"TAGS\n#license-openrail #doi-10.57967/hf/0155 #region-us \n"
] |
cc6682dcd28b7eae76c184b331e590e5bc0202f3 | # Dataset Card for "wmt19-valid-only-de_en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | WillHeld/wmt19-valid-only-de_en | [
"region:us"
] | 2022-11-14T18:59:13+00:00 | {"dataset_info": {"features": [{"name": "translation", "dtype": {"translation": {"languages": ["de", "en"]}}}], "splits": [{"name": "validation", "num_bytes": 757649, "num_examples": 2998}], "download_size": 491141, "dataset_size": 757649}} | 2022-11-14T18:59:17+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "wmt19-valid-only-de_en"
More Information needed | [
"# Dataset Card for \"wmt19-valid-only-de_en\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"wmt19-valid-only-de_en\"\n\nMore Information needed"
] |
48654674506bc442da75cc6ddcf20d51a4f17f34 | # Dataset Card for "wmt19-valid-only-zh_en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | WillHeld/wmt19-valid-only-zh_en | [
"region:us"
] | 2022-11-14T18:59:22+00:00 | {"dataset_info": {"features": [{"name": "translation", "dtype": {"translation": {"languages": ["zh", "en"]}}}], "splits": [{"name": "validation", "num_bytes": 1107522, "num_examples": 3981}], "download_size": 719471, "dataset_size": 1107522}} | 2022-11-14T18:59:26+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "wmt19-valid-only-zh_en"
More Information needed | [
"# Dataset Card for \"wmt19-valid-only-zh_en\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"wmt19-valid-only-zh_en\"\n\nMore Information needed"
] |
1ef6156f6beccdf1200eee90b7d4afb70da3a8b6 | # Dataset Card for "wmt19-valid-only-gu_en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | WillHeld/wmt19-valid-only-gu_en | [
"region:us"
] | 2022-11-14T18:59:33+00:00 | {"dataset_info": {"features": [{"name": "translation", "dtype": {"translation": {"languages": ["gu", "en"]}}}], "splits": [{"name": "validation", "num_bytes": 774621, "num_examples": 1998}], "download_size": 367288, "dataset_size": 774621}} | 2022-11-14T18:59:37+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "wmt19-valid-only-gu_en"
More Information needed | [
"# Dataset Card for \"wmt19-valid-only-gu_en\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"wmt19-valid-only-gu_en\"\n\nMore Information needed"
] |
d572eaed743d99e7331c8bd550224d9792b51096 | # Dataset Card for "wmt19-valid-only-ru_en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | WillHeld/wmt19-valid-only-ru_en | [
"region:us"
] | 2022-11-14T19:00:56+00:00 | {"dataset_info": {"features": [{"name": "translation", "dtype": {"translation": {"languages": ["ru", "en"]}}}], "splits": [{"name": "validation", "num_bytes": 1085596, "num_examples": 3000}], "download_size": 605574, "dataset_size": 1085596}} | 2022-11-14T19:01:01+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "wmt19-valid-only-ru_en"
More Information needed | [
"# Dataset Card for \"wmt19-valid-only-ru_en\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"wmt19-valid-only-ru_en\"\n\nMore Information needed"
] |
770740869211d4ea18ca852c37ed65df706d488f |


Work developed as part of [Project IRIS](https://www.inesc-id.pt/projects/PR07005/).
Thesis: [A Semantic Search System for Supremo Tribunal de Justiça](https://rufimelo99.github.io/SemanticSearchSystemForSTJ/)
# Portuguese Legal Sentences
Collection of Legal Sentences from the Portuguese Supreme Court of Justice
The goal of this dataset was to be used for MLM and TSDAE
### Contributions
[@rufimelo99](https://github.com/rufimelo99)
If you use this work, please cite:
```bibtex
@inproceedings{MeloSemantic,
author = {Melo, Rui and Santos, Professor Pedro Alexandre and Dias, Professor Jo{\~a}o},
title = {A {Semantic} {Search} {System} for {Supremo} {Tribunal} de {Justi}{\c c}a},
}
```
| stjiris/portuguese-legal-sentences-v0 | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:pt",
"license:apache-2.0",
"region:us"
] | 2022-11-14T21:28:26+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["pt"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "source_datasets": ["original"]} | 2023-01-08T14:23:33+00:00 | [] | [
"pt"
] | TAGS
#annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #source_datasets-original #language-Portuguese #license-apache-2.0 #region-us
|
!INESC-ID
!A Semantic Search System for Supremo Tribunal de Justiça
Work developed as part of Project IRIS.
Thesis: A Semantic Search System for Supremo Tribunal de Justiça
# Portuguese Legal Sentences
Collection of Legal Sentences from the Portuguese Supreme Court of Justice
The goal of this dataset was to be used for MLM and TSDAE
### Contributions
@rufimelo99
If you use this work, please cite:
| [
"# Portuguese Legal Sentences\nCollection of Legal Sentences from the Portuguese Supreme Court of Justice\nThe goal of this dataset was to be used for MLM and TSDAE",
"### Contributions\n@rufimelo99\n\n\nIf you use this work, please cite:"
] | [
"TAGS\n#annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #source_datasets-original #language-Portuguese #license-apache-2.0 #region-us \n",
"# Portuguese Legal Sentences\nCollection of Legal Sentences from the Portuguese Supreme Court of Justice\nThe goal of this dataset was to be used for MLM and TSDAE",
"### Contributions\n@rufimelo99\n\n\nIf you use this work, please cite:"
] |
8475f5ae11e7a4c351adec79c237856b8168d875 | Processed from 54 gigabytes of data. Names have been removed, and answers longer than 100 characters are not used. | Den4ikAI/mailruQA-big | [
"license:mit",
"region:us"
] | 2022-11-14T23:23:53+00:00 | {"license": "mit"} | 2022-11-18T04:08:50+00:00 | [] | [] | TAGS
#license-mit #region-us
| Processed from 54 gigabytes of data. Names have been removed, and answers longer than 100 characters are not used. | [] | [
"TAGS\n#license-mit #region-us \n"
] |
4083a8159b907eaa2c1bb87b1891e14fdf0ad5cf |
## Introduction
Chinese-C4 is a clean Chinese internet dataset based on Common Crawl. The dataset is 46.29GB and has undergone multiple cleaning strategies, including Chinese filtering, heuristic cleaning based on punctuation, line-based hashing for deduplication, and repetition removal.
The dataset is open source and free for commercial use, and you are welcome to use the data and the cleaning strategies provided and contribute your cleaning strategies.
You can find the cleaning script for the dataset on GitHub [c4-dataset-script](https://github.com/shjwudp/c4-dataset-script).
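One of the strategies listed above, line-based hashing for deduplication, can be sketched roughly as follows. This is an illustrative sketch of the general technique, not the actual c4-dataset-script implementation; the function name and hashing choice (MD5 over stripped bytes) are our assumptions.

```python
import hashlib

# Rough illustration of line-based hashing for deduplication; this is an
# assumed approach, not the actual c4-dataset-script implementation.
def dedup_lines(lines):
    """Keep the first occurrence of each line, keyed by an MD5 of its bytes."""
    seen, kept = set(), []
    for line in lines:
        digest = hashlib.md5(line.strip().encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(line)
    return kept
```

In a real pipeline the `seen` set would be shared across documents so that boilerplate lines repeated on many pages are dropped everywhere after their first occurrence.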
| shjwudp/chinese-c4 | [
"language:zh",
"license:cc-by-4.0",
"region:us"
] | 2022-11-15T01:27:26+00:00 | {"language": ["zh"], "license": "cc-by-4.0"} | 2023-06-20T10:40:06+00:00 | [] | [
"zh"
] | TAGS
#language-Chinese #license-cc-by-4.0 #region-us
|
## Introduction
Chinese-C4 is a clean Chinese internet dataset based on Common Crawl. The dataset is 46.29GB and has undergone multiple cleaning strategies, including Chinese filtering, heuristic cleaning based on punctuation, line-based hashing for deduplication, and repetition removal.
The dataset is open source and free for commercial use, and you are welcome to use the data and the cleaning strategies provided and contribute your cleaning strategies.
You can find the cleaning script for the dataset on GitHub c4-dataset-script.
| [
"## Introduction\n\nChinese-C4 is a clean Chinese internet dataset based on Common Crawl. The dataset is 46.29GB and has undergone multiple cleaning strategies, including Chinese filtering, heuristic cleaning based on punctuation, line-based hashing for deduplication, and repetition removal.\n\nThe dataset is open source and free for commercial use, and you are welcome to use the data and the cleaning strategies provided and contribute your cleaning strategies.\n\nYou can find the cleaning script for the dataset on GitHub c4-dataset-script."
] | [
"TAGS\n#language-Chinese #license-cc-by-4.0 #region-us \n",
"## Introduction\n\nChinese-C4 is a clean Chinese internet dataset based on Common Crawl. The dataset is 46.29GB and has undergone multiple cleaning strategies, including Chinese filtering, heuristic cleaning based on punctuation, line-based hashing for deduplication, and repetition removal.\n\nThe dataset is open source and free for commercial use, and you are welcome to use the data and the cleaning strategies provided and contribute your cleaning strategies.\n\nYou can find the cleaning script for the dataset on GitHub c4-dataset-script."
] |
faa996ad4c4efb058881b75c84b5cc8106376d51 |
annotations_creators:
- machine-generated
language:
- en
language_creators: []
license:
- wtfpl
multilinguality:
- monolingual
pretty_name: OCR-IDL
size_categories:
- 10M<n<100M
source_datasets:
- original
tags:
- pretraining
- documents
- idl
- ''
task_categories: []
task_ids: []
| rubentito/OCR-IDL | [
"license:wtfpl",
"region:us"
] | 2022-11-15T08:14:01+00:00 | {"license": "wtfpl"} | 2022-11-30T08:59:49+00:00 | [] | [] | TAGS
#license-wtfpl #region-us
|
annotations_creators:
- machine-generated
language:
- en
language_creators: []
license:
- wtfpl
multilinguality:
- monolingual
pretty_name: OCR-IDL
size_categories:
- 10M<n<100M
source_datasets:
- original
tags:
- pretraining
- documents
- idl
- ''
task_categories: []
task_ids: []
| [] | [
"TAGS\n#license-wtfpl #region-us \n"
] |
ea77161978d40cecf6371091b6bbbf7ed70b8930 |
# Dataset Card for SI-NLI
### Dataset Summary
SI-NLI (Slovene Natural Language Inference Dataset) contains 5,937 human-created Slovene sentence pairs (premise and hypothesis) that are manually labeled with the labels "entailment", "contradiction", and "neutral". We created the dataset using sentences that appear in the Slovenian reference corpus [ccKres](http://hdl.handle.net/11356/1034). Annotators were tasked to modify the hypothesis in a candidate pair in a way that reflects one of the labels. The dataset is balanced since the annotators created three modifications (entailment, contradiction, neutral) for each candidate sentence pair. The dataset is split into train, validation, and test sets, with sizes of 4,392, 547, and 998.
Only the hypothesis and premise are given in the test set (i.e. no annotations) since SI-NLI is integrated into the Slovene evaluation framework [SloBENCH](https://slobench.cjvt.si/). If you use the dataset to train your models, please consider submitting the test set predictions to SloBENCH to get the evaluation score and see how it compares to others.
If you have access to the private test set (with labels), you can load it instead of the public one via `datasets.load_dataset("cjvt/si_nli", "private", data_dir="<...>")`.
### Supported Tasks and Leaderboards
Natural language inference.
### Languages
Slovenian.
## Dataset Structure
### Data Instances
A sample instance from the dataset:
```
{
'pair_id': 'P0',
'premise': 'Vendar se je anglikanska večina v grofijah na severu otoka (Ulster) na plebiscitu odločila, da ostane v okviru Velike Britanije.',
'hypothesis': 'A na glasovanju o priključitvi ozemlja k Severni Irski so se prebivalci ulsterskih grofij, pretežno anglikanske veroizpovedi, izrekli o obstanku pod okriljem VB.',
'annotation1': 'entailment',
'annotator1_id': 'annotator_C',
'annotation2': 'entailment',
'annotator2_id': 'annotator_A',
'annotation3': '',
'annotator3_id': '',
'annotation_final': 'entailment',
'label': 'entailment'
}
```
### Data Fields
- `pair_id`: string identifier of the pair (`""` in the test set),
- `premise`: premise sentence,
- `hypothesis`: hypothesis sentence,
- `annotation1`: the first annotation (`""` if not available),
- `annotator1_id`: anonymized identifier of the first annotator (`""` if not available),
- `annotation2`: the second annotation (`""` if not available),
- `annotator2_id`: anonymized identifier of the second annotator (`""` if not available),
- `annotation3`: the third annotation (`""` if not available),
- `annotator3_id`: anonymized identifier of the third annotator (`""` if not available),
- `annotation_final`: aggregated annotation where it could be unanimously determined (`""` if not available or an unanimous agreement could not be reached),
- `label`: aggregated annotation: either same as `annotation_final` (in case of agreement), same as `annotation1` (in case of disagreement), or `""` (in the test set). **Note that examples with disagreement are all put in the training set**. This aggregation is just the simplest possibility and the user may instead do something more advanced based on the individual annotations (e.g., learning with disagreement).
\* A small number of examples did not go through the annotation process because they were constructed by the authors when writing the guidelines. The quality of these was therefore checked by the authors. Such examples do not have the individual annotations and the annotator IDs.
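The aggregation rule for `label` described above can be sketched as follows. This is our illustration of the stated rule, not the dataset's actual build script.

```python
# Illustrative sketch of the stated aggregation rule (not the dataset's
# actual build code): a unanimous agreement wins, otherwise fall back to
# the first annotation; missing annotations are encoded as "".
def aggregate_label(annotation1, annotation2, annotation3):
    votes = [a for a in (annotation1, annotation2, annotation3) if a]
    if votes and all(v == votes[0] for v in votes):
        return votes[0]          # corresponds to annotation_final
    return annotation1           # disagreement: fall back to annotation1
```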
## Additional Information
### Dataset Curators
Matej Klemen, Aleš Žagar, Jaka Čibej, Marko Robnik-Šikonja.
### Licensing Information
CC BY-NC-SA 4.0.
### Citation Information
```
@misc{sinli,
title = {Slovene Natural Language Inference Dataset {SI}-{NLI}},
author = {Klemen, Matej and {\v Z}agar, Ale{\v s} and {\v C}ibej, Jaka and Robnik-{\v S}ikonja, Marko},
url = {http://hdl.handle.net/11356/1707},
note = {Slovenian language resource repository {CLARIN}.{SI}},
year = {2022}
}
```
### Contributions
Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset. | cjvt/si_nli | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:natural-language-inference",
"annotations_creators:expert-generated",
"language_creators:found",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:sl",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-11-15T08:41:29+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found", "expert-generated"], "language": ["sl"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification", "natural-language-inference"], "pretty_name": "Slovene natural language inference dataset", "tags": [], "dataset_info": [{"config_name": "default", "features": [{"name": "pair_id", "dtype": "string"}, {"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "annotation1", "dtype": "string"}, {"name": "annotator1_id", "dtype": "string"}, {"name": "annotation2", "dtype": "string"}, {"name": "annotator2_id", "dtype": "string"}, {"name": "annotation3", "dtype": "string"}, {"name": "annotator3_id", "dtype": "string"}, {"name": "annotation_final", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1352635, "num_examples": 4392}, {"name": "validation", "num_bytes": 164561, "num_examples": 547}, {"name": "test", "num_bytes": 246518, "num_examples": 998}], "download_size": 410093, "dataset_size": 1763714}, {"config_name": "public", "features": [{"name": "pair_id", "dtype": "string"}, {"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "annotation1", "dtype": "string"}, {"name": "annotator1_id", "dtype": "string"}, {"name": "annotation2", "dtype": "string"}, {"name": "annotator2_id", "dtype": "string"}, {"name": "annotation3", "dtype": "string"}, {"name": "annotator3_id", "dtype": "string"}, {"name": "annotation_final", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1352591, "num_examples": 4392}, {"name": "validation", "num_bytes": 164517, "num_examples": 547}, {"name": "test", "num_bytes": 246474, "num_examples": 998}], "download_size": 
410093, "dataset_size": 1763582}, {"config_name": "private", "features": [{"name": "pair_id", "dtype": "string"}, {"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "annotation1", "dtype": "string"}, {"name": "annotator1_id", "dtype": "string"}, {"name": "annotation2", "dtype": "string"}, {"name": "annotator2_id", "dtype": "string"}, {"name": "annotation3", "dtype": "string"}, {"name": "annotator3_id", "dtype": "string"}, {"name": "annotation_final", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train"}, {"name": "validation"}, {"name": "test"}], "download_size": 0, "dataset_size": 0}]} | 2023-04-04T07:51:01+00:00 | [] | [
"sl"
] | TAGS
#task_categories-text-classification #task_ids-multi-class-classification #task_ids-natural-language-inference #annotations_creators-expert-generated #language_creators-found #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #language-Slovenian #license-cc-by-nc-sa-4.0 #region-us
|
# Dataset Card for SI-NLI
### Dataset Summary
SI-NLI (Slovene Natural Language Inference Dataset) contains 5,937 human-created Slovene sentence pairs (premise and hypothesis) that are manually labeled with the labels "entailment", "contradiction", and "neutral". We created the dataset using sentences that appear in the Slovenian reference corpus ccKres. Annotators were tasked to modify the hypothesis in a candidate pair in a way that reflects one of the labels. The dataset is balanced since the annotators created three modifications (entailment, contradiction, neutral) for each candidate sentence pair. The dataset is split into train, validation, and test sets, with sizes of 4,392, 547, and 998.
Only the hypothesis and premise are given in the test set (i.e. no annotations) since SI-NLI is integrated into the Slovene evaluation framework SloBENCH. If you use the dataset to train your models, please consider submitting the test set predictions to SloBENCH to get the evaluation score and see how it compares to others.
If you have access to the private test set (with labels), you can load it instead of the public one via 'datasets.load_dataset("cjvt/si_nli", "private", data_dir="<...>")'.
### Supported Tasks and Leaderboards
Natural language inference.
### Languages
Slovenian.
## Dataset Structure
### Data Instances
A sample instance from the dataset:
### Data Fields
- 'pair_id': string identifier of the pair ('""' in the test set),
- 'premise': premise sentence,
- 'hypothesis': hypothesis sentence,
- 'annotation1': the first annotation ('""' if not available),
- 'annotator1_id': anonymized identifier of the first annotator ('""' if not available),
- 'annotation2': the second annotation ('""' if not available),
- 'annotator2_id': anonymized identifier of the second annotator ('""' if not available),
- 'annotation3': the third annotation ('""' if not available),
- 'annotator3_id': anonymized identifier of the third annotator ('""' if not available),
- 'annotation_final': aggregated annotation where it could be unanimously determined ('""' if not available or an unanimous agreement could not be reached),
- 'label': aggregated annotation: either same as 'annotation_final' (in case of agreement), same as 'annotation1' (in case of disagreement), or '""' (in the test set). Note that examples with disagreement are all put in the training set. This aggregation is just the simplest possibility and the user may instead do something more advanced based on the individual annotations (e.g., learning with disagreement).
\* A small number of examples did not go through the annotation process because they were constructed by the authors when writing the guidelines. The quality of these was therefore checked by the authors. Such examples do not have the individual annotations and the annotator IDs.
## Additional Information
### Dataset Curators
Matej Klemen, Aleš Žagar, Jaka Čibej, Marko Robnik-Šikonja.
### Licensing Information
CC BY-NC-SA 4.0.
### Contributions
Thanks to @matejklemen for adding this dataset. | [
"# Dataset Card for SI-NLI",
"### Dataset Summary\n\nSI-NLI (Slovene Natural Language Inference Dataset) contains 5,937 human-created Slovene sentence pairs (premise and hypothesis) that are manually labeled with the labels \"entailment\", \"contradiction\", and \"neutral\". We created the dataset using sentences that appear in the Slovenian reference corpus ccKres. Annotators were tasked to modify the hypothesis in a candidate pair in a way that reflects one of the labels. The dataset is balanced since the annotators created three modifications (entailment, contradiction, neutral) for each candidate sentence pair. The dataset is split into train, validation, and test sets, with sizes of 4,392, 547, and 998. \n\nOnly the hypothesis and premise are given in the test set (i.e. no annotations) since SI-NLI is integrated into the Slovene evaluation framework SloBENCH. If you use the dataset to train your models, please consider submitting the test set predictions to SloBENCH to get the evaluation score and see how it compares to others. \n\nIf you have access to the private test set (with labels), you can load it instead of the public one via 'datasets.load_dataset(\"cjvt/si_nli\", \"private\", data_dir=\"<...>\")'.",
"### Supported Tasks and Leaderboards\n\nNatural language inference.",
"### Languages\n\nSlovenian.",
"## Dataset Structure",
"### Data Instances\n\nA sample instance from the dataset:",
"### Data Fields\n\n- 'pair_id': string identifier of the pair ('\"\"' in the test set), \n- 'premise': premise sentence, \n- 'hypothesis': hypothesis sentence, \n- 'annotation1': the first annotation ('\"\"' if not available), \n- 'annotator1_id': anonymized identifier of the first annotator ('\"\"' if not available), \n- 'annotation2': the second annotation ('\"\"' if not available), \n- 'annotator2_id': anonymized identifier of the second annotator ('\"\"' if not available), \n- 'annotation3': the third annotation ('\"\"' if not available), \n- 'annotator3_id': anonymized identifier of the third annotator ('\"\"' if not available), \n- 'annotation_final': aggregated annotation where it could be unanimously determined ('\"\"' if not available or an unanimous agreement could not be reached), \n- 'label': aggregated annotation: either same as 'annotation_final' (in case of agreement), same as 'annotation1' (in case of disagreement), or '\"\"' (in the test set). Note that examples with disagreement are all put in the training set. This aggregation is just the most simple possibility and the user may instead do something more advanced based on the individual annotations (e.g., learning with disagreement). \n\n\\* A small number of examples did not go through the annotation process because they were constructed by the authors when writing the guidelines. The quality of these was therefore checked by the authors. Such examples do not have the individual annotations and the annotator IDs.",
"## Additional Information",
"### Dataset Curators\n\nMatej Klemen, Aleš Žagar, Jaka Čibej, Marko Robnik-Šikonja.",
"### Licensing Information\n\nCC BY-NC-SA 4.0.",
"### Contributions\n\nThanks to @matejklemen for adding this dataset."
] | [
"TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #task_ids-natural-language-inference #annotations_creators-expert-generated #language_creators-found #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #language-Slovenian #license-cc-by-nc-sa-4.0 #region-us \n",
"# Dataset Card for SI-NLI",
"### Dataset Summary\n\nSI-NLI (Slovene Natural Language Inference Dataset) contains 5,937 human-created Slovene sentence pairs (premise and hypothesis) that are manually labeled with the labels \"entailment\", \"contradiction\", and \"neutral\". We created the dataset using sentences that appear in the Slovenian reference corpus ccKres. Annotators were tasked to modify the hypothesis in a candidate pair in a way that reflects one of the labels. The dataset is balanced since the annotators created three modifications (entailment, contradiction, neutral) for each candidate sentence pair. The dataset is split into train, validation, and test sets, with sizes of 4,392, 547, and 998. \n\nOnly the hypothesis and premise are given in the test set (i.e. no annotations) since SI-NLI is integrated into the Slovene evaluation framework SloBENCH. If you use the dataset to train your models, please consider submitting the test set predictions to SloBENCH to get the evaluation score and see how it compares to others. \n\nIf you have access to the private test set (with labels), you can load it instead of the public one via 'datasets.load_dataset(\"cjvt/si_nli\", \"private\", data_dir=\"<...>\")'.",
"### Supported Tasks and Leaderboards\n\nNatural language inference.",
"### Languages\n\nSlovenian.",
"## Dataset Structure",
"### Data Instances\n\nA sample instance from the dataset:",
"### Data Fields\n\n- 'pair_id': string identifier of the pair ('\"\"' in the test set), \n- 'premise': premise sentence, \n- 'hypothesis': hypothesis sentence, \n- 'annotation1': the first annotation ('\"\"' if not available), \n- 'annotator1_id': anonymized identifier of the first annotator ('\"\"' if not available), \n- 'annotation2': the second annotation ('\"\"' if not available), \n- 'annotator2_id': anonymized identifier of the second annotator ('\"\"' if not available), \n- 'annotation3': the third annotation ('\"\"' if not available), \n- 'annotator3_id': anonymized identifier of the third annotator ('\"\"' if not available), \n- 'annotation_final': aggregated annotation where it could be unanimously determined ('\"\"' if not available or an unanimous agreement could not be reached), \n- 'label': aggregated annotation: either same as 'annotation_final' (in case of agreement), same as 'annotation1' (in case of disagreement), or '\"\"' (in the test set). Note that examples with disagreement are all put in the training set. This aggregation is just the most simple possibility and the user may instead do something more advanced based on the individual annotations (e.g., learning with disagreement). \n\n\\* A small number of examples did not go through the annotation process because they were constructed by the authors when writing the guidelines. The quality of these was therefore checked by the authors. Such examples do not have the individual annotations and the annotator IDs.",
"## Additional Information",
"### Dataset Curators\n\nMatej Klemen, Aleš Žagar, Jaka Čibej, Marko Robnik-Šikonja.",
"### Licensing Information\n\nCC BY-NC-SA 4.0.",
"### Contributions\n\nThanks to @matejklemen for adding this dataset."
] |
e9df778e49a78115fd77c91f9c64c5d0f925ac2d | # Dataset Card for "test_push_two_configs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | polinaeterna/test_push_two_configs | [
"region:us"
] | 2022-11-15T10:35:42+00:00 | {"dataset_info": [{"config_name": "v1", "features": [{"name": "x", "dtype": "int64"}, {"name": "y", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 46, "num_examples": 3}, {"name": "test", "num_bytes": 32, "num_examples": 2}], "download_size": 1674, "dataset_size": 78}, {"config_name": "v2", "features": [{"name": "x", "dtype": "int64"}, {"name": "y", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 60, "num_examples": 4}, {"name": "test", "num_bytes": 18, "num_examples": 1}], "download_size": 1671, "dataset_size": 78}]} | 2022-11-21T13:11:15+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "test_push_two_configs"
More Information needed | [
"# Dataset Card for \"test_push_two_configs\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"test_push_two_configs\"\n\nMore Information needed"
] |
d6d7882743b7c7275ac1830a2a37ba17c7d7114a |
# Gold standards and outputs
## Dataset Description
- MapReader’s GitHub: https://github.com/Living-with-machines/MapReader
- MapReader paper: https://dl.acm.org/doi/10.1145/3557919.3565812
- Zenodo link for gold standards and outputs: https://doi.org/10.5281/zenodo.7147906
- Contacts: Katherine McDonough, The Alan Turing Institute, kmcdonough at turing.ac.uk; Kasra Hosseini, The Alan Turing Institute, k.hosseinizad at gmail.com
### Dataset Summary
Here we share gold standard annotations and outputs from early experiments using MapReader. MapReader creates datasets for humanities research using historical map scans and metadata as inputs.
Using maps provided by the National Library of Scotland, these annotations and outputs reflect labeling tasks relevant to historical research on the [Living with Machines](https://livingwithmachines.ac.uk/) project.
Data shared here is derived from maps printed in nineteenth-century Britain by the Ordnance Survey, Britain's state mapping agency. These maps cover England, Wales, and Scotland from 1888 to 1913.
## Directory structure
The gold standards and outputs are stored on [Zenodo](https://doi.org/10.5281/zenodo.7147906). It contains the following directories/files:
```
MapReader_Data_SIGSPATIAL_2022
├── README
├── annotations
│ ├── maps
│ │ ├── map_100942121.png
│ │ ├── ...
│ │ └── map_99383316.png
│ ├── slice_meters_100_100
│ │ ├── test
│ │ │ ├── patch-...PNG
│ │ │ ├── ...
│ │ │ └── patch-...PNG
│ │ ├── train
│ │ │ ├── patch-...PNG
│ │ │ ├── ...
│ │ │ └── patch-...PNG
│ │ └── val
│ │ ├── patch-...PNG
│ │ ├── ...
│ │ └── patch-...PNG
│ ├── test.csv
│ ├── train.csv
│ └── valid.csv
└── outputs
├── label_01_03
│ ├── pred_01_03_all.csv
│ ├── pred_01_03_keep_01_0250.csv
│ ├── pred_01_03_keep_05_0500.csv
│ └── pred_01_03_keep_10_1000.csv
├── label_02
│ ├── pred_02_all.csv
│ ├── pred_02_keep_01_0250.csv
│ ├── pred_02_keep_05_0500.csv
│ └── pred_02_keep_10_1000.csv
├── patches_all.csv
├── percentage
│ └── pred_02_keep_1_250_01_03_keep_1_250_percentage.csv
└── resources
├── StopsGB4paper.csv
└── six_inch4paper.json
```
## annotations
The `annotations` directory is as follows:
```
├── annotations
│ ├── maps
│ │ ├── map_100942121.png
│ │ ├── ...
│ │ └── map_99383316.png
│ ├── slice_meters_100_100
│ │ ├── test
│ │ │ ├── patch-...PNG
│ │ │ ├── ...
│ │ │ └── patch-...PNG
│ │ ├── train
│ │ │ ├── patch-...PNG
│ │ │ ├── ...
│ │ │ └── patch-...PNG
│ │ └── val
│ │ ├── patch-...PNG
│ │ ├── ...
│ │ └── patch-...PNG
│ ├── test.csv
│ ├── train.csv
│ └── valid.csv
```
### annotations/train.csv, valid.csv and test.csv
In the `MapReader_Data_SIGSPATIAL_2022/annotations` directory, there are three CSV files, namely `train.csv`, `valid.csv` and `test.csv`. These files have two columns:
```
image_id,label
slice_meters_100_100/train/patch-1390-3892-1529-4031-#map_101590193.png#.PNG,0
slice_meters_100_100/train/patch-1716-3960-1848-4092-#map_101439245.png#.PNG,0
...
```
in which:
- `image_id`: path to each labelled patch. For example in `slice_meters_100_100/train/patch-1390-3892-1529-4031-#map_101590193.png#.PNG`:
- `slice_meters_100_100/train`: directory where the patch is stored. (in this example, it is a patch used for training)
  - `patch-1390-3892-1529-4031-#map_101590193.png#.PNG` itself has two parts: `patch-1390-3892-1529-4031` is the patch ID, and the patch is extracted from the `map_101590193.png` map sheet.
- `label`: label assigned to each patch by an annotator.
- Labels: 0: no [building or railspace]; 1: railspace; 2: building; and 3: railspace and [non railspace] building.
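The `image_id` convention above packs the patch bounds and parent map sheet into the filename. A minimal sketch of recovering that structure (our illustration, not MapReader code; the regex and function name are assumptions), using the two sample rows shown above:

```python
import csv
import io
import re

# Illustrative sketch (not part of MapReader itself): parse the image_id
# field of annotations/train.csv, using the two sample rows shown above.
sample_csv = """image_id,label
slice_meters_100_100/train/patch-1390-3892-1529-4031-#map_101590193.png#.PNG,0
slice_meters_100_100/train/patch-1716-3960-1848-4092-#map_101439245.png#.PNG,0
"""

# patch-<xmin>-<ymin>-<xmax>-<ymax>-#<parent map sheet>#.PNG
PATCH_RE = re.compile(r"patch-(\d+)-(\d+)-(\d+)-(\d+)-#(.+)#\.PNG$")

def parse_image_id(image_id):
    """Split an image_id path into the split name, patch bounds and parent map."""
    split_name = image_id.split("/")[1]        # train / val / test
    m = PATCH_RE.search(image_id)
    xmin, ymin, xmax, ymax = (int(g) for g in m.groups()[:4])
    return {"split": split_name,
            "bounds": (xmin, ymin, xmax, ymax),
            "parent_map": m.group(5)}

patches = [
    {**parse_image_id(row["image_id"]), "label": int(row["label"])}
    for row in csv.DictReader(io.StringIO(sample_csv))
]
```

The same parsing applies to `valid.csv` and `test.csv`, whose `image_id` paths point into the `val` and `test` subdirectories.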
### annotations/slice_meters_100_100
Patches used for training, validation, and test in PNG format.
```
├── annotations
│ ├── slice_meters_100_100
│ │ ├── test
│ │ │ ├── patch-...PNG
│ │ │ ├── ...
│ │ │ └── patch-...PNG
│ │ ├── train
│ │ │ ├── patch-...PNG
│ │ │ ├── ...
│ │ │ └── patch-...PNG
│ │ └── val
│ │ ├── patch-...PNG
│ │ ├── ...
│ │ └── patch-...PNG
```
### annotations/maps
Map sheets retrieved from the National Library of Scotland via web servers. These maps were later sliced into patches, which can be found in `annotations/slice_meters_100_100`.
```
├── annotations
│ ├── maps
│ │ ├── map_100942121.png
│ │ ├── ...
│ │ └── map_99383316.png
```
## outputs
The `outputs` directory is as follows:
```
└── outputs
├── label_01_03
│ ├── pred_01_03_all.csv
│ ├── pred_01_03_keep_01_0250.csv
│ ├── pred_01_03_keep_05_0500.csv
│ └── pred_01_03_keep_10_1000.csv
├── label_02
│ ├── pred_02_all.csv
│ ├── pred_02_keep_01_0250.csv
│ ├── pred_02_keep_05_0500.csv
│ └── pred_02_keep_10_1000.csv
├── patches_all.csv
├── percentage
│ └── pred_02_keep_1_250_01_03_keep_1_250_percentage.csv
└── resources
├── StopsGB4paper.csv
└── six_inch4paper.json
```
### outputs/label_01_03
Starting with:
```
└── outputs
├── label_01_03
│ ├── pred_01_03_all.csv
│ ├── pred_01_03_keep_01_0250.csv
│ ├── pred_01_03_keep_05_0500.csv
│ └── pred_01_03_keep_10_1000.csv
```
The file `pred_01_03_all.csv` contains the following columns:
```
,center_lon,center_lat,pred,conf,mean_pixel_RGB,std_pixel_RGB,mean_pixel_A,image_id,parent_id,pub_date,url,x,y,z,opening_year_quicks,closing_year_quicks,dist2quicks
0,-0.4011055106547341,52.61260776720805,1,0.9898980855941772,0.8450341820716858,0.1668068021535873,1.0,patch-3014-0-3151-137-#map_100890251.png#.PNG,map_100890251.png,1902,https://maps.nls.uk/view/100890251,3880925.8529841416,-27169.29919979412,5044483.051365171,1867,1929,1121.9150481268305
1,-0.399645312864389,52.61260776720805,1,0.9999995231628418,0.823089599609375,0.1925655305385589,1.0,patch-3151-0-3288-137-#map_100890251.png#.PNG,map_100890251.png,1902,https://maps.nls.uk/view/100890251,3880926.544140446,-27070.392789791513,5044483.051365171,1867,1929,1113.0714735200893
...
```
- **center_lon**: longitude of the patch center
- **center_lat**: latitude of the patch center
- **pred**: predicted label for the patch
- **conf**: model confidence
- **mean_pixel_RGB**: mean pixel intensities, using all three channels
- **std_pixel_RGB**: standard deviations of pixel intensities, using all three channels
- **mean_pixel_A**: mean pixel intensities of alpha channel
- **image_id**: patch ID
- **parent_id**: ID of the map sheet that the patch belongs to
- **pub_date**: publication date of the map sheet that the patch belongs to
- **url**: URL of the map sheet that the patch belongs to
- **x, y, z**: coordinates used to compute distances (using a k-d tree)
- **opening_year_quicks**: date when the railway station first opened
- **closing_year_quicks**: date when the railway station last closed
- **dist2quicks**: distance to the closest StopsGB station, in meters.

NB: See `outputs/resources` below for a description of the StopsGB (railway station) data and links to related publications.
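The **x, y, z** values (on the order of millions of meters) are consistent with Cartesian coordinates on a sphere of Earth's radius; the exact projection used in the paper is an assumption here, but the following sketch shows how such coordinates support fast nearest-station queries with a k-d tree:

```python
import numpy as np
from scipy.spatial import cKDTree

EARTH_RADIUS_M = 6_371_000  # spherical-Earth approximation (an assumption)

def lonlat_to_xyz(lon_deg, lat_deg):
    """Convert longitude/latitude in degrees to 3D Cartesian coordinates."""
    lon, lat = np.radians(lon_deg), np.radians(lat_deg)
    return np.column_stack([
        EARTH_RADIUS_M * np.cos(lat) * np.cos(lon),
        EARTH_RADIUS_M * np.cos(lat) * np.sin(lon),
        EARTH_RADIUS_M * np.sin(lat),
    ])

# Toy coordinates: two stations and one patch center, as (lon, lat).
stations = lonlat_to_xyz(np.array([-0.40, -0.35]), np.array([52.61, 52.70]))
patches = lonlat_to_xyz(np.array([-0.401]), np.array([52.612]))

tree = cKDTree(stations)
dist, idx = tree.query(patches)  # chord distance; ~= ground distance at short range
print(int(idx[0]))  # index of the nearest station
```

At sub-kilometer scales the chord distance returned by the tree is indistinguishable from the great-circle distance, which is why a plain Euclidean k-d tree works here.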
---
The other files in `outputs/label_01_03` have the same columns as `pred_01_03_all.csv` (described above). The difference is:
- `pred_01_03_all.csv`: all patches predicted as labels 1 (railspace) or 3 (railspace and [non railspace] building).
- `pred_01_03_keep_01_0250.csv`: similar to `pred_01_03_all.csv`, except that we removed patches that had no other neighboring patch with the same label within a radius of 250 meters. (Note `01` and `0250` in the name: `01` means one neighboring patch and `0250` means 250 meters.)
- `pred_01_03_keep_05_0500.csv`: similar to `pred_01_03_all.csv`, except that we removed patches that had fewer than five neighboring patches with the same label within a radius of 500 meters.
- `pred_01_03_keep_10_1000.csv`: similar to `pred_01_03_all.csv`, except that we removed patches that had fewer than ten neighboring patches with the same label within a radius of 1000 meters.
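This neighbor-count filter can be reproduced with a k-d tree radius query. A sketch, under the assumption that same-label patch positions are already in meters-based Cartesian coordinates (the paper's exact implementation may differ):

```python
import numpy as np
from scipy.spatial import cKDTree

def keep_with_neighbors(xyz, min_neighbors, radius_m):
    """True for points with at least `min_neighbors` OTHER points within radius_m."""
    tree = cKDTree(xyz)
    neighbors = tree.query_ball_point(xyz, r=radius_m)
    counts = np.array([len(n) - 1 for n in neighbors])  # -1 excludes the point itself
    return counts >= min_neighbors

# Toy example: three mutually close points and one isolated point (meters).
pts = np.array([[0, 0, 0], [100, 0, 0], [0, 100, 0], [10_000, 0, 0]], dtype=float)
mask = keep_with_neighbors(pts, min_neighbors=1, radius_m=250)
print(mask.tolist())  # [True, True, True, False]
```

With `min_neighbors=1, radius_m=250` this mask corresponds to the `keep_01_0250` variant; `(5, 500)` and `(10, 1000)` give the other two files.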
### outputs/label_02
Next, these files:
```
├── label_02
│ ├── pred_02_all.csv
│ ├── pred_02_keep_01_0250.csv
│ ├── pred_02_keep_05_0500.csv
│ └── pred_02_keep_10_1000.csv
```
These are the same as the files described above for `label_01_03`, except that they contain patches predicted as label 2 (i.e., building).
### outputs/patches_all.csv
And last:
```
└── outputs
├── patches_all.csv
```
The file `patches_all.csv` has the following columns:
⚠️ This file contains the results for all 30,490,411 patches used in the MapReader paper.
```
center_lat,center_lon,pred
52.61260776720805,-0.4332298620423274,0
52.61260776720805,-0.4317696642519822,0
...
```
in which:
- **center_lon**: longitude of the patch center
- **center_lat**: latitude of the patch center
- **pred**: predicted label for the patch
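With roughly 30.5 million rows, `patches_all.csv` is best streamed rather than loaded whole; for example, counting patches per predicted label in chunks. A small in-memory stand-in is used below so the sketch is self-contained; swap in the real path (`outputs/patches_all.csv`) for the actual file:

```python
import io
import pandas as pd

# Stand-in for open("outputs/patches_all.csv"); same header as the real file.
csv_text = "center_lat,center_lon,pred\n52.61,-0.43,0\n52.61,-0.43,1\n52.62,-0.42,0\n"

# Stream the file in chunks, accumulating label counts as we go.
n_per_label = {}
for chunk in pd.read_csv(io.StringIO(csv_text), usecols=["pred"], chunksize=2):
    for label, n in chunk["pred"].value_counts().items():
        n_per_label[int(label)] = n_per_label.get(int(label), 0) + int(n)

print(n_per_label)  # {0: 2, 1: 1}
```

For the real file, a `chunksize` around one million rows keeps memory use modest while still making few passes over the disk.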
### outputs/percentage
We have added one file in `outputs/percentage`:
```
└── outputs
├── percentage
│ └── pred_02_keep_1_250_01_03_keep_1_250_percentage.csv
```
This file has the following columns:
```
,center_lon,center_lat,pred,conf,mean_pixel_RGB,std_pixel_RGB,mean_pixel_A,image_id,parent_id,pub_date,url,x,y,z,dist2rail,dist2quicks,dist2quicks_km,dist2rail_km,dist2rail_minus_station,dist2quicks_km_quantized,dist2rail_km_quantized,dist2rail_minus_station_quantized,perc_neigh_rails,perc_neigh_builds,harmonic_mean_rail_build
0,-0.4040259062354244,52.61260776720805,2,0.9999010562896729,0.8095282316207886,0.1955385357141494,1.0,patch-2740-0-2877-137-#map_100890251.png#.PNG,map_100890251.png,1902,https://maps.nls.uk/view/100890251,3880924.4631095687,-27367.11196679585,5044483.051365171,197.8176497186437,1164.8640633870857,1.1648640633870857,0.1978176497186437,0.9670464136684418,1.0,0.0,0.5,7.198443579766536,4.669260700389105,5.664349046373668
1,-0.4054861040257695,52.61171342293056,2,0.9999876022338868,0.8741853833198547,0.1160899400711059,1.0,patch-2603-137-2740-274-#map_100890251.png#.PNG,map_100890251.png,1902,https://maps.nls.uk/view/100890251,3881002.836728637,-27466.57793328472,5044422.621073416,296.73252022623865,1290.9640259717814,1.2909640259717814,0.2967325202262386,0.9942315057455428,1.0,0.0,0.5,7.050092764378478,4.452690166975881,5.45813633371237
...
```
in which:
- **center_lon**: longitude of the patch center
- **center_lat**: latitude of the patch center
- **pred**: predicted label for the patch
- **conf**: model confidence
- **mean_pixel_RGB**: mean pixel intensities, using all three channels
- **std_pixel_RGB**: standard deviations of pixel intensities, using all three channels
- **mean_pixel_A**: mean pixel intensities of alpha channel
- **image_id**: patch ID
- **parent_id**: ID of the map sheet that the patch belongs to
- **pub_date**: publication date of the map sheet that the patch belongs to
- **url**: URL of the map sheet that the patch belongs to
- **x, y, z**: coordinates used to compute distances (using a k-d tree)
- **dist2rail**: distance to the closest railspace patch (i.e., the patch that is classified as 1: railspace or 3: railspace and [non railspace] building)
- **dist2quicks**: distance to the closest StopsGB station in meters.
- **dist2quicks_km**: distance to the closest StopsGB station in km.
- **dist2rail_km**: similar to **dist2rail** except in km.
- **dist2rail_minus_station**: the absolute difference `|dist2rail_km - dist2quicks_km|`
- **dist2quicks_km_quantized**: discrete version of **dist2quicks_km**; we used the intervals [0.0, 0.5), [0.5, 1.0), [1.0, 1.5), ..., [4.5, 5.0), and [5.0, inf).
- **dist2rail_km_quantized**: discrete version of **dist2rail_km**, using the same intervals.
- **dist2rail_minus_station_quantized**: discrete version of **dist2rail_minus_station**, using the same intervals.
- **perc_neigh_rails**: percentage of neighboring patches predicted as railspace (labels 1 and 3).
- **perc_neigh_builds**: percentage of neighboring patches predicted as building (label 2).
- **harmonic_mean_rail_build**: harmonic mean of **perc_neigh_rails** and **perc_neigh_builds**.
These additional `percentage` attributes shed light on the relationship between 'railspace' and stations, something we explore in further Living with Machines research.
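The quantized columns and the harmonic mean can be reproduced directly from the raw distances and percentages. A sketch, checked against the sample rows above (1.1648 km quantizes to 1.0; percentages 7.198 and 4.669 give a harmonic mean of about 5.664):

```python
import numpy as np

def quantize_km(d_km):
    """Map distances (km) into the bins [0.0, 0.5), [0.5, 1.0), ..., [4.5, 5.0),
    and [5.0, inf), labelling each value by its bin's lower edge, capped at 5.0."""
    return np.minimum(np.floor(np.asarray(d_km, dtype=float) / 0.5) * 0.5, 5.0)

def harmonic_mean(a, b):
    """Harmonic mean 2ab/(a+b), returning 0 where both inputs are 0."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return np.where((a + b) > 0, 2 * a * b / (a + b), 0.0)

print(quantize_km([0.1978, 1.1648, 7.3]).tolist())   # [0.0, 1.0, 5.0]
print(round(float(harmonic_mean(7.198, 4.669)), 3))  # 5.664
```

The harmonic mean is low whenever either percentage is low, so `harmonic_mean_rail_build` highlights only patches whose neighborhoods mix substantial railspace *and* substantial building content.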
### outputs/resources
Finally, we have the following files:
```
└── outputs
└── resources
├── StopsGB4paper.csv
└── six_inch4paper.json
```
- `StopsGB4paper.csv`: a trimmed-down version of StopsGB, a dataset documenting passenger railway stations in Great Britain (see [this link](https://bl.iro.bl.uk/concern/datasets/0abea1b1-2a43-4422-ba84-39b354c8bb09?locale=en) for the complete dataset). We kept only stations for which:
  - the "ghost_entry" and "cross_ref" columns are both "False" (these two fields help remove records in the StopsGB dataset that are not actually stations, but relics of the original publication formatting);
  - "Opening" is not "unknown";
  - the map sheet was surveyed during a year when the station was operational (i.e., "opening_year_quicks" <= survey date of the map sheet <= "closing_year_quicks").
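The station filter above can be expressed as a pandas mask. A sketch with toy rows (column names follow the description above; the boolean columns are shown as strings, as is common in the exported CSV, and in the real pipeline the survey year varies per map sheet):

```python
import pandas as pd

# Toy stand-in for StopsGB rows.
stops = pd.DataFrame({
    "ghost_entry": ["False", "False", "True"],
    "cross_ref": ["False", "True", "False"],
    "Opening": ["1867", "unknown", "1850"],
    "opening_year_quicks": [1867, 1880, 1850],
    "closing_year_quicks": [1929, 1960, 1900],
})
survey_year = 1902  # survey date of the map sheet being matched

kept = stops[
    (stops["ghost_entry"] == "False")
    & (stops["cross_ref"] == "False")
    & (stops["Opening"] != "unknown")
    & (stops["opening_year_quicks"] <= survey_year)
    & (survey_year <= stops["closing_year_quicks"])
]
print(len(kept))  # 1: only the first toy station passes all criteria
```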
You can learn more about the StopsGB dataset and how it was created from this paper:
```
Mariona Coll Ardanuy, Kaspar Beelen, Jon Lawrence, Katherine McDonough, Federico Nanni, Joshua Rhodes, Giorgia Tolfo, and Daniel C.S. Wilson. "Station to Station: Linking and Enriching Historical British Railway Data." In Computational Humanities Research (CHR2021). 2021.
```
```bibtex
@inproceedings{lwm-station-to-station-2021,
title = "Station to Station: Linking and Enriching Historical British Railway Data",
author = "Coll Ardanuy, Mariona and
Beelen, Kaspar and
Lawrence, Jon and
McDonough, Katherine and
Nanni, Federico and
Rhodes, Joshua and
Tolfo, Giorgia and
Wilson, Daniel CS",
booktitle = "Computational Humanities Research",
year = "2021",
}
```
- `six_inch4paper.json`: similar to [metadata_OS_Six_Inch_GB_WFS_light.json](https://github.com/Living-with-machines/MapReader/blob/main/mapreader/persistent_data/metadata_OS_Six_Inch_GB_WFS_light.json) on MapReader's GitHub with some minor changes.
## Dataset Creation
### Curation Rationale
These annotations of map patches are part of a research project to develop humanistic methods for structuring visual information on digitized historical maps. Dividing thousands of nineteenth-century map sheets into 100m x 100m patches and labeling those patches with historically-meaningful concepts diverges from traditional methods for creating data from maps, both in terms of scale (the number of maps being examined), and of type (raster-style patches vs. pixel-level vector data). For more on the rationale for this approach, see the following paper:
```
Kasra Hosseini, Katherine McDonough, Daniel van Strien, Olivia Vane, Daniel C S Wilson, Maps of a Nation? The Digitized Ordnance Survey for New Historical Research, *Journal of Victorian Culture*, Volume 26, Issue 2, April 2021, Pages 284–299.
```
```bibtex
@article{hosseini_maps_2021,
title = {Maps of a Nation? The Digitized Ordnance Survey for New Historical Research},
volume = {26},
rights = {All rights reserved},
issn = {1355-5502},
url = {https://doi.org/10.1093/jvcult/vcab009},
doi = {10.1093/jvcult/vcab009},
shorttitle = {Maps of a Nation?},
pages = {284--299},
number = {2},
journaltitle = {Journal of Victorian Culture},
author = {Hosseini, Kasra and {McDonough}, Katherine and van Strien, Daniel and Vane, Olivia and Wilson, Daniel C S},
urldate = {2021-05-19},
date = {2021-04-01},
}
```
### Source Data
#### Initial Data Access
Data was accessed via the National Library of Scotland's Historical Maps API: https://maps.nls.uk/projects/subscription-api/
The data shared here is derived from the six-inch to one mile sheets printed between 1888-1913: https://maps.nls.uk/projects/subscription-api/#gb6inch
### Annotations and Outputs
The annotations and output datasets collected here are related to experiments to identify the 'footprint' of rail infrastructure in the UK, a concept we call 'railspace'. We also created a dataset to identify buildings on the maps.
#### Annotation process
The custom annotation interface built into MapReader is designed specifically to assist researchers in labeling patches relevant to concepts of interest to their research questions.
Our **guidelines** for the data shared here were:
- for any non-null label (railspace, building, or railspace + building), if a patch contains any visual signal for that label (e.g. 'railspace'), it should be assigned the relevant label. For example, if it is possible for an annotator to see a railway track passing through the corner of a patch, that patch is labeled as 'railspace'.
- the context around the patch should not be used as an aid in extreme cases where it is nearly impossible to determine whether a patch contains a non-null label
- however, the patch context shown in the annotation interface can be used to quickly distinguish between different content types, particularly where the contiguity of a type across patches is useful in determining what label to assign
- for 'railspace': use this label for any type of rail infrastructure as determined by expert labelers. This includes, for example, single-track mining railroads; larger double-track passenger routes; sidings and embankments; etc. It excludes urban trams.
- for 'building': use this label for any size building
- for 'building + railspace': use this label for patches combining these two types of content
Because 'none' (i.e., null) patches made up the vast majority of patches in the total dataset from these map sheets, we ordered patches to annotate based on their pixel intensity. This allowed us to focus first on patches containing more visual content printed on the map sheet, and later to move more quickly through the patches that captured parts of the map with little to no printed features.
#### Who are the annotators?
Data shared here was annotated by Kasra Hosseini and Katherine McDonough.
Members of the Living with Machines research team contributed early annotations during the development of MapReader: Ruth Ahnert, Kaspar Beelen, Mariona Coll-Ardanuy, Emma Griffin, Tim Hobson, Jon Lawrence, Giorgia Tolfo, Daniel van Strien, Olivia Vane, and Daniel C.S. Wilson.
## Credits and re-use terms
### MapReader outputs
The files shared here (other than `resources`) are released under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (https://creativecommons.org/licenses/by-nc-sa/4.0/) (CC-BY-NC-SA) licence.
If you are interested in working with OS maps used to create these results, please also note the re-use terms of the original map images and metadata detailed below.
### Digitized maps
MapReader can retrieve maps from NLS (National Library of Scotland) via webservers. For all the digitized maps (retrieved or locally stored), please note the re-use terms:
Use of the digitised maps for commercial purposes is currently restricted by contract. Use of these digitised maps for non-commercial purposes is permitted under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (https://creativecommons.org/licenses/by-nc-sa/4.0/) (CC-BY-NC-SA) licence. Please refer to https://maps.nls.uk/copyright.html#exceptions-os for details on copyright and re-use license.
### Map metadata
We have provided some metadata files on MapReader’s GitHub page (https://github.com/Living-with-machines/MapReader/tree/main/mapreader/persistent_data). For all of these files, please note the re-use terms:
Use of the digitised maps for commercial purposes is currently restricted by contract. Use of these digitised maps for non-commercial purposes is permitted under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (https://creativecommons.org/licenses/by-nc-sa/4.0/) (CC-BY-NC-SA) licence. Please refer to https://maps.nls.uk/copyright.html#exceptions-os for details on copyright and re-use license.
## Acknowledgements
This work was supported by Living with Machines (AHRC grant AH/S01179X/1) and The Alan Turing Institute (EPSRC grant EP/N510129/1).
Living with Machines, funded by the UK Research and Innovation (UKRI) Strategic Priority Fund, is a multidisciplinary collaboration delivered by the Arts and Humanities Research Council (AHRC), with The Alan Turing Institute, the British Library and the Universities of Cambridge, East Anglia, Exeter, and Queen Mary University of London.
and [5., inf).\n- perc_neigh_rails: what is the percentage of neighboring patches predicted as rail (labels 01 and 03).\n- perc_neigh_builds: what is the percentage of neighboring patches predicted as building (label 02).\n- harmonic_mean_rail_build: Harmonic mean of *perc_neigh_rails* and perc_neigh_builds.\n\nThese additional 'percentage' attributes shed light on the relationship between 'railspace' and stations, something we explore in further Living with Machines research.",
"### outputs/resources\n\nFinally, we have the following files:\n\n\n\n- 'URL': this is a trimmed down version of StopsGB, a dataset documenting passenger railway stations in Great Britain (see [this link for the complete dataset). We filtered the stations as follows:\n - Keep only stations for which \"ghost_entry\" and \"cross_ref\" columns are \"False\". (These two fields help remove records in the StopsGB dataset that are not actually stations, but relics of the original publication formatting.)\n - \"Opening\" was NOT \"unknown\".\n - The map sheet was surveyed during a year when the station was operational (i.e., \"opening_year_quicks\" <= survey_date_of_map_sheet <= \"closing_year_quicks\").\n\nYou can learn more about the StopsGB dataset and how it was created from this paper:\n\n\n\n\n\n- 'six_inch4paper.json': similar to metadata_OS_Six_Inch_GB_WFS_light.json on MapReader's GitHub with some minor changes.",
"## Dataset Creation",
"### Curation Rationale\n\nThese annotations of map patches are part of a research project to develop humanistic methods for structuring visual information on digitized historical maps. Dividing thousands of nineteenth-century map sheets into 100m x 100m patches and labeling those patches with historically-meaningful concepts diverges from traditional methods for creating data from maps, both in terms of scale (the number of maps being examined), and of type (raster-style patches vs. pixel-level vector data). For more on the rationale for this approach, see the following paper:",
"### Source Data",
"#### Initial Data Access\n\nData was accessed via the National Library of Scotland's Historical Maps API: URL\n\nThe data shared here is derived from the six-inch to one mile sheets printed between 1888-1913: URL",
"### Annotations and Outputs\n\nThe annotations and output datasets collected here are related to experiments to identify the 'footprint' of rail infrastructure in the UK, a concept we call 'railspace'. We also created a dataset to identify buildings on the maps.",
"#### Annotation process\n\nThe custom annotation interface built into MapReader is designed specifically to assist researchers in labeling patches relevant to concepts of interest to their research questions. \n\nOur guidelines for the data shared here were:\n- for any non-null label (railspace, building, or railspace + building), if a patch contains any visual signal for that label (e.g. 'railspace'), it should be assigned the relevant label. For example, if it is possible for an annotator to see a railway track passing through the corner of a patch, that patch is labeled as 'railspace'.\n- the context around the patch should not be used as an aid in extreme cases where it is nearly impossible to determine whether a patch contains a non-null label\n- however, the patch context shown in the annotation interface can be used to quickly distinguish between different content types, particularly where the contiguity of a type across patches is useful in determining what label to assign\n- for 'railspace': use this label for any type of rail infrastructure as determined by expert labelers. This includes, for example, single-track mining railroads; larger double-track passenger routes; sidings and embankments; etc. It excludes urban trams.\n- for 'building': use this label for any size building\n- for 'building + railspace': use this label for patches combining these two types of content\n\nBecause 'none' (e.g. null) patches made up the vast majority of patches in the total dataset from these map sheets, we ordered patches to annotate based on their pixel intensity. This allowed us to focus first on patches containing more visual content printed on the map sheet, and later to move more quickly through the patches that captured parts of the map with little to no printed features.",
"#### Who are the annotators?\n\nData shared here was annotated by Kasra Hosseini and Katherine McDonough.\n\nMembers of the Living with Machines research team contributed early annotations during the development of MapReader: Ruth Ahnert, Kaspar Beelen, Mariona Coll-Ardanuy, Emma Griffin, Tim Hobson, Jon Lawrence, Giorgia Tolfo, Daniel van Strien, Olivia Vane, and Daniel C.S. Wilson.",
"## Credits and re-use terms",
"### MapReader outputs\n\nThe files shared here (other than ) under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (URL (CC-BY-NC-SA) licence. \n\nIf you are interested in working with OS maps used to create these results, please also note the re-use terms of the original map images and metadata detailed below.",
"### Digitized maps\n\nMapReader can retrieve maps from NLS (National Library of Scotland) via webservers. For all the digitized maps (retrieved or locally stored), please note the re-use terms:\n\nUse of the digitised maps for commercial purposes is currently restricted by contract. Use of these digitised maps for non-commercial purposes is permitted under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (URL (CC-BY-NC-SA) licence. Please refer to URL for details on copyright and re-use license.",
"### Map metadata\n\nWe have provided some metadata files in on MapReader’s GitHub page (URL For all these file, please note the re-use terms:\n\nUse of the digitised maps for commercial purposes is currently restricted by contract. Use of these digitised maps for non-commercial purposes is permitted under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (URL (CC-BY-NC-SA) licence. Please refer to URL for details on copyright and re-use license.",
"## Acknowledgements\n\nThis work was supported by Living with Machines (AHRC grant AH/S01179X/1) and The Alan Turing Institute (EPSRC grant EP/N510129/1). \nLiving with Machines, funded by the UK Research and Innovation (UKRI) Strategic Priority Fund, is a multidisciplinary collaboration delivered by the Arts and Humanities Research Council (AHRC), with The Alan Turing Institute, the British Library and the Universities of Cambridge, East Anglia, Exeter, and Queen Mary University of London."
] | [
"TAGS\n#task_categories-image-classification #task_ids-multi-class-image-classification #annotations_creators-expert-generated #size_categories-10K<n<100K #language-English #license-cc-by-nc-sa-4.0 #maps #historical #National Library of Scotland #heritage #humanities #lam #region-us \n",
"# Gold standards and outputs",
"## Dataset Description\n\n- MapReader’s GitHub: URL \n- MapReader paper: URL\n- Zenodo link for gold standards and outputs: URL\n- Contacts: Katherine McDonough, The Alan Turing Institute, kmcdonough at URL; Kasra Hosseini, The Alan Turing Institute, k.hosseinizad at URL",
"### Dataset Summary\n\nHere we share gold standard annotations and outputs from early experiments using MapReader. MapReader creates datasets for humanities research using historical map scans and metadata as inputs. \n\nUsing maps provided by the National Library of Scotland, these annotations and outputs reflect labeling tasks relevant to historical research on the Living with Machines project.\n\nData shared here is derived from maps printed in nineteenth-century Britain by the Ordnance Survey, Britain's state mapping agency. These maps cover England, Wales, and Scotland from 1888 to 1913.",
"## Directory structure\n\nThe gold standards and outputs are stored on Zenodo. It contains the following directories/files:",
"## annotations\n\nThe 'annotations' directory is as follows:",
"### annotations/URL, URL and URL\n\nIn the 'MapReader_Data_SIGSPATIAL_2022/annotations' directory, there are three CSV files, namely 'URL', 'URL' and 'URL'. These files have two columns:\n\n\n\nin which:\n\n- 'image_id': path to each labelled patch. For example in 'slice_meters_100_100/train/patch-1390-3892-1529-4031-#map_101590193.png#.PNG':\n - 'slice_meters_100_100/train': directory where the patch is stored. (in this example, it is a patch used for training)\n - 'patch-1390-3892-1529-4031-#map_101590193.png#.PNG' has two parts itself: 'patch-1390-3892-1529-4031' is the patch ID, and the patch itself is extracted from 'map_101590193.png' map sheet.\n- 'label': label assigned to each patch by an annotator.\n - Labels: 0: no [building or railspace]; 1: railspace; 2: building; and 3: railspace and [non railspace] building.",
"### annotations/slice_meters_100_100\n\nPatches used for training, validation, and test in PNG format.",
"### annotations/maps\n\nMap sheets retrieved from the National Library of Scotland via webservers. These maps were later sliced into patches which can be found in 'annotations/slice_meters_100_100'.",
"## outputs\n\nThe 'outputs' directory is as follows:",
"### outputs/label_01_03\n\nStarting with:\n\n\n\nThe file 'pred_01_03_all.csv' contains the following columns:\n\n\n\n- center_lon: longitude of the patch center\n- center_lat: latitude of the patch center\n- pred: predicted label for the patch\n- conf: model confidence\n- mean_pixel_RGB: mean pixel intensities, using all three channels\n- std_pixel_RGB: standard deviations of pixel intensities, using all three channels\n- mean_pixel_A: mean pixel intensities of alpha channel\n- image_id: patch ID\n- parent_id: ID of the map sheet that the patch belongs to\n- pub_date: publication date of the map sheet that the patch belongs to\n- url: URL of the map sheet that the patch belongs to\n- x, y, z: to compute distances (using k-d tree)\n- opening_year_quicks: Date when the railway station first opened\n- closing_year_quicks: Date when the railway station last closed,\n- dist2quicks: distance to the closest StopsGB in meters.\n\nNB: See 'outputs/resources' below for description of the StopsGB (railway station) data and links to related publications.\n\n---\n\nThe other files in 'outputs/label_01_03' have the same columns as 'pred_01_03_all.csv' (described above). The difference is:\n\n- 'pred_01_03_all.csv': all patches predicted as labels 1 (railspace) or 3 (railspace and [non railspace] building).\n- 'pred_01_03_keep_01_0250.csv': similar to 'pred_01_03_all.csv' except that we removed those patches that had no other neighboring patches with the same label within a radius of 250 meters. Note 01 and 0250 in the name. 01 means one neighboring patch and 0250 means 250 meters.\n- 'pred_01_03_keep_05_0500.csv': similar to 'pred_01_03_all.csv' except that we removed those patches that had less than five neighboring patches with the same label within a radius of 500 meters.\n- 'pred_01_03_keep_10_1000.csv': similar to 'pred_01_03_all.csv' except that we removed those patches that had less than ten neighboring patches with the same label within a radius of 1000 meters.",
"### outputs/label_02\n\nNext, these files:\n\n\n\nAre the same as the files described above for 'label_01_03' except for label 02 (i.e., building).",
"### outputs/patches_all.csv\n\nAnd last:\n\n\n\nThe file 'patches_all.csv' has the following columns:\n\n️ this file contains the results for 30,490,411 patches used in the MapReader paper.\n\n\n\nin which:\n\n- center_lon: longitude of the patch center\n- center_lat: latitude of the patch center\n- pred: predicted label for the patch",
"### outputs/percentage\n\nWe have added one file in 'outputs/percentage':\n\n\n\nThis file has the following columns:\n\n\n\nin which:\n\n- center_lon: longitude of the patch center\n- center_lat: latitude of the patch center\n- pred: predicted label for the patch\n- conf: model confidence\n- mean_pixel_RGB: mean pixel intensities, using all three channels\n- std_pixel_RGB: standard deviations of pixel intensities, using all three channels\n- mean_pixel_A: mean pixel intensities of alpha channel\n- image_id: patch ID\n- parent_id: ID of the map sheet that the patch belongs to\n- pub_date: publication date of the map sheet that the patch belongs to\n- url: URL of the map sheet that the patch belongs to\n- x, y, z: to compute distances (using k-d tree)\n- dist2rail: distance to the closest railspace patch (i.e., the patch that is classified as 1: railspace or 3: railspace and [non railspace] building)\n- dist2quicks: distance to the closest StopsGB station in meters.\n- dist2quicks_km: distance to the closest StopsGB station in km.\n- dist2rail_km: similar to dist2rail except in km.\n- dist2rail_minus_station: | dist2rail_km - dist2quicks_km |\n- dist2quicks_km_quantized: discrete version of dist2quicks_km, we used these intervals: 0. , 0.5), [0.5, 1.), [1., 1.5), ... , [4.5, 5.) and [5., inf).\n- dist2rail_km_quantized: discrete version of dist2rail_km, we used these intervals: [0. , 0.5), [0.5, 1.), [1., 1.5), ... , [4.5, 5.) and [5., inf).\n- dist2rail_minus_station_quantized: discrete version of dist2rail_minus_station, we used these intervals: [0. , 0.5), [0.5, 1.), [1., 1.5), ... , [4.5, 5.) 
and [5., inf).\n- perc_neigh_rails: what is the percentage of neighboring patches predicted as rail (labels 01 and 03).\n- perc_neigh_builds: what is the percentage of neighboring patches predicted as building (label 02).\n- harmonic_mean_rail_build: Harmonic mean of *perc_neigh_rails* and perc_neigh_builds.\n\nThese additional 'percentage' attributes shed light on the relationship between 'railspace' and stations, something we explore in further Living with Machines research.",
"### outputs/resources\n\nFinally, we have the following files:\n\n\n\n- 'URL': this is a trimmed down version of StopsGB, a dataset documenting passenger railway stations in Great Britain (see [this link for the complete dataset). We filtered the stations as follows:\n - Keep only stations for which \"ghost_entry\" and \"cross_ref\" columns are \"False\". (These two fields help remove records in the StopsGB dataset that are not actually stations, but relics of the original publication formatting.)\n - \"Opening\" was NOT \"unknown\".\n - The map sheet was surveyed during a year when the station was operational (i.e., \"opening_year_quicks\" <= survey_date_of_map_sheet <= \"closing_year_quicks\").\n\nYou can learn more about the StopsGB dataset and how it was created from this paper:\n\n\n\n\n\n- 'six_inch4paper.json': similar to metadata_OS_Six_Inch_GB_WFS_light.json on MapReader's GitHub with some minor changes.",
"## Dataset Creation",
"### Curation Rationale\n\nThese annotations of map patches are part of a research project to develop humanistic methods for structuring visual information on digitized historical maps. Dividing thousands of nineteenth-century map sheets into 100m x 100m patches and labeling those patches with historically-meaningful concepts diverges from traditional methods for creating data from maps, both in terms of scale (the number of maps being examined), and of type (raster-style patches vs. pixel-level vector data). For more on the rationale for this approach, see the following paper:",
"### Source Data",
"#### Initial Data Access\n\nData was accessed via the National Library of Scotland's Historical Maps API: URL\n\nThe data shared here is derived from the six-inch to one mile sheets printed between 1888-1913: URL",
"### Annotations and Outputs\n\nThe annotations and output datasets collected here are related to experiments to identify the 'footprint' of rail infrastructure in the UK, a concept we call 'railspace'. We also created a dataset to identify buildings on the maps.",
"#### Annotation process\n\nThe custom annotation interface built into MapReader is designed specifically to assist researchers in labeling patches relevant to concepts of interest to their research questions. \n\nOur guidelines for the data shared here were:\n- for any non-null label (railspace, building, or railspace + building), if a patch contains any visual signal for that label (e.g. 'railspace'), it should be assigned the relevant label. For example, if it is possible for an annotator to see a railway track passing through the corner of a patch, that patch is labeled as 'railspace'.\n- the context around the patch should not be used as an aid in extreme cases where it is nearly impossible to determine whether a patch contains a non-null label\n- however, the patch context shown in the annotation interface can be used to quickly distinguish between different content types, particularly where the contiguity of a type across patches is useful in determining what label to assign\n- for 'railspace': use this label for any type of rail infrastructure as determined by expert labelers. This includes, for example, single-track mining railroads; larger double-track passenger routes; sidings and embankments; etc. It excludes urban trams.\n- for 'building': use this label for any size building\n- for 'building + railspace': use this label for patches combining these two types of content\n\nBecause 'none' (e.g. null) patches made up the vast majority of patches in the total dataset from these map sheets, we ordered patches to annotate based on their pixel intensity. This allowed us to focus first on patches containing more visual content printed on the map sheet, and later to move more quickly through the patches that captured parts of the map with little to no printed features.",
"#### Who are the annotators?\n\nData shared here was annotated by Kasra Hosseini and Katherine McDonough.\n\nMembers of the Living with Machines research team contributed early annotations during the development of MapReader: Ruth Ahnert, Kaspar Beelen, Mariona Coll-Ardanuy, Emma Griffin, Tim Hobson, Jon Lawrence, Giorgia Tolfo, Daniel van Strien, Olivia Vane, and Daniel C.S. Wilson.",
"## Credits and re-use terms",
"### MapReader outputs\n\nThe files shared here (other than ) under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (URL (CC-BY-NC-SA) licence. \n\nIf you are interested in working with OS maps used to create these results, please also note the re-use terms of the original map images and metadata detailed below.",
"### Digitized maps\n\nMapReader can retrieve maps from NLS (National Library of Scotland) via webservers. For all the digitized maps (retrieved or locally stored), please note the re-use terms:\n\nUse of the digitised maps for commercial purposes is currently restricted by contract. Use of these digitised maps for non-commercial purposes is permitted under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (URL (CC-BY-NC-SA) licence. Please refer to URL for details on copyright and re-use license.",
"### Map metadata\n\nWe have provided some metadata files in on MapReader’s GitHub page (URL For all these file, please note the re-use terms:\n\nUse of the digitised maps for commercial purposes is currently restricted by contract. Use of these digitised maps for non-commercial purposes is permitted under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (URL (CC-BY-NC-SA) licence. Please refer to URL for details on copyright and re-use license.",
"## Acknowledgements\n\nThis work was supported by Living with Machines (AHRC grant AH/S01179X/1) and The Alan Turing Institute (EPSRC grant EP/N510129/1). \nLiving with Machines, funded by the UK Research and Innovation (UKRI) Strategic Priority Fund, is a multidisciplinary collaboration delivered by the Arts and Humanities Research Council (AHRC), with The Alan Turing Institute, the British Library and the Universities of Cambridge, East Anglia, Exeter, and Queen Mary University of London."
] |
f91ecb5b914361b69950c84c24431a18cb0f454e | # Dataset Card for "malayalam-news-ds"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | egorulz/malayalam-news-ds | [
"region:us"
] | 2022-11-15T12:09:49+00:00 | {"dataset_info": {"features": [{"name": "news", "dtype": "string"}, {"name": "news_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2014.76925, "num_examples": 9}, {"name": "validation", "num_bytes": 447.7265, "num_examples": 2}], "download_size": 16029, "dataset_size": 2462.49575}} | 2022-11-15T12:10:08+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "malayalam-news-ds"
More Information needed | [
"# Dataset Card for \"malayalam-news-ds\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"malayalam-news-ds\"\n\nMore Information needed"
] |
85646b678c1b6f9e09c151f13f33e849d1975432 | # artistas_brasileiros
| fredguth/artistas_brasileiros | [
"region:us"
] | 2022-11-15T13:28:34+00:00 | {} | 2022-11-15T14:52:47+00:00 | [] | [] | TAGS
#region-us
| # artistas_brasileiros
| [
"# artistas_brasileiros"
] | [
"TAGS\n#region-us \n",
"# artistas_brasileiros"
] |
68b7f6608e203b50bbd0a0098a5f47e777b21f3f | # Dataset Card for "RickAndMorty-HorizontalMirror-blip-captions" | Norod78/RickAndMorty-HorizontalMirror-blip-captions | [
"task_categories:text-to-image",
"annotations_creators:machine-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:n<1K",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-11-15T14:31:28+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["other"], "language": ["en"], "license": "cc-by-nc-sa-4.0", "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "pretty_name": "Rick and Morty, Horizontal Mirror, BLIP captions", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 161499799.0, "num_examples": 530}], "download_size": 161488169, "dataset_size": 161499799.0}, "tags": []} | 2022-11-15T14:38:40+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-to-image #annotations_creators-machine-generated #language_creators-other #multilinguality-monolingual #size_categories-n<1K #language-English #license-cc-by-nc-sa-4.0 #region-us
| # Dataset Card for "RickAndMorty-HorizontalMirror-blip-captions" | [
"# Dataset Card for \"RickAndMorty-HorizontalMirror-blip-captions\""
] | [
"TAGS\n#task_categories-text-to-image #annotations_creators-machine-generated #language_creators-other #multilinguality-monolingual #size_categories-n<1K #language-English #license-cc-by-nc-sa-4.0 #region-us \n",
"# Dataset Card for \"RickAndMorty-HorizontalMirror-blip-captions\""
] |
a01f186ccc6708648d90ac0f8c3ef0eb63723030 | # Dataset Card for IPC classification of French patents
## Dataset Description
- **Homepage:**
- **Repository:** [IPC Classification of French Patents](https://github.com/ZoeYou/Patent-Classification-2022)
- **Paper:** [Patent Classification using Extreme Multi-label Learning: A Case Study of French Patents](https://hal.science/hal-03850405v1)
- **Point of Contact:** [You Zuo](mailto:[email protected])
### Dataset Summary
INPI-CLS is a French Patents corpus extracted from the internal database of the INPI (National Institute of Industrial Property of France). It was initially designed for the patent classification task and consists of approximately 296k patent texts (including title, abstract, claims, and description) published between 2002 and 2021. Each patent in the corpus is annotated with labels ranging from sections to the IPC subgroup levels.
### Languages
French
### Domain
Patents (intellectual property).
### Social Impact of Dataset
The purpose of this dataset is to help develop models that enable the classification of French patents in the [International Patent Classification (IPC)](https://www.wipo.int/classifications/ipc/en/) system standard.
Thanks to the high integrity of the data, the INPI-CLS corpus can be utilized for various analytical studies concerning French-language patents. Moreover, it serves as a valuable scientific corpus that comprehensively documents the technological inventions of the country.
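The labels described above span every level of the IPC hierarchy, from section down to subgroup. As a rough illustration only — assuming the standard WIPO IPC symbol format (e.g. `A01B 33/08`) and using a hypothetical helper name that is not part of this dataset — the five label levels can be read off a full subgroup symbol:

```python
def ipc_levels(symbol: str) -> dict:
    """Split a full IPC subgroup symbol (standard WIPO formatting,
    e.g. 'A01B 33/08') into the five hierarchical label levels.
    Hypothetical helper for illustration, not part of INPI-CLS."""
    subclass, group_part = symbol.split()         # 'A01B', '33/08'
    main_group, _subgroup = group_part.split("/")
    return {
        "section": subclass[0],                   # e.g. 'A'
        "class": subclass[:3],                    # e.g. 'A01'
        "subclass": subclass,                     # e.g. 'A01B'
        "main_group": f"{subclass} {main_group}/00",  # e.g. 'A01B 33/00'
        "subgroup": f"{subclass} {group_part}",       # e.g. 'A01B 33/08'
    }
```

For example, a model evaluated at the subclass level would compare predictions against `ipc_levels(symbol)["subclass"]`, while the full subgroup key reproduces the most fine-grained label.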
### Citation Information
```
@inproceedings{zuo:hal-03850405,
TITLE = {{Patent Classification using Extreme Multi-label Learning: A Case Study of French Patents}},
AUTHOR = {Zuo, You and Mouzoun, Houda and Ghamri Doudane, Samir and Gerdes, Kim and Sagot, Beno{\^i}t},
URL = {https://hal.archives-ouvertes.fr/hal-03850405},
BOOKTITLE = {{SIGIR 2022 - PatentSemTech workshop}},
ADDRESS = {Madrid, Spain},
YEAR = {2022},
MONTH = Jul,
KEYWORDS = {IPC prediction ; Clustering and Classification ; Extreme Multi-label Learning ; French ; Patent},
PDF = {https://hal.archives-ouvertes.fr/hal-03850405/file/PatentSemTech_2022___extended_abstract.pdf},
HAL_ID = {hal-03850405},
HAL_VERSION = {v1},
}
```
| ZoeYou/INPI-CLS | [
"multilinguality:monolingual",
"language:fr",
"license:cc-by-nc-sa-3.0",
"region:us"
] | 2022-11-15T14:43:49+00:00 | {"language": ["fr"], "license": ["cc-by-nc-sa-3.0"], "multilinguality": ["monolingual"], "task_categories": ["text-classification, multi-label-classification"]} | 2023-06-09T11:27:09+00:00 | [] | [
"fr"
] | TAGS
#multilinguality-monolingual #language-French #license-cc-by-nc-sa-3.0 #region-us
| # Dataset Card for IPC classification of French patents
## Dataset Description
- Homepage:
- Repository: IPC Classification of French Patents
- Paper: Patent Classification using Extreme Multi-label Learning: A Case Study of French Patents
- Point of Contact: You Zuo
### Dataset Summary
INPI-CLS is a French Patents corpus extracted from the internal database of the INPI (National Institute of Industrial Property of France). It was initially designed for the patent classification task and consists of approximately 296k patent texts (including title, abstract, claims, and description) published between 2002 and 2021. Each patent in the corpus is annotated with labels ranging from sections to the IPC subgroup levels.
### Languages
French
### Domain
Patents (intellectual property).
### Social Impact of Dataset
The purpose of this dataset is to help develop models that enable the classification of French patents in the International Patent Classification (IPC) system standard.
Thanks to the high integrity of the data, the INPI-CLS corpus can be utilized for various analytical studies concerning French-language patents. Moreover, it serves as a valuable scientific corpus that comprehensively documents the technological inventions of the country.
| [
"# Dataset Card for IPC classification of French patents",
"## Dataset Description\n\n- Homepage:\n- Repository: IPC Classification of French Patents\n- Paper: Patent Classification using Extreme Multi-label Learning: A Case Study of French Patents\n- Point of Contact: You Zuo",
"### Dataset Summary\n\nINPI-CLS is a French Patents corpus extracted from the internal database of the INPI (National Institute of Industrial Property of France). It was initially designed for the patent classification task and consists of approximately 296k patent texts (including title, abstract, claims, and description) published between 2002 and 2021. Each patent in the corpus is annotated with labels ranging from sections to the IPC subgroup levels.",
"### Languages\n\nFrench",
"### Domain\n\nPatents (intellectual property).",
"### Social Impact of Dataset\n\nThe purpose of this dataset is to help develop models that enable the classification of French patents in the International Patent Classification (IPC) system standard.\n\nThanks to the high integrity of the data, the INPI-CLS corpus can be utilized for various analytical studies concerning French language patents. Moreover, it serves as a valuable resource as a scientific corpus that comprehensively documents the technological inventions of the country."
] | [
"TAGS\n#multilinguality-monolingual #language-French #license-cc-by-nc-sa-3.0 #region-us \n",
"# Dataset Card for IPC classification of French patents",
"## Dataset Description\n\n- Homepage:\n- Repository: IPC Classification of French Patents\n- Paper: Patent Classification using Extreme Multi-label Learning: A Case Study of French Patents\n- Point of Contact: You Zuo",
"### Dataset Summary\n\nINPI-CLS is a French Patents corpus extracted from the internal database of the INPI (National Institute of Industrial Property of France). It was initially designed for the patent classification task and consists of approximately 296k patent texts (including title, abstract, claims, and description) published between 2002 and 2021. Each patent in the corpus is annotated with labels ranging from sections to the IPC subgroup levels.",
"### Languages\n\nFrench",
"### Domain\n\nPatents (intellectual property).",
"### Social Impact of Dataset\n\nThe purpose of this dataset is to help develop models that enable the classification of French patents in the International Patent Classification (IPC) system standard.\n\nThanks to the high integrity of the data, the INPI-CLS corpus can be utilized for various analytical studies concerning French language patents. Moreover, it serves as a valuable resource as a scientific corpus that comprehensively documents the technological inventions of the country."
] |
df7c609529686d3e3c0d0b83f00a80345ae412bf | # Dataset Card for "ai4lam-demo2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | davanstrien/ai4lam-demo2 | [
"region:us"
] | 2022-11-15T16:45:10+00:00 | {"dataset_info": {"features": [{"name": "metadata_text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Low_Quality", "1": "High_Quality"}}}}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 29309108, "num_examples": 100821}], "download_size": 16023375, "dataset_size": 29309108}} | 2022-11-15T16:45:24+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "ai4lam-demo2"
More Information needed | [
"# Dataset Card for \"ai4lam-demo2\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"ai4lam-demo2\"\n\nMore Information needed"
] |
70ba1db10cdac67c212cd433963132b879e295f2 | # Dataset Card for "natural_language"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Priyash/natural_language | [
"region:us"
] | 2022-11-15T17:00:08+00:00 | {"dataset_info": {"features": [{"name": "review", "dtype": "string"}, {"name": "Length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 4742.1, "num_examples": 9}, {"name": "validation", "num_bytes": 1154, "num_examples": 1}], "download_size": 0, "dataset_size": 5896.1}} | 2022-11-18T17:33:35+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "natural_language"
More Information needed | [
"# Dataset Card for \"natural_language\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"natural_language\"\n\nMore Information needed"
] |
2f071cebd6a3b6b48a2e76c5b4b6c1bde49d95ee | # Dataset Card for "github-issues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | transformer-001/github-issues | [
"region:us"
] | 2022-11-15T17:46:15+00:00 | {"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "repository_url", "dtype": "string"}, {"name": "labels_url", "dtype": "string"}, {"name": "comments_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "number", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "user", "struct": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "labels", "list": [{"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "color", "dtype": "string"}, {"name": "default", "dtype": "bool"}, {"name": "description", "dtype": "string"}]}, {"name": "state", "dtype": "string"}, {"name": "locked", "dtype": "bool"}, {"name": "assignee", "struct": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, 
{"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "assignees", "list": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "milestone", "struct": [{"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "labels_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "number", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "creator", "struct": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, 
{"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "open_issues", "dtype": "int64"}, {"name": "closed_issues", "dtype": "int64"}, {"name": "state", "dtype": "string"}, {"name": "created_at", "dtype": "timestamp[s]"}, {"name": "updated_at", "dtype": "timestamp[s]"}, {"name": "due_on", "dtype": "timestamp[s]"}, {"name": "closed_at", "dtype": "timestamp[s]"}]}, {"name": "comments", "sequence": "string"}, {"name": "created_at", "dtype": "timestamp[s]"}, {"name": "updated_at", "dtype": "timestamp[s]"}, {"name": "closed_at", "dtype": "timestamp[s]"}, {"name": "author_association", "dtype": "string"}, {"name": "active_lock_reason", "dtype": "null"}, {"name": "draft", "dtype": "bool"}, {"name": "pull_request", "struct": [{"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "diff_url", "dtype": "string"}, {"name": "patch_url", "dtype": "string"}, {"name": "merged_at", "dtype": "timestamp[s]"}]}, {"name": "body", "dtype": "string"}, {"name": "reactions", "struct": [{"name": "url", "dtype": "string"}, {"name": "total_count", "dtype": "int64"}, {"name": "+1", "dtype": "int64"}, {"name": "-1", "dtype": "int64"}, {"name": "laugh", "dtype": "int64"}, {"name": "hooray", "dtype": "int64"}, {"name": "confused", "dtype": "int64"}, {"name": "heart", "dtype": "int64"}, {"name": "rocket", "dtype": "int64"}, {"name": "eyes", "dtype": "int64"}]}, {"name": "timeline_url", "dtype": "string"}, {"name": "performed_via_github_app", "dtype": "null"}, {"name": "state_reason", "dtype": "string"}, {"name": "is_pull_request", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 18908112, "num_examples": 
5000}], "download_size": 5112946, "dataset_size": 18908112}} | 2022-11-15T17:46:29+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "github-issues"
More Information needed | [
"# Dataset Card for \"github-issues\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"github-issues\"\n\nMore Information needed"
] |
bb1fff2db16bd92b2b658a9d37a720c720d8844b | # Dataset Card for "testtyt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | helliun/testtyt | [
"region:us"
] | 2022-11-15T18:00:38+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "channel", "dtype": "string"}, {"name": "channel_id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "categories", "sequence": "string"}, {"name": "tags", "sequence": "string"}, {"name": "description", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "segments", "list": [{"name": "end", "dtype": "float64"}, {"name": "start", "dtype": "float64"}, {"name": "text", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 2138, "num_examples": 1}], "download_size": 11227, "dataset_size": 2138}} | 2022-11-15T18:00:42+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "testtyt"
More Information needed | [
"# Dataset Card for \"testtyt\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"testtyt\"\n\nMore Information needed"
] |
43d716dc64f9ede73658c2a57c66de81ca7afe95 | # Dataset Card for "test_whisper_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | juancopi81/test_whisper_test | [
"region:us"
] | 2022-11-15T20:13:37+00:00 | {"dataset_info": {"features": [{"name": "CHANNEL_NAME", "dtype": "string"}, {"name": "URL", "dtype": "string"}, {"name": "TITLE", "dtype": "string"}, {"name": "DESCRIPTION", "dtype": "string"}, {"name": "TRANSCRIPTION", "dtype": "string"}, {"name": "SEGMENTS", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 32551, "num_examples": 8}], "download_size": 39136, "dataset_size": 32551}} | 2022-11-15T21:57:02+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "test_whisper_test"
More Information needed | [
"# Dataset Card for \"test_whisper_test\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"test_whisper_test\"\n\nMore Information needed"
] |
300ee6c5e5629d042bfc07cbc406e2f330b53659 |
# Dataset Card for ACL Anthology Corpus
[](https://creativecommons.org/licenses/by-nc-sa/4.0/)
This repository provides full text and metadata for the ACL Anthology collection (80k articles/posters as of September 2022), including the .pdf files and GROBID extractions of the pdfs.
## How is this different from what ACL anthology provides and what already exists?
- We provide pdfs, full-text, references and other details extracted by grobid from the PDFs while [ACL Anthology](https://aclanthology.org/anthology+abstracts.bib.gz) only provides abstracts.
- A similar corpus, the [ACL Anthology Network](https://clair.eecs.umich.edu/aan/about.php), already exists, but it is now showing its age with just 23k papers as of Dec 2016.
```python
>>> import pandas as pd
>>> df = pd.read_parquet('acl-publication-info.74k.parquet')
>>> df
acl_id abstract full_text corpus_paper_id pdf_hash ... number volume journal editor isbn
0 O02-2002 There is a need to measure word similarity whe... There is a need to measure word similarity whe... 18022704 0b09178ac8d17a92f16140365363d8df88c757d0 ... None None None None None
1 L02-1310 8220988 8d5e31610bc82c2abc86bc20ceba684c97e66024 ... None None None None None
2 R13-1042 Thread disentanglement is the task of separati... Thread disentanglement is the task of separati... 16703040 3eb736b17a5acb583b9a9bd99837427753632cdb ... None None None None None
3 W05-0819 In this paper, we describe a word alignment al... In this paper, we describe a word alignment al... 1215281 b20450f67116e59d1348fc472cfc09f96e348f55 ... None None None None None
4 L02-1309 18078432 011e943b64a78dadc3440674419821ee080f0de3 ... None None None None None
... ... ... ... ... ... ... ... ... ... ... ...
73280 P99-1002 This paper describes recent progress and the a... This paper describes recent progress and the a... 715160 ab17a01f142124744c6ae425f8a23011366ec3ee ... None None None None None
73281 P00-1009 We present an LFG-DOP parser which uses fragme... We present an LFG-DOP parser which uses fragme... 1356246 ad005b3fd0c867667118482227e31d9378229751 ... None None None None None
73282 P99-1056 The processes through which readers evoke ment... The processes through which readers evoke ment... 7277828 924cf7a4836ebfc20ee094c30e61b949be049fb6 ... None None None None None
73283 P99-1051 This paper examines the extent to which verb d... This paper examines the extent to which verb d... 1829043 6b1f6f28ee36de69e8afac39461ee1158cd4d49a ... None None None None None
73284 P00-1013 Spoken dialogue managers have benefited from u... Spoken dialogue managers have benefited from u... 10903652 483c818c09e39d9da47103fbf2da8aaa7acacf01 ... None None None None None
[73285 rows x 21 columns]
```
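The flat schema makes quick corpus statistics a one-liner. As an illustration, the sketch below counts papers per ACL Anthology venue prefix (the leading letter of `acl_id`); the toy rows here are stand-ins for the real `acl-publication-info.74k.parquet`, which you would load with `pd.read_parquet` as above.

```python
import pandas as pd

# Stand-in frame with the same `acl_id` column as the real parquet;
# in practice: df = pd.read_parquet('acl-publication-info.74k.parquet')
df = pd.DataFrame({
    "acl_id": ["P99-1002", "P00-1009", "W05-0819", "R13-1042"],
    "full_text": ["...", "...", "...", "..."],
})

# ACL ids encode the venue in their first letter (e.g. P = ACL proceedings,
# W = workshops), so a prefix count gives a rough per-venue breakdown.
venue_counts = df["acl_id"].str[0].value_counts()
print(venue_counts.to_dict())  # e.g. {'P': 2, 'W': 1, 'R': 1}
```

The same pattern extends to any of the metadata columns listed in the table below (year, publisher, number of citations, and so on).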
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/shauryr/ACL-anthology-corpus
- **Point of Contact:** [email protected]
### Dataset Summary
Dataframe with extracted metadata (table below with details) and full text of the collection for analysis: **size 489M**
### Languages
en, zh and others
## Dataset Structure
Dataframe
### Data Instances
Each row is a paper from ACL anthology
### Data Fields
| **Column name** | **Description** |
| :---------------: | :---------------------------: |
| `acl_id` | unique ACL id |
| `abstract` | abstract extracted by GROBID |
| `full_text` | full text extracted by GROBID |
| `corpus_paper_id` | Semantic Scholar ID |
| `pdf_hash` | sha1 hash of the pdf |
| `numcitedby` | number of citations from S2 |
| `url` | link of publication |
| `publisher` | - |
| `address` | Address of conference |
| `year` | - |
| `month` | - |
| `booktitle` | - |
| `author` | list of authors |
| `title` | title of paper |
| `pages` | - |
| `doi` | - |
| `number` | - |
| `volume` | - |
| `journal` | - |
| `editor` | - |
| `isbn` | - |
## Dataset Creation
The corpus has all the papers in the ACL Anthology as of September 2022
### Source Data
- [ACL Anthology](aclanthology.org)
- [Semantic Scholar](semanticscholar.org)
# Additional Information
### Licensing Information
The ACL OCL corpus is released under the [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/). By using this corpus, you are agreeing to its usage terms.
### Citation Information
If you use this corpus in your research please use the following BibTeX entry:
@Misc{acl-ocl,
author = {Shaurya Rohatgi and Yanxia Qin and Benjamin Aw and Niranjana Unnithan and Min-Yen Kan},
title = {The ACL OCL Corpus: advancing Open science in Computational Linguistics},
howpublished = {arXiv},
year = {2022},
url = {https://huggingface.co/datasets/ACL-OCL/ACL-OCL-Corpus}
}
### Acknowledgements
We thank Semantic Scholar for providing access to the citation-related data in this corpus.
### Contributions
Thanks to [@shauryr](https://github.com/shauryr), [Yanxia Qin](https://github.com/qolina) and [Benjamin Aw](https://github.com/Benjamin-Aw-93) for adding this dataset. | WINGNUS/ACL-OCL | [
"task_categories:token-classification",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:mit",
"research papers",
"acl",
"region:us"
] | 2022-11-15T21:15:08+00:00 | {"annotations_creators": [], "language_creators": ["found"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": [], "paperswithcode_id": "acronym-identification", "pretty_name": "acl-ocl-corpus", "tags": ["research papers", "acl"], "train-eval-index": [{"col_mapping": {"labels": "tags", "tokens": "tokens"}, "config": "default", "splits": {"eval_split": "test"}, "task": "token-classification", "task_id": "entity_extraction"}]} | 2023-09-20T23:57:32+00:00 | [] | [
"en"
] | TAGS
#task_categories-token-classification #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-mit #research papers #acl #region-us
| Dataset Card for ACL Anthology Corpus
=====================================
 also including .pdf files and grobid extractions of the pdfs.
How is this different from what ACL anthology provides and what already exists?
-------------------------------------------------------------------------------
* We provide pdfs, full-text, references and other details extracted by grobid from the PDFs while ACL Anthology only provides abstracts.
* A similar corpus, the ACL Anthology Network, already exists, but it is now showing its age with just 23k papers as of Dec 2016.
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
* Dataset Creation
+ Source Data
* Additional Information
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Repository: URL
* Point of Contact: shauryr@URL
### Dataset Summary
Dataframe with extracted metadata (table below with details) and full text of the collection for analysis : size 489M
### Languages
en, zh and others
Dataset Structure
-----------------
Dataframe
### Data Instances
Each row is a paper from ACL anthology
### Data Fields
Dataset Creation
----------------
The corpus has all the papers in the ACL Anthology as of September 2022
### Source Data
* ACL Anthology
* Semantic Scholar
Additional Information
======================
### Licensing Information
The ACL OCL corpus is released under the CC BY-NC 4.0. By using this corpus, you are agreeing to its usage terms.
If you use this corpus in your research please use the following BibTeX entry:
```
@Misc{acl-ocl,
author = {Shaurya Rohatgi and Yanxia Qin and Benjamin Aw and Niranjana Unnithan and Min-Yen Kan},
title = {The ACL OCL Corpus: advancing Open science in Computational Linguistics},
howpublished = {arXiv},
year = {2022},
url = {URL
}
```
### Acknowledgements
We thank Semantic Scholar for providing access to the citation-related data in this corpus.
### Contributions
Thanks to @shauryr, Yanxia Qin and Benjamin Aw for adding this dataset.
| [
"### Dataset Summary\n\n\nDataframe with extracted metadata (table below with details) and full text of the collection for analysis : size 489M",
"### Languages\n\n\nen, zh and others\n\n\nDataset Structure\n-----------------\n\n\nDataframe",
"### Data Instances\n\n\nEach row is a paper from ACL anthology",
"### Data Fields\n\n\n\nDataset Creation\n----------------\n\n\nThe corpus has all the papers in ACL anthology - as of September'22",
"### Source Data\n\n\n* ACL Anthology\n* Semantic Scholar\n\n\nAdditional Information\n======================",
"### Licensing Information\n\n\nThe ACL OCL corpus is released under the CC BY-NC 4.0. By using this corpus, you are agreeing to its usage terms.\n\n\nIf you use this corpus in your research please use the following BibTeX entry:\n\n\n\n```\n@Misc{acl-ocl,\n author = {Shaurya Rohatgi, Yanxia Qin, Benjamin Aw, Niranjana Unnithan, Min-Yen Kan},\n title = {The ACL OCL Corpus: advancing Open science in Computational Linguistics},\n howpublished = {arXiv},\n year = {2022},\n url = {URL\n}\n\n```",
"### Acknowledgements\n\n\nWe thank Semantic Scholar for providing access to the citation-related data in this corpus.",
"### Contributions\n\n\nThanks to @shauryr, Yanxia Qin and Benjamin Aw for adding this dataset."
] | [
"TAGS\n#task_categories-token-classification #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-mit #research papers #acl #region-us \n",
"### Dataset Summary\n\n\nDataframe with extracted metadata (table below with details) and full text of the collection for analysis : size 489M",
"### Languages\n\n\nen, zh and others\n\n\nDataset Structure\n-----------------\n\n\nDataframe",
"### Data Instances\n\n\nEach row is a paper from ACL anthology",
"### Data Fields\n\n\n\nDataset Creation\n----------------\n\n\nThe corpus has all the papers in ACL anthology - as of September'22",
"### Source Data\n\n\n* ACL Anthology\n* Semantic Scholar\n\n\nAdditional Information\n======================",
"### Licensing Information\n\n\nThe ACL OCL corpus is released under the CC BY-NC 4.0. By using this corpus, you are agreeing to its usage terms.\n\n\nIf you use this corpus in your research please use the following BibTeX entry:\n\n\n\n```\n@Misc{acl-ocl,\n author = {Shaurya Rohatgi, Yanxia Qin, Benjamin Aw, Niranjana Unnithan, Min-Yen Kan},\n title = {The ACL OCL Corpus: advancing Open science in Computational Linguistics},\n howpublished = {arXiv},\n year = {2022},\n url = {URL\n}\n\n```",
"### Acknowledgements\n\n\nWe thank Semantic Scholar for providing access to the citation-related data in this corpus.",
"### Contributions\n\n\nThanks to @shauryr, Yanxia Qin and Benjamin Aw for adding this dataset."
] |
e0a908a181ab222d8b8ddb3e75e864ae4a67040d | This dataset for Ukrainian language contains 200 original sentences marked manually with 0 (negative) and 1 (positive). | SergiiGurbych/sent_anal_ukr_binary | [
"region:us"
] | 2022-11-15T23:18:40+00:00 | {} | 2022-11-20T19:18:38+00:00 | [] | [] | TAGS
#region-us
| This dataset for the Ukrainian language contains 200 original sentences, manually labeled 0 (negative) or 1 (positive). | [] | [
"TAGS\n#region-us \n"
] |
cceb4696560317e920d6512b906263bb425883a1 | Home page & Original source: https://github.com/yasumasaonoe/creak | amydeng2000/CREAK | [
"region:us"
] | 2022-11-16T01:03:14+00:00 | {} | 2023-02-24T01:13:57+00:00 | [] | [] | TAGS
#region-us
| Home page & Original source: URL | [] | [
"TAGS\n#region-us \n"
] |
ea91f2e742ddc5791c57f27b2939a836e43314ba | # Dataset Card for "olm-october-2022-tokenized-512"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | olm/olm-october-2022-tokenized-512 | [
"region:us"
] | 2022-11-16T01:24:02+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "special_tokens_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 79589759460, "num_examples": 25807315}], "download_size": 21375344353, "dataset_size": 79589759460}} | 2022-11-16T01:47:11+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "olm-october-2022-tokenized-512"
More Information needed | [
"# Dataset Card for \"olm-october-2022-tokenized-512\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"olm-october-2022-tokenized-512\"\n\nMore Information needed"
] |
a5afb4e4fb86585ce4fba473c7660db197bbdfe9 | # Dataset Card for "diana_uribe"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | juancopi81/diana_uribe | [
"task_categories:automatic-speech-recognition",
"whisper",
"whispering",
"base",
"region:us"
] | 2022-11-16T01:38:32+00:00 | {"task_categories": ["automatic-speech-recognition"], "dataset_info": {"features": [{"name": "CHANNEL_NAME", "dtype": "string"}, {"name": "URL", "dtype": "string"}, {"name": "TITLE", "dtype": "string"}, {"name": "DESCRIPTION", "dtype": "string"}, {"name": "TRANSCRIPTION", "dtype": "string"}, {"name": "SEGMENTS", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 23288573, "num_examples": 370}], "download_size": 11339946, "dataset_size": 23288573}, "tags": ["whisper", "whispering", "base"]} | 2022-11-19T19:57:00+00:00 | [] | [] | TAGS
#task_categories-automatic-speech-recognition #whisper #whispering #base #region-us
| # Dataset Card for "diana_uribe"
More Information needed | [
"# Dataset Card for \"diana_uribe\"\n\nMore Information needed"
] | [
"TAGS\n#task_categories-automatic-speech-recognition #whisper #whispering #base #region-us \n",
"# Dataset Card for \"diana_uribe\"\n\nMore Information needed"
] |
8e54aa032996e146b47b98d91a8ce414a616b554 | # Dataset Card for "olm-october-2022-tokenized-1024"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | olm/olm-october-2022-tokenized-1024 | [
"region:us"
] | 2022-11-16T02:16:14+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "special_tokens_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 79468727400, "num_examples": 12909150}], "download_size": 21027268683, "dataset_size": 79468727400}} | 2022-11-16T02:50:17+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "olm-october-2022-tokenized-1024"
More Information needed | [
"# Dataset Card for \"olm-october-2022-tokenized-1024\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"olm-october-2022-tokenized-1024\"\n\nMore Information needed"
] |
e963e16ce22be14a22b9f9760f5d241935b4d650 |
# Dataset Card for Teyvat BLIP captions
Dataset used to train [Teyvat characters text to image model](https://github.com/hpcaitech/ColossalAI/tree/main/examples/images/diffusion).
BLIP generated captions for character images from the [genshin-impact fandom wiki](https://genshin-impact.fandom.com/wiki/Character#Playable_Characters) and the [biligame wiki for genshin impact](https://wiki.biligame.com/ys/%E8%A7%92%E8%89%B2).
For each row the dataset contains `image` and `text` keys. `image` is a varying size PIL png, and `text` is the accompanying text caption. Only a train split is provided.
The `text` includes the tags `Teyvat`, `Name`, `Element`, `Weapon`, `Region`, `Model type`, and `Description`; the `Description` is captioned with the [pre-trained BLIP model](https://github.com/salesforce/BLIP).
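Because each caption packs its tags into a single comma-separated string, it can be split back into structured fields for filtering or re-captioning. A minimal sketch — the helper below is ours, not part of the dataset, and assumes field values themselves contain no commas (true of the samples shown):

```python
def parse_teyvat_caption(text: str) -> dict:
    """Split a 'Teyvat, Name:..., Element:...' caption into a field dict."""
    parts = [p.strip() for p in text.split(",")]
    fields = {"corpus": parts[0]}  # the leading 'Teyvat' tag has no key
    for part in parts[1:]:
        key, _, value = part.partition(":")
        fields[key] = value
    return fields

caption = ("Teyvat, Name:Ganyu, Element:Cryo, Weapon:Bow, Region:Liyue, "
           "Model type:Medium Female, "
           "Description:an anime character with blue hair and blue eyes")
print(parse_teyvat_caption(caption)["Element"])  # prints: Cryo
```

The resulting dict makes it easy to, say, keep only `Element:Cryo` rows when building a character-specific fine-tuning subset.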
## Examples
<img src = "https://huggingface.co/datasets/Fazzie/Teyvat/resolve/main/data/Ganyu_001.png" title = "Ganyu_001.png" style="max-width: 20%;" >
> Teyvat, Name:Ganyu, Element:Cryo, Weapon:Bow, Region:Liyue, Model type:Medium Female, Description:an anime character with blue hair and blue eyes
<img src = "https://huggingface.co/datasets/Fazzie/Teyvat/resolve/main/data/Ganyu_002.png" title = "Ganyu_002.png" style="max-width: 20%;" >
> Teyvat, Name:Ganyu, Element:Cryo, Weapon:Bow, Region:Liyue, Model type:Medium Female, Description:an anime character with blue hair and blue eyes
<img src = "https://huggingface.co/datasets/Fazzie/Teyvat/resolve/main/data/Keqing_003.png" title = "Keqing_003.png" style="max-width: 20%;" >
> Teyvat, Name:Keqing, Element:Electro, Weapon:Sword, Region:Liyue, Model type:Medium Female, Description:a anime girl with long white hair and blue eyes
<img src = "https://huggingface.co/datasets/Fazzie/Teyvat/resolve/main/data/Keqing_004.png" title = "Keqing_004.png" style="max-width: 20%;" >
> Teyvat, Name:Keqing, Element:Electro, Weapon:Sword, Region:Liyue, Model type:Medium Female, Description:an anime character wearing a purple dress and cat ears | Fazzie/Teyvat | [
"task_categories:text-to-image",
"annotations_creators:no-annotation",
"language_creators:found",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | 2022-11-16T03:47:33+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "source_datasets": ["original"], "task_categories": ["text-to-image"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 71202, "num_examples": 234}], "download_size": 466995417, "dataset_size": 71202}} | 2022-12-13T02:09:42+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-to-image #annotations_creators-no-annotation #language_creators-found #source_datasets-original #language-English #license-unknown #region-us
|
# Dataset Card for Teyvat BLIP captions
Dataset used to train Teyvat characters text to image model.
BLIP generated captions for character images from the genshin-impact fandom wiki and the biligame wiki for genshin impact.
For each row the dataset contains 'image' and 'text' keys. 'image' is a varying size PIL png, and 'text' is the accompanying text caption. Only a train split is provided.
The 'text' includes the tags 'Teyvat', 'Name', 'Element', 'Weapon', 'Region', 'Model type', and 'Description'; the 'Description' is captioned with the pre-trained BLIP model.
## Examples
<img src = "URL title = "Ganyu_001.png" style="max-width: 20%;" >
> Teyvat, Name:Ganyu, Element:Cryo, Weapon:Bow, Region:Liyue, Model type:Medium Female, Description:an anime character with blue hair and blue eyes
<img src = "URL title = "Ganyu_002.png" style="max-width: 20%;" >
> Teyvat, Name:Ganyu, Element:Cryo, Weapon:Bow, Region:Liyue, Model type:Medium Female, Description:an anime character with blue hair and blue eyes
<img src = "URL title = "Keqing_003.png" style="max-width: 20%;" >
> Teyvat, Name:Keqing, Element:Electro, Weapon:Sword, Region:Liyue, Model type:Medium Female, Description:a anime girl with long white hair and blue eyes
<img src = "URL title = "Keqing_004.png" style="max-width: 20%;" >
> Teyvat, Name:Keqing, Element:Electro, Weapon:Sword, Region:Liyue, Model type:Medium Female, Description:an anime character wearing a purple dress and cat ears | [
"# Dataset Card for Teyvat BLIP captions\nDataset used to train Teyvat characters text to image model.\n\nBLIP generated captions for characters images from genshin-impact fandom wikiand biligame wiki for genshin impact.\n\nFor each row the dataset contains 'image' and 'text' keys. 'image' is a varying size PIL png, and 'text' is the accompanying text caption. Only a train split is provided.\n\nThe 'text' include the tag 'Teyvat', 'Name','Element', 'Weapon', 'Region', 'Model type', and 'Description', the 'Description' is captioned with the pre-trained BLIP model.",
"## Examples\n\n<img src = \"URL title = \"Ganyu_001.png\" style=\"max-width: 20%;\" >\n\n> Teyvat, Name:Ganyu, Element:Cryo, Weapon:Bow, Region:Liyue, Model type:Medium Female, Description:an anime character with blue hair and blue eyes\n\n<img src = \"URL title = \"Ganyu_002.png\" style=\"max-width: 20%;\" >\n\n> Teyvat, Name:Ganyu, Element:Cryo, Weapon:Bow, Region:Liyue, Model type:Medium Female, Description:an anime character with blue hair and blue eyes\n\n<img src = \"URL title = \"Keqing_003.png\" style=\"max-width: 20%;\" >\n\n> Teyvat, Name:Keqing, Element:Electro, Weapon:Sword, Region:Liyue, Model type:Medium Female, Description:a anime girl with long white hair and blue eyes\n\n<img src = \"URL title = \"Keqing_004.png\" style=\"max-width: 20%;\" >\n\n> Teyvat, Name:Keqing, Element:Electro, Weapon:Sword, Region:Liyue, Model type:Medium Female, Description:an anime character wearing a purple dress and cat ears"
] | [
"TAGS\n#task_categories-text-to-image #annotations_creators-no-annotation #language_creators-found #source_datasets-original #language-English #license-unknown #region-us \n",
"# Dataset Card for Teyvat BLIP captions\nDataset used to train Teyvat characters text to image model.\n\nBLIP generated captions for characters images from genshin-impact fandom wikiand biligame wiki for genshin impact.\n\nFor each row the dataset contains 'image' and 'text' keys. 'image' is a varying size PIL png, and 'text' is the accompanying text caption. Only a train split is provided.\n\nThe 'text' include the tag 'Teyvat', 'Name','Element', 'Weapon', 'Region', 'Model type', and 'Description', the 'Description' is captioned with the pre-trained BLIP model.",
"## Examples\n\n<img src = \"URL title = \"Ganyu_001.png\" style=\"max-width: 20%;\" >\n\n> Teyvat, Name:Ganyu, Element:Cryo, Weapon:Bow, Region:Liyue, Model type:Medium Female, Description:an anime character with blue hair and blue eyes\n\n<img src = \"URL title = \"Ganyu_002.png\" style=\"max-width: 20%;\" >\n\n> Teyvat, Name:Ganyu, Element:Cryo, Weapon:Bow, Region:Liyue, Model type:Medium Female, Description:an anime character with blue hair and blue eyes\n\n<img src = \"URL title = \"Keqing_003.png\" style=\"max-width: 20%;\" >\n\n> Teyvat, Name:Keqing, Element:Electro, Weapon:Sword, Region:Liyue, Model type:Medium Female, Description:a anime girl with long white hair and blue eyes\n\n<img src = \"URL title = \"Keqing_004.png\" style=\"max-width: 20%;\" >\n\n> Teyvat, Name:Keqing, Element:Electro, Weapon:Sword, Region:Liyue, Model type:Medium Female, Description:an anime character wearing a purple dress and cat ears"
] |
1c510d8fba5836df9983f4600a832f226667892d | # Dataset Card for "espn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | illorg/espn | [
"region:us"
] | 2022-11-16T04:59:06+00:00 | {"dataset_info": {"features": [{"name": "CHANNEL_NAME", "dtype": "string"}, {"name": "URL", "dtype": "string"}, {"name": "TITLE", "dtype": "string"}, {"name": "DESCRIPTION", "dtype": "string"}, {"name": "TRANSCRIPTION", "dtype": "string"}, {"name": "SEGMENTS", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 44761, "num_examples": 4}], "download_size": 28603, "dataset_size": 44761}} | 2022-11-16T04:59:09+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "espn"
More Information needed | [
"# Dataset Card for \"espn\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"espn\"\n\nMore Information needed"
] |
7e5ded70f2d2bb9ce0119a4c11507aad4205b5f6 | # AutoTrain Dataset for project: mm
## Dataset Description
This dataset has been automatically processed by AutoTrain for project mm.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "Email from attorney A Dutkanych regarding executed Settlement Agreement",
"target": "Email from attorney A Dutkanych regarding executed Settlement Agreement"
},
{
"text": "Telephone conference with A Royer regarding additional factual background information relating to O Stapletons Charge of Discrimination allegations",
"target": "Telephone conference with A Royer regarding additional factual background information as to O Stapletons Charge of Discrimination allegations"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
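As a small illustrative sketch (the helper below is not part of AutoTrain; the record is copied from the sample shown earlier), a row can be checked against this two-string-field schema in plain Python:

```python
# Minimal sketch: validate that a record carries the two declared
# string fields ("text" and "target"). The record is taken from the
# sample shown above; the is_valid helper is purely illustrative.
record = {
    "text": "Email from attorney A Dutkanych regarding executed Settlement Agreement",
    "target": "Email from attorney A Dutkanych regarding executed Settlement Agreement",
}

EXPECTED_FIELDS = {"text": str, "target": str}

def is_valid(rec):
    """Return True if rec has exactly the expected fields with string values."""
    return set(rec) == set(EXPECTED_FIELDS) and all(
        isinstance(rec[name], typ) for name, typ in EXPECTED_FIELDS.items()
    )

print(is_valid(record))  # True
```

The same check can be extended to whole splits by looping over rows before training.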
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 88 |
| valid | 22 |
| alanila/autotrain-data-mm | [
"region:us"
] | 2022-11-16T06:27:09+00:00 | {"task_categories": ["conditional-text-generation"]} | 2022-11-16T06:27:30+00:00 | [] | [] | TAGS
#region-us
| AutoTrain Dataset for project: mm
=================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project mm.
### Languages
The BCP-47 code for the dataset's language is unk.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
5fbbc2483212a46d4b9ee29e0eef8ac27c4d77c8 | # Romanian paraphrase dataset
This dataset was created by me, specifically for paraphrasing.
[t5-small-paraphrase-ro](https://huggingface.co/BlackKakapo/t5-small-paraphrase-ro)
[t5-small-paraphrase-ro-v2](https://huggingface.co/BlackKakapo/t5-small-paraphrase-ro-v2)
[t5-base-paraphrase-ro](https://huggingface.co/BlackKakapo/t5-base-paraphrase-ro)
[t5-base-paraphrase-ro-v2](https://huggingface.co/BlackKakapo/t5-base-paraphrase-ro-v2)
Here you can find ~100k examples of paraphrase. | BlackKakapo/paraphrase-ro | [
"task_categories:text2text-generation",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:ro",
"license:apache-2.0",
"region:us"
] | 2022-11-16T07:58:38+00:00 | {"language": "ro", "license": "apache-2.0", "multilinguality": "monolingual", "size_categories": "10K<n<100K", "task_categories": ["text2text-generation"]} | 2023-04-19T05:56:17+00:00 | [] | [
"ro"
] | TAGS
#task_categories-text2text-generation #multilinguality-monolingual #size_categories-10K<n<100K #language-Romanian #license-apache-2.0 #region-us
| # Romanian paraphrase dataset
This dataset was created by me, specifically for paraphrasing.
t5-small-paraphrase-ro
t5-small-paraphrase-ro-v2
t5-base-paraphrase-ro
t5-base-paraphrase-ro-v2
Here you can find ~100k examples of paraphrase. | [
"# Romanian paraphrase dataset\nThis data set was created by me, special for paraphrase\n\nt5-small-paraphrase-ro\nt5-small-paraphrase-ro-v2\nt5-base-paraphrase-ro\nt5-base-paraphrase-ro-v2\n\nHere you can find ~100k examples of paraphrase."
] | [
"TAGS\n#task_categories-text2text-generation #multilinguality-monolingual #size_categories-10K<n<100K #language-Romanian #license-apache-2.0 #region-us \n",
"# Romanian paraphrase dataset\nThis data set was created by me, special for paraphrase\n\nt5-small-paraphrase-ro\nt5-small-paraphrase-ro-v2\nt5-base-paraphrase-ro\nt5-base-paraphrase-ro-v2\n\nHere you can find ~100k examples of paraphrase."
] |
0e212142427b14722bc7ebd85e95fe2ed83dbcc7 | # Romanian grammar dataset
This dataset was created by me, specifically for grammar correction.
Here you can find:
~1600k examples of grammar (TRAIN).
~220k examples of grammar (TEST). | BlackKakapo/grammar-ro | [
"task_categories:text2text-generation",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:ro",
"license:apache-2.0",
"region:us"
] | 2022-11-16T08:03:13+00:00 | {"language": "ro", "license": "apache-2.0", "multilinguality": "monolingual", "size_categories": "10K<n<100K", "task_categories": ["text2text-generation"]} | 2023-04-19T05:56:48+00:00 | [] | [
"ro"
] | TAGS
#task_categories-text2text-generation #multilinguality-monolingual #size_categories-10K<n<100K #language-Romanian #license-apache-2.0 #region-us
| # Romanian grammar dataset
This dataset was created by me, specifically for grammar correction.
Here you can find:
~1600k examples of grammar (TRAIN).
~220k examples of grammar (TEST). | [
"# Romanian grammar dataset\nThis data set was created by me, special for grammar \n\n\nHere you can find:\n~1600k examples of grammar (TRAIN).\n~220k examples of grammar (TEST)."
] | [
"TAGS\n#task_categories-text2text-generation #multilinguality-monolingual #size_categories-10K<n<100K #language-Romanian #license-apache-2.0 #region-us \n",
"# Romanian grammar dataset\nThis data set was created by me, special for grammar \n\n\nHere you can find:\n~1600k examples of grammar (TRAIN).\n~220k examples of grammar (TEST)."
] |
6331ea3b86d2c8f414dc60da4a1a6d6f560df0cf | # Dataset Card for "whisper-transcripts-linustechtips"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Whispering-GPT](https://github.com/matallanas/whisper_gpt_pipeline)
- **Repository:** [whisper_gpt_pipeline](https://github.com/matallanas/whisper_gpt_pipeline)
- **Paper:** [whisper](https://cdn.openai.com/papers/whisper.pdf) and [gpt](https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf)
- **Point of Contact:** [Whispering-GPT organization](https://huggingface.co/Whispering-GPT)
### Dataset Summary
This dataset was created by applying whisper to the videos of the YouTube channel [Linus Tech Tips](https://www.youtube.com/channel/UCXuqSBlHAE6Xw-yeJA0Tunw). The transcriptions were generated with a medium-size whisper model.
### Languages
- **Language**: English
## Dataset Structure
The dataset contains a single train split; its fields are described below.
### Data Fields
The dataset is composed of:
- **id**: Id of the youtube video.
- **channel**: Name of the channel.
- **channel\_id**: Id of the youtube channel.
- **title**: Title given to the video.
- **categories**: Category of the video.
- **description**: Description added by the author.
- **text**: Whole transcript of the video.
- **segments**: A list with the time and transcription of the video.
  - **start**: When the transcription starts.
- **end**: When the transcription ends.
- **text**: The text of the transcription.
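As an illustration of how these fields fit together (the segment values below are invented, not taken from the corpus), the `segments` list can be used to rebuild a transcript and measure the total transcribed time:

```python
# Hypothetical example of the "segments" structure described above:
# each entry has a start time, an end time, and the transcribed text.
segments = [
    {"start": 0.0, "end": 4.2, "text": "Welcome back to the channel."},
    {"start": 4.2, "end": 9.8, "text": "Today we are testing a new CPU."},
]

# Rebuild the whole transcript from the per-segment texts.
full_text = " ".join(seg["text"] for seg in segments)

# Total transcribed duration in seconds.
duration = sum(seg["end"] - seg["start"] for seg in segments)

print(full_text)
print(round(duration, 1))  # 9.8
```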
### Data Splits
- Train split.
## Dataset Creation
### Source Data
The transcriptions are from the videos of [Linus Tech Tips Channel](https://www.youtube.com/channel/UCXuqSBlHAE6Xw-yeJA0Tunw)
### Contributions
Thanks to [Whispering-GPT](https://huggingface.co/Whispering-GPT) organization for adding this dataset. | Whispering-GPT/whisper-transcripts-linustechtips | [
"task_categories:automatic-speech-recognition",
"whisper",
"whispering",
"medium",
"region:us"
] | 2022-11-16T08:29:52+00:00 | {"task_categories": ["automatic-speech-recognition"], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "channel", "dtype": "string"}, {"name": "channel_id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "categories", "sequence": "string"}, {"name": "tags", "sequence": "string"}, {"name": "description", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "segments", "list": [{"name": "start", "dtype": "float64"}, {"name": "end", "dtype": "float64"}, {"name": "text", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 177776633.92326075, "num_examples": 5655}], "download_size": 100975518, "dataset_size": 177776633.92326075}, "tags": ["whisper", "whispering", "medium"]} | 2022-12-06T13:10:26+00:00 | [] | [] | TAGS
#task_categories-automatic-speech-recognition #whisper #whispering #medium #region-us
| # Dataset Card for "whisper-transcripts-linustechtips"
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Contributions
## Dataset Description
- Homepage: Whispering-GPT
- Repository: whisper_gpt_pipeline
- Paper: whisper and gpt
- Point of Contact: Whispering-GPT organization
### Dataset Summary
This dataset was created by applying whisper to the videos of the YouTube channel Linus Tech Tips. The transcriptions were generated with a medium-size whisper model.
### Languages
- Language: English
## Dataset Structure
The dataset contains a single train split; its fields are described below.
### Data Fields
The dataset is composed of:
- id: Id of the youtube video.
- channel: Name of the channel.
- channel\_id: Id of the youtube channel.
- title: Title given to the video.
- categories: Category of the video.
- description: Description added by the author.
- text: Whole transcript of the video.
- segments: A list with the time and transcription of the video.
  - start: When the transcription starts.
- end: When the transcription ends.
- text: The text of the transcription.
### Data Splits
- Train split.
## Dataset Creation
### Source Data
The transcriptions are from the videos of Linus Tech Tips Channel
### Contributions
Thanks to Whispering-GPT organization for adding this dataset. | [
"# Dataset Card for \"whisper-transcripts-linustechtips\"",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Contributions",
"## Dataset Description\n\n- Homepage: Whispering-GPT\n- Repository: whisper_gpt_pipeline\n- Paper: whisper and gpt\n- Point of Contact: Whispering-GPT organization",
"### Dataset Summary\n\nThis dataset is created by applying whisper to the videos of the Youtube channel Linus Tech Tips. The dataset was created a medium size whisper model.",
"### Languages\n\n- Language: English",
"## Dataset Structure\n\nThe dataset",
"### Data Fields\n\nThe dataset is composed by:\n- id: Id of the youtube video.\n- channel: Name of the channel.\n- channel\\_id: Id of the youtube channel.\n- title: Title given to the video.\n- categories: Category of the video.\n- description: Description added by the author.\n- text: Whole transcript of the video.\n- segments: A list with the time and transcription of the video.\n - start: When started the trancription.\n - end: When the transcription ends.\n - text: The text of the transcription.",
"### Data Splits\n\n- Train split.",
"## Dataset Creation",
"### Source Data\n\nThe transcriptions are from the videos of Linus Tech Tips Channel",
"### Contributions\n\nThanks to Whispering-GPT organization for adding this dataset."
] | [
"TAGS\n#task_categories-automatic-speech-recognition #whisper #whispering #medium #region-us \n",
"# Dataset Card for \"whisper-transcripts-linustechtips\"",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Contributions",
"## Dataset Description\n\n- Homepage: Whispering-GPT\n- Repository: whisper_gpt_pipeline\n- Paper: whisper and gpt\n- Point of Contact: Whispering-GPT organization",
"### Dataset Summary\n\nThis dataset is created by applying whisper to the videos of the Youtube channel Linus Tech Tips. The dataset was created a medium size whisper model.",
"### Languages\n\n- Language: English",
"## Dataset Structure\n\nThe dataset",
"### Data Fields\n\nThe dataset is composed by:\n- id: Id of the youtube video.\n- channel: Name of the channel.\n- channel\\_id: Id of the youtube channel.\n- title: Title given to the video.\n- categories: Category of the video.\n- description: Description added by the author.\n- text: Whole transcript of the video.\n- segments: A list with the time and transcription of the video.\n - start: When started the trancription.\n - end: When the transcription ends.\n - text: The text of the transcription.",
"### Data Splits\n\n- Train split.",
"## Dataset Creation",
"### Source Data\n\nThe transcriptions are from the videos of Linus Tech Tips Channel",
"### Contributions\n\nThanks to Whispering-GPT organization for adding this dataset."
] |
40ea8f976ff90ee137ac6ea16eeebf36fd33c8ce | # Dataset Card for [Dataset Name]
## Table of Contents
- [Dataset Card for [Dataset Name]](#dataset-card-for-dataset-name)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/venelink/ETPC/
- **Repository:**
- **Paper:** [ETPC - A Paraphrase Identification Corpus Annotated with Extended Paraphrase Typology and Negation](http://www.lrec-conf.org/proceedings/lrec2018/pdf/661.pdf)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
We present the Extended Paraphrase Typology (EPT) and the Extended Typology Paraphrase Corpus (ETPC). The EPT typology addresses several practical limitations of existing paraphrase typologies: it is the first typology that copes with the non-paraphrase pairs in the paraphrase identification corpora and distinguishes between contextual and habitual paraphrase types. ETPC is the largest corpus to date annotated with atomic paraphrase types. It is the first corpus with detailed annotation of both the paraphrase and the non-paraphrase pairs and the first corpus annotated with paraphrase and negation. Both new resources contribute to better understanding the paraphrase phenomenon, and allow for studying the relationship between paraphrasing and negation. To the developers of Paraphrase Identification systems ETPC corpus offers better means for evaluation and error analysis. Furthermore, the EPT typology and ETPC corpus emphasize the relationship with other areas of NLP such as Semantic Similarity, Textual Entailment, Summarization and Simplification.
### Supported Tasks and Leaderboards
- `text-classification`
### Languages
The text in the dataset is in English (`en`).
## Dataset Structure
### Data Fields
- `idx`: Monotonically increasing index ID.
- `sentence1`: Complete sentence expressing an opinion about a film.
- `sentence2`: Complete sentence expressing an opinion about a film.
- `etpc_label`: Whether the text-pair is a paraphrase, either "yes" (1) or "no" (0), according to the ETPC annotation schema.
- `mrpc_label`: Whether the text-pair is a paraphrase, either "yes" (1) or "no" (0), according to the MRPC annotation schema.
- `negation`: Whether one sentence is a negation of the other, either "yes" (1) or "no" (0).
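Because each pair carries both the ETPC and the original MRPC judgement, a natural first analysis is to see where the two schemas disagree. A minimal sketch over invented rows (field names as listed above, values hypothetical):

```python
# Hypothetical ETPC-style rows; each row carries both label schemas
# (1 = paraphrase, 0 = not a paraphrase). Values are made up.
rows = [
    {"idx": 0, "etpc_label": 1, "mrpc_label": 1, "negation": 0},
    {"idx": 1, "etpc_label": 0, "mrpc_label": 1, "negation": 0},
    {"idx": 2, "etpc_label": 1, "mrpc_label": 1, "negation": 1},
]

# Indices where the ETPC and MRPC annotation schemas disagree.
disagreements = [r["idx"] for r in rows if r["etpc_label"] != r["mrpc_label"]]
agreement_rate = 1 - len(disagreements) / len(rows)

print(disagreements)             # [1]
print(round(agreement_rate, 2))  # 0.67
```

The same comparison over the real corpus is useful for error analysis of paraphrase identification systems, as the summary above suggests.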
### Data Splits
train: 5801
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
Rotten Tomatoes reviewers.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Unknown.
### Citation Information
```bibtex
@inproceedings{kovatchev-etal-2018-etpc,
title = "{ETPC} - A Paraphrase Identification Corpus Annotated with Extended Paraphrase Typology and Negation",
author = "Kovatchev, Venelin and
Mart{\'\i}, M. Ant{\`o}nia and
Salam{\'o}, Maria",
booktitle = "Proceedings of the Eleventh International Conference on Language Resources and Evaluation ({LREC} 2018)",
month = may,
year = "2018",
address = "Miyazaki, Japan",
publisher = "European Language Resources Association (ELRA)",
url = "https://aclanthology.org/L18-1221",
}
```
### Contributions
Thanks to [@jpwahle](https://github.com/jpwahle) for adding this dataset. | jpwahle/etpc | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | 2022-11-16T08:54:46+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "pretty_name": "Extended Paraphrase Typology Corpus"} | 2023-10-02T15:05:00+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #task_ids-sentiment-classification #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-unknown #region-us
| # Dataset Card for [Dataset Name]
## Table of Contents
- [Dataset Card for [Dataset Name]](#dataset-card-for-dataset-name)
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Initial Data Collection and Normalization
- Who are the source language producers?
- Annotations
- Annotation process
- Who are the annotators?
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Repository:
- Paper: ETPC - A Paraphrase Identification Corpus Annotated with Extended Paraphrase Typology and Negation
- Leaderboard:
- Point of Contact:
### Dataset Summary
We present the Extended Paraphrase Typology (EPT) and the Extended Typology Paraphrase Corpus (ETPC). The EPT typology addresses several practical limitations of existing paraphrase typologies: it is the first typology that copes with the non-paraphrase pairs in the paraphrase identification corpora and distinguishes between contextual and habitual paraphrase types. ETPC is the largest corpus to date annotated with atomic paraphrase types. It is the first corpus with detailed annotation of both the paraphrase and the non-paraphrase pairs and the first corpus annotated with paraphrase and negation. Both new resources contribute to better understanding the paraphrase phenomenon, and allow for studying the relationship between paraphrasing and negation. To the developers of Paraphrase Identification systems ETPC corpus offers better means for evaluation and error analysis. Furthermore, the EPT typology and ETPC corpus emphasize the relationship with other areas of NLP such as Semantic Similarity, Textual Entailment, Summarization and Simplification.
### Supported Tasks and Leaderboards
- 'text-classification'
### Languages
The text in the dataset is in English ('en').
## Dataset Structure
### Data Fields
- 'idx': Monotonically increasing index ID.
- 'sentence1': Complete sentence expressing an opinion about a film.
- 'sentence2': Complete sentence expressing an opinion about a film.
- 'etpc_label': Whether the text-pair is a paraphrase, either "yes" (1) or "no" (0), according to the ETPC annotation schema.
- 'mrpc_label': Whether the text-pair is a paraphrase, either "yes" (1) or "no" (0), according to the MRPC annotation schema.
- 'negation': Whether one sentence is a negation of the other, either "yes" (1) or "no" (0).
### Data Splits
train: 5801
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
Rotten Tomatoes reviewers.
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
Unknown.
### Contributions
Thanks to @jpwahle for adding this dataset. | [
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n- [Dataset Card for [Dataset Name]](#dataset-card-for-dataset-name)\n - Table of Contents\n - Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n - Dataset Structure\n - Data Fields\n - Data Splits\n - Dataset Creation\n - Curation Rationale\n - Source Data\n - Initial Data Collection and Normalization\n - Who are the source language producers?\n - Annotations\n - Annotation process\n - Who are the annotators?\n - Personal and Sensitive Information\n - Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n - Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n- Homepage: URL\n- Repository:\n- Paper: ETPC - A Paraphrase Identification Corpus Annotated with Extended Paraphrase Typology and Negation\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\nWe present the Extended Paraphrase Typology (EPT) and the Extended Typology Paraphrase Corpus (ETPC). The EPT typology addresses several practical limitations of existing paraphrase typologies: it is the first typology that copes with the non-paraphrase pairs in the paraphrase identification corpora and distinguishes between contextual and habitual paraphrase types. ETPC is the largest corpus to date annotated with atomic paraphrase types. It is the first corpus with detailed annotation of both the paraphrase and the non-paraphrase pairs and the first corpus annotated with paraphrase and negation. Both new resources contribute to better understanding the paraphrase phenomenon, and allow for studying the relationship between paraphrasing and negation. To the developers of Paraphrase Identification systems ETPC corpus offers better means for evaluation and error analysis. Furthermore, the EPT typology and ETPC corpus emphasize the relationship with other areas of NLP such as Semantic Similarity, Textual Entailment, Summarization and Simplification.",
"### Supported Tasks and Leaderboards\n- 'text-classification'",
"### Languages\nThe text in the dataset is in English ('en').",
"## Dataset Structure",
"### Data Fields\n- 'idx': Monotonically increasing index ID.\n- 'sentence1': Complete sentence expressing an opinion about a film.\n- 'sentence2': Complete sentence expressing an opinion about a film.\n- 'etpc_label': Whether the text-pair is a paraphrase, either \"yes\" (1) or \"no\" (0) according to etpc annotation schema.\n- 'mrpc_label': Whether the text-pair is a paraphrase, either \"yes\" (1) or \"no\" (0) according to mrpc annotation schema.\n- 'negation': Whether on sentence is a negation of another, either \"yes\" (1) or \"no\" (0).",
"### Data Splits\ntrain: 5801",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?\nRotten Tomatoes reviewers.",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\nUnknown.",
"### Contributions\nThanks to @jpwahle for adding this dataset."
] | [
"TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-unknown #region-us \n",
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n- [Dataset Card for [Dataset Name]](#dataset-card-for-dataset-name)\n - Table of Contents\n - Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n - Dataset Structure\n - Data Fields\n - Data Splits\n - Dataset Creation\n - Curation Rationale\n - Source Data\n - Initial Data Collection and Normalization\n - Who are the source language producers?\n - Annotations\n - Annotation process\n - Who are the annotators?\n - Personal and Sensitive Information\n - Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n - Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n- Homepage: URL\n- Repository:\n- Paper: ETPC - A Paraphrase Identification Corpus Annotated with Extended Paraphrase Typology and Negation\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\nWe present the Extended Paraphrase Typology (EPT) and the Extended Typology Paraphrase Corpus (ETPC). The EPT typology addresses several practical limitations of existing paraphrase typologies: it is the first typology that copes with the non-paraphrase pairs in the paraphrase identification corpora and distinguishes between contextual and habitual paraphrase types. ETPC is the largest corpus to date annotated with atomic paraphrase types. It is the first corpus with detailed annotation of both the paraphrase and the non-paraphrase pairs and the first corpus annotated with paraphrase and negation. Both new resources contribute to better understanding the paraphrase phenomenon, and allow for studying the relationship between paraphrasing and negation. To the developers of Paraphrase Identification systems ETPC corpus offers better means for evaluation and error analysis. Furthermore, the EPT typology and ETPC corpus emphasize the relationship with other areas of NLP such as Semantic Similarity, Textual Entailment, Summarization and Simplification.",
"### Supported Tasks and Leaderboards\n- 'text-classification'",
"### Languages\nThe text in the dataset is in English ('en').",
"## Dataset Structure",
"### Data Fields\n- 'idx': Monotonically increasing index ID.\n- 'sentence1': Complete sentence expressing an opinion about a film.\n- 'sentence2': Complete sentence expressing an opinion about a film.\n- 'etpc_label': Whether the text-pair is a paraphrase, either \"yes\" (1) or \"no\" (0) according to etpc annotation schema.\n- 'mrpc_label': Whether the text-pair is a paraphrase, either \"yes\" (1) or \"no\" (0) according to mrpc annotation schema.\n- 'negation': Whether on sentence is a negation of another, either \"yes\" (1) or \"no\" (0).",
"### Data Splits\ntrain: 5801",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?\nRotten Tomatoes reviewers.",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\nUnknown.",
"### Contributions\nThanks to @jpwahle for adding this dataset."
] |
7d09ef7987036af7b3c83a9375e4ee030891c616 |
# Dataset Card for ATCOSIM corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages and Other Details](#languages-and-other-details)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [ATCOSIM homepage](https://www.spsc.tugraz.at/databases-and-tools/atcosim-air-traffic-control-simulation-speech-corpus.html)
- **Repository:** [GitHub repository (used in research)](https://github.com/idiap/w2v2-air-traffic)
- **Paper:** [The ATCOSIM Corpus of Non-Prompted Clean Air Traffic Control Speech](https://aclanthology.org/L08-1507/)
- **Paper of this research:** [How Does Pre-trained Wav2Vec 2.0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications](https://arxiv.org/abs/2203.16822)
### Dataset Summary
The ATCOSIM Air Traffic Control Simulation Speech corpus is a speech database of air traffic control (ATC) operator speech, provided by Graz University of Technology (TUG) and Eurocontrol Experimental Centre (EEC). It consists of ten hours of speech data, which were recorded during ATC real-time simulations using a close-talk headset microphone. The utterances are in English language and pronounced by ten non-native speakers. The database includes orthographic transcriptions and additional information on speakers and recording sessions. It was recorded and annotated by Konrad Hofbauer ([description here](https://www.spsc.tugraz.at/databases-and-tools/atcosim-air-traffic-control-simulation-speech-corpus.html)).
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`. Already adapted/fine-tuned models are available here --> [XLS-R-300m](https://huggingface.co/Jzuluaga/wav2vec2-large-960h-lv60-self-en-atc-atcosim).
### Languages and other details
The text and the recordings are in English. The participating controllers were all actively employed air traffic controllers and possessed professional experience in the simulated sectors. The six male and four female controllers were of either German or Swiss nationality and had German, Swiss German or Swiss French native tongue. The controllers had agreed to the recording of their voice for the purpose of language analysis as well as for research and development in speech technologies, and were asked to show their normal working behaviour.
## Dataset Structure
### Data Fields
- `id (string)`: a recording identifier for each example.
- `audio (audio)`: audio data for the given ID
- `text (string)`: transcript of the file already normalized. Follow these repositories for more details [w2v2-air-traffic](https://github.com/idiap/w2v2-air-traffic) and [bert-text-diarization-atc](https://github.com/idiap/bert-text-diarization-atc)
- `segment_start_time (float32)`: segment start time (normally 0)
- `segment_end_time (float32)`: segment end time
- `duration (float32)`: duration of the recording, computed as segment_end_time - segment_start_time
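As a quick sanity check, the timing fields above are related as described (a minimal sketch in plain Python on a hypothetical record; the field names follow the schema above, but the sample values are invented for illustration):

```python
# Hypothetical ATCOSIM-style record (values invented for illustration).
sample = {
    "id": "sm1_01_001",
    "text": "lufthansa four five one contact rhein radar one two seven decimal three seven",
    "segment_start_time": 0.0,
    "segment_end_time": 4.38,
    "duration": 4.38,
}

# `duration` is defined as segment_end_time - segment_start_time.
computed = sample["segment_end_time"] - sample["segment_start_time"]
assert abs(computed - sample["duration"]) < 1e-6
```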
## Additional Information
### Licensing Information
The licensing status of the dataset hinges on the legal status of the [ATCOSIM corpus](https://www.spsc.tugraz.at/databases-and-tools/atcosim-air-traffic-control-simulation-speech-corpus.html) creators.
### Citation Information
Contributors who prepared, processed, normalized and uploaded the dataset in HuggingFace:
```
@article{zuluaga2022how,
title={How Does Pre-trained Wav2Vec 2.0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Prasad, Amrutha and Nigmatulina, Iuliia and Sarfjoo, Saeed and others},
journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
year={2022}
}
@article{zuluaga2022bertraffic,
title={BERTraffic: BERT-based Joint Speaker Role and Speaker Change Detection for Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Sarfjoo, Seyyed Saeed and Prasad, Amrutha and others},
journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
year={2022}
}
@article{zuluaga2022atco2,
title={ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech Recognition and Natural Language Understanding of Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Vesel{\`y}, Karel and Sz{\"o}ke, Igor and Motlicek, Petr and others},
journal={arXiv preprint arXiv:2211.04054},
year={2022}
}
```
Authors of the dataset:
```
@inproceedings{hofbauer-etal-2008-atcosim,
title = "The {ATCOSIM} Corpus of Non-Prompted Clean Air Traffic Control Speech",
author = "Hofbauer, Konrad and
Petrik, Stefan and
Hering, Horst",
booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}'08)",
month = may,
year = "2008",
address = "Marrakech, Morocco",
publisher = "European Language Resources Association (ELRA)",
url = "http://www.lrec-conf.org/proceedings/lrec2008/pdf/545_paper.pdf",
}
```
| Jzuluaga/atcosim_corpus | [
"task_categories:automatic-speech-recognition",
"multilinguality:monolingual",
"language:en",
"audio",
"automatic-speech-recognition",
"en-atc",
"en",
"robust-speech-recognition",
"noisy-speech-recognition",
"speech-recognition",
"arxiv:2203.16822",
"region:us"
] | 2022-11-16T09:04:42+00:00 | {"language": ["en"], "multilinguality": ["monolingual"], "task_categories": ["automatic-speech-recognition"], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "text", "dtype": "string"}, {"name": "segment_start_time", "dtype": "float32"}, {"name": "segment_end_time", "dtype": "float32"}, {"name": "duration", "dtype": "float32"}], "splits": [{"name": "test", "num_bytes": 471628915.76, "num_examples": 1901}, {"name": "train", "num_bytes": 1934757106.88, "num_examples": 7638}], "download_size": 0, "dataset_size": 2406386022.6400003}, "tags": ["audio", "automatic-speech-recognition", "en-atc", "en", "robust-speech-recognition", "noisy-speech-recognition", "speech-recognition"]} | 2022-12-05T11:14:57+00:00 | [
"2203.16822"
] | [
"en"
] | TAGS
#task_categories-automatic-speech-recognition #multilinguality-monolingual #language-English #audio #automatic-speech-recognition #en-atc #en #robust-speech-recognition #noisy-speech-recognition #speech-recognition #arxiv-2203.16822 #region-us
|
# Dataset Card for ATCOSIM corpus
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages and Other Details
- Dataset Structure
- Data Fields
- Additional Information
- Licensing Information
- Citation Information
## Dataset Description
- Homepage: ATCOSIM homepage
- Repository: GitHub repository (used in research)
- Paper: The ATCOSIM Corpus of Non-Prompted Clean Air Traffic Control Speech
- Paper of this research: How Does Pre-trained Wav2Vec 2.0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications
### Dataset Summary
The ATCOSIM Air Traffic Control Simulation Speech corpus is a speech database of air traffic control (ATC) operator speech, provided by Graz University of Technology (TUG) and Eurocontrol Experimental Centre (EEC). It consists of ten hours of speech data, which were recorded during ATC real-time simulations using a close-talk headset microphone. The utterances are in English language and pronounced by ten non-native speakers. The database includes orthographic transcriptions and additional information on speakers and recording sessions. It was recorded and annotated by Konrad Hofbauer (description here).
### Supported Tasks and Leaderboards
- 'automatic-speech-recognition'. Already adapted/fine-tuned models are available here --> XLS-R-300m.
### Languages and other details
The text and the recordings are in English. The participating controllers were all actively employed air traffic controllers and possessed professional experience in the simulated sectors. The six male and four female controllers were of either German or Swiss nationality and had German, Swiss German or Swiss French native tongue. The controllers had agreed to the recording of their voice for the purpose of language analysis as well as for research and development in speech technologies, and were asked to show their normal working behaviour.
## Dataset Structure
### Data Fields
- 'id (string)': a recording identifier for each example.
- 'audio (audio)': audio data for the given ID
- 'text (string)': transcript of the file already normalized. Follow these repositories for more details w2v2-air-traffic and bert-text-diarization-atc
- 'segment_start_time (float32)': segment start time (normally 0)
- 'segment_end_time (float32)': segment end time
- 'duration (float32)': duration of the recording, computed as segment_end_time - segment_start_time
## Additional Information
### Licensing Information
The licensing status of the dataset hinges on the legal status of the ATCOSIM corpus creators.
Contributors who prepared, processed, normalized and uploaded the dataset in HuggingFace:
Authors of the dataset:
| [
"# Dataset Card for ATCOSIM corpus",
"## Table of Contents\n\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages and Other Details\n- Dataset Structure\n - Data Fields\n- Additional Information\n - Licensing Information\n - Citation Information",
"## Dataset Description\n- Homepage: ATCOSIM homepage\n- Repository: GitHub repository (used in research)\n- Paper: The ATCOSIM Corpus of Non-Prompted Clean Air Traffic Control Speech\n- Paper of this research: How Does Pre-trained Wav2Vec 2.0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications",
"### Dataset Summary\n\nThe ATCOSIM Air Traffic Control Simulation Speech corpus is a speech database of air traffic control (ATC) operator speech, provided by Graz University of Technology (TUG) and Eurocontrol Experimental Centre (EEC). It consists of ten hours of speech data, which were recorded during ATC real-time simulations using a close-talk headset microphone. The utterances are in English language and pronounced by ten non-native speakers. The database includes orthographic transcriptions and additional information on speakers and recording sessions. It was recorded and annotated by Konrad Hofbauer (description here).",
"### Supported Tasks and Leaderboards\n\n- 'automatic-speech-recognition'. Already adapted/fine-tuned models are available here --> XLS-R-300m.",
"### Languages and other details\nThe text and the recordings are in English. The participating controllers were all actively employed air traffic controllers and possessed professional experience in the simulated sectors. The six male and four female controllers were of either German or Swiss nationality and had German, Swiss German or Swiss French native tongue. The controllers had agreed to the recording of their voice for the purpose of language analysis as well as for research and development in speech technologies, and were asked to show their normal working behaviour.",
"## Dataset Structure",
"### Data Fields\n\n- 'id (string)': a recording identifier for each example.\n- 'audio (audio)': audio data for the given ID\n- 'text (string)': transcript of the file already normalized. Follow these repositories for more details w2v2-air-traffic and bert-text-diarization-atc\n- 'segment_start_time (float32)': segment start time (normally 0)\n- 'segment_end_time (float32)': segment end time\n- 'duration (float32)': duration of the recording, computed as segment_end_time - segment_start_time",
"## Additional Information",
"### Licensing Information\n\nThe licensing status of the dataset hinges on the legal status of the ATCOSIM corpus creators.\n\n\n\nContributors who prepared, processed, normalized and uploaded the dataset in HuggingFace:\n\n\n\n\nAuthors of the dataset:"
] | [
"TAGS\n#task_categories-automatic-speech-recognition #multilinguality-monolingual #language-English #audio #automatic-speech-recognition #en-atc #en #robust-speech-recognition #noisy-speech-recognition #speech-recognition #arxiv-2203.16822 #region-us \n",
"# Dataset Card for ATCOSIM corpus",
"## Table of Contents\n\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages and Other Details\n- Dataset Structure\n - Data Fields\n- Additional Information\n - Licensing Information\n - Citation Information",
"## Dataset Description\n- Homepage: ATCOSIM homepage\n- Repository: GitHub repository (used in research)\n- Paper: The ATCOSIM Corpus of Non-Prompted Clean Air Traffic Control Speech\n- Paper of this research: How Does Pre-trained Wav2Vec 2.0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications",
"### Dataset Summary\n\nThe ATCOSIM Air Traffic Control Simulation Speech corpus is a speech database of air traffic control (ATC) operator speech, provided by Graz University of Technology (TUG) and Eurocontrol Experimental Centre (EEC). It consists of ten hours of speech data, which were recorded during ATC real-time simulations using a close-talk headset microphone. The utterances are in English language and pronounced by ten non-native speakers. The database includes orthographic transcriptions and additional information on speakers and recording sessions. It was recorded and annotated by Konrad Hofbauer (description here).",
"### Supported Tasks and Leaderboards\n\n- 'automatic-speech-recognition'. Already adapted/fine-tuned models are available here --> XLS-R-300m.",
"### Languages and other details\nThe text and the recordings are in English. The participating controllers were all actively employed air traffic controllers and possessed professional experience in the simulated sectors. The six male and four female controllers were of either German or Swiss nationality and had German, Swiss German or Swiss French native tongue. The controllers had agreed to the recording of their voice for the purpose of language analysis as well as for research and development in speech technologies, and were asked to show their normal working behaviour.",
"## Dataset Structure",
"### Data Fields\n\n- 'id (string)': a recording identifier for each example.\n- 'audio (audio)': audio data for the given ID\n- 'text (string)': transcript of the file already normalized. Follow these repositories for more details w2v2-air-traffic and bert-text-diarization-atc\n- 'segment_start_time (float32)': segment start time (normally 0)\n- 'segment_end_time (float32)': segment end time\n- 'duration (float32)': duration of the recording, computed as segment_end_time - segment_start_time",
"## Additional Information",
"### Licensing Information\n\nThe licensing status of the dataset hinges on the legal status of the ATCOSIM corpus creators.\n\n\n\nContributors who prepared, processed, normalized and uploaded the dataset in HuggingFace:\n\n\n\n\nAuthors of the dataset:"
] |
d4bfcca433547321d83ef9718b645805087bf70d |
# Dataset Card for Danish WIT
## Dataset Description
- **Repository:** <https://gist.github.com/saattrupdan/bb6c9c52d9f4b35258db2b2456d31224>
- **Point of Contact:** [Dan Saattrup Nielsen](mailto:[email protected])
- **Size of downloaded dataset files:** 7.5 GB
- **Size of the generated dataset:** 7.8 GB
- **Total amount of disk used:** 15.3 GB
### Dataset Summary
Google presented the Wikipedia Image Text (WIT) dataset in [July
2021](https://dl.acm.org/doi/abs/10.1145/3404835.3463257), a dataset which contains
scraped images from Wikipedia along with their descriptions. WikiMedia released
WIT-Base in [September
2021](https://techblog.wikimedia.org/2021/09/09/the-wikipedia-image-caption-matching-challenge-and-a-huge-release-of-image-data-for-research/),
being a modified version of WIT where they have removed the images with empty
"reference descriptions", as well as removing images where a person's face covers more
than 10% of the image surface, along with inappropriate images that are candidate for
deletion. This dataset is the Danish portion of the WIT-Base dataset, consisting of
roughly 160,000 images with associated Danish descriptions. We release the dataset
under the [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/), in
accordance with WIT-Base's [identical
license](https://huggingface.co/datasets/wikimedia/wit_base#licensing-information).
### Supported Tasks and Leaderboards
Training machine learning models for caption generation, zero-shot image classification
and text-image search are the intended tasks for this dataset. No leaderboard is active
at this point.
### Languages
The dataset is available in Danish (`da`).
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 7.5 GB
- **Size of the generated dataset:** 7.8 GB
- **Total amount of disk used:** 15.3 GB
An example from the `train` split looks as follows.
```
{
"image": {
"bytes": b"\xff\xd8\xff\xe0\x00\x10JFIF...",
"path": None
},
"image_url": "https://upload.wikimedia.org/wikipedia/commons/4/45/Bispen_-_inside.jpg",
"embedding": [2.8568285, 2.9562542, 0.33794892, 8.753725, ...],
"metadata_url": "http://commons.wikimedia.org/wiki/File:Bispen_-_inside.jpg",
"original_height": 3161,
"original_width": 2316,
"mime_type": "image/jpeg",
"caption_attribution_description": "Kulturhuset Bispen set indefra. Biblioteket er til venstre",
"page_url": "https://da.wikipedia.org/wiki/Bispen",
"attribution_passes_lang_id": True,
"caption_alt_text_description": None,
"caption_reference_description": "Bispen set indefra fra 1. sal, hvor ....",
"caption_title_and_reference_description": "Bispen [SEP] Bispen set indefra ...",
"context_page_description": "Bispen er navnet på det offentlige kulturhus i ...",
"context_section_description": "Bispen er navnet på det offentlige kulturhus i ...",
"hierarchical_section_title": "Bispen",
"is_main_image": True,
"page_changed_recently": True,
"page_title": "Bispen",
"section_title": None
}
```
### Data Fields
The data fields are the same among all splits.
- `image`: a `dict` feature.
- `image_url`: a `str` feature.
- `embedding`: a `list` feature.
- `metadata_url`: a `str` feature.
- `original_height`: an `int` or `NaN` feature.
- `original_width`: an `int` or `NaN` feature.
- `mime_type`: a `str` or `None` feature.
- `caption_attribution_description`: a `str` or `None` feature.
- `page_url`: a `str` feature.
- `attribution_passes_lang_id`: a `bool` or `None` feature.
- `caption_alt_text_description`: a `str` or `None` feature.
- `caption_reference_description`: a `str` or `None` feature.
- `caption_title_and_reference_description`: a `str` or `None` feature.
- `context_page_description`: a `str` or `None` feature.
- `context_section_description`: a `str` or `None` feature.
- `hierarchical_section_title`: a `str` feature.
- `is_main_image`: a `bool` or `None` feature.
- `page_changed_recently`: a `bool` or `None` feature.
- `page_title`: a `str` feature.
- `section_title`: a `str` or `None` feature.
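Because several of the caption fields above may be `None`, a common pattern is to fall back through them in priority order. The sketch below does this on a record shaped like the example above; the priority order is an assumption for illustration, not part of the dataset specification:

```python
def best_caption(record):
    """Return the first non-empty caption field, in an assumed priority order."""
    for key in (
        "caption_reference_description",
        "caption_alt_text_description",
        "caption_attribution_description",
    ):
        value = record.get(key)
        if value:
            return value
    return None


# Trimmed-down record mirroring the example instance above.
record = {
    "caption_alt_text_description": None,
    "caption_reference_description": "Bispen set indefra fra 1. sal",
    "caption_attribution_description": "Kulturhuset Bispen set indefra.",
}
print(best_caption(record))  # -> Bispen set indefra fra 1. sal
```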
### Data Splits
Roughly 2.60% of the WIT-Base dataset comes from the Danish Wikipedia. We have split
the resulting 168,740 samples into a training set, validation set and testing set of
the following sizes:
| split | samples |
|---------|--------:|
| train | 167,460 |
| val | 256 |
| test | 1,024 |
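The three splits partition the full Danish subset; as a quick arithmetic check, the split sizes in the table add up to the stated 168,740 samples:

```python
# Split sizes as listed in the table above.
splits = {"train": 167_460, "val": 256, "test": 1_024}

total = sum(splits.values())
assert total == 168_740  # matches the stated size of the Danish subset
print(f"{total} samples across {len(splits)} splits")
```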
## Dataset Creation
### Curation Rationale
It is quite cumbersome to extract the Danish portion of the WIT-Base dataset,
especially as the dataset takes up 333 GB of disk space, so the curation of Danish-WIT
is purely to make it easier to work with the Danish portion of it.
### Source Data
The original data was collected from WikiMedia's
[WIT-Base](https://huggingface.co/datasets/wikimedia/wit_base) dataset, which in turn
comes from Google's [WIT](https://huggingface.co/datasets/google/wit) dataset.
## Additional Information
### Dataset Curators
[Dan Saattrup Nielsen](https://saattrupdan.github.io/) from the [The Alexandra
Institute](https://alexandra.dk/) curated this dataset.
### Licensing Information
The dataset is licensed under the [CC BY-SA 4.0
license](https://creativecommons.org/licenses/by-sa/4.0/).
| severo/danish-wit | [
"task_categories:image-to-text",
"task_categories:zero-shot-image-classification",
"task_categories:feature-extraction",
"task_ids:image-captioning",
"size_categories:100K<n<1M",
"source_datasets:wikimedia/wit_base",
"language:da",
"license:cc-by-sa-4.0",
"region:us"
] | 2022-11-16T09:07:30+00:00 | {"language": ["da"], "license": ["cc-by-sa-4.0"], "size_categories": ["100K<n<1M"], "source_datasets": ["wikimedia/wit_base"], "task_categories": ["image-to-text", "zero-shot-image-classification", "feature-extraction"], "task_ids": ["image-captioning"], "pretty_name": "Danish WIT"} | 2022-11-14T11:01:24+00:00 | [] | [
"da"
] | TAGS
#task_categories-image-to-text #task_categories-zero-shot-image-classification #task_categories-feature-extraction #task_ids-image-captioning #size_categories-100K<n<1M #source_datasets-wikimedia/wit_base #language-Danish #license-cc-by-sa-4.0 #region-us
| Dataset Card for Danish WIT
===========================
Dataset Description
-------------------
* Repository: URL
* Point of Contact: Dan Saattrup Nielsen
* Size of downloaded dataset files: 7.5 GB
* Size of the generated dataset: 7.8 GB
* Total amount of disk used: 15.3 GB
### Dataset Summary
Google presented the Wikipedia Image Text (WIT) dataset in July
2021, a dataset which contains
scraped images from Wikipedia along with their descriptions. WikiMedia released
WIT-Base in September
2021,
being a modified version of WIT where they have removed the images with empty
"reference descriptions", as well as removing images where a person's face covers more
than 10% of the image surface, along with inappropriate images that are candidate for
deletion. This dataset is the Danish portion of the WIT-Base dataset, consisting of
roughly 160,000 images with associated Danish descriptions. We release the dataset
under the CC BY-SA 4.0 license, in
accordance with WIT-Base's identical
license.
### Supported Tasks and Leaderboards
Training machine learning models for caption generation, zero-shot image classification
and text-image search are the intended tasks for this dataset. No leaderboard is active
at this point.
### Languages
The dataset is available in Danish ('da').
Dataset Structure
-----------------
### Data Instances
* Size of downloaded dataset files: 7.5 GB
* Size of the generated dataset: 7.8 GB
* Total amount of disk used: 15.3 GB
An example from the 'train' split looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'image': a 'dict' feature.
* 'image\_url': a 'str' feature.
* 'embedding': a 'list' feature.
* 'metadata\_url': a 'str' feature.
* 'original\_height': an 'int' or 'NaN' feature.
* 'original\_width': an 'int' or 'NaN' feature.
* 'mime\_type': a 'str' or 'None' feature.
* 'caption\_attribution\_description': a 'str' or 'None' feature.
* 'page\_url': a 'str' feature.
* 'attribution\_passes\_lang\_id': a 'bool' or 'None' feature.
* 'caption\_alt\_text\_description': a 'str' or 'None' feature.
* 'caption\_reference\_description': a 'str' or 'None' feature.
* 'caption\_title\_and\_reference\_description': a 'str' or 'None' feature.
* 'context\_page\_description': a 'str' or 'None' feature.
* 'context\_section\_description': a 'str' or 'None' feature.
* 'hierarchical\_section\_title': a 'str' feature.
* 'is\_main\_image': a 'bool' or 'None' feature.
* 'page\_changed\_recently': a 'bool' or 'None' feature.
* 'page\_title': a 'str' feature.
* 'section\_title': a 'str' or 'None' feature.
### Data Splits
Roughly 2.60% of the WIT-Base dataset comes from the Danish Wikipedia. We have split
the resulting 168,740 samples into a training set, validation set and testing set of
the following sizes:
Dataset Creation
----------------
### Curation Rationale
It is quite cumbersome to extract the Danish portion of the WIT-Base dataset,
especially as the dataset takes up 333 GB of disk space, so the curation of Danish-WIT
is purely to make it easier to work with the Danish portion of it.
### Source Data
The original data was collected from WikiMedia's
WIT-Base dataset, which in turn
comes from Google's WIT dataset.
Additional Information
----------------------
### Dataset Curators
Dan Saattrup Nielsen from the The Alexandra
Institute curated this dataset.
### Licensing Information
The dataset is licensed under the CC BY-SA 4.0
license.
| [
"### Dataset Summary\n\n\nGoogle presented the Wikipedia Image Text (WIT) dataset in July\n2021, a dataset which contains\nscraped images from Wikipedia along with their descriptions. WikiMedia released\nWIT-Base in September\n2021,\nbeing a modified version of WIT where they have removed the images with empty\n\"reference descriptions\", as well as removing images where a person's face covers more\nthan 10% of the image surface, along with inappropriate images that are candidate for\ndeletion. This dataset is the Danish portion of the WIT-Base dataset, consisting of\nroughly 160,000 images with associated Danish descriptions. We release the dataset\nunder the CC BY-SA 4.0 license, in\naccordance with WIT-Base's identical\nlicense.",
"### Supported Tasks and Leaderboards\n\n\nTraining machine learning models for caption generation, zero-shot image classification\nand text-image search are the intended tasks for this dataset. No leaderboard is active\nat this point.",
"### Languages\n\n\nThe dataset is available in Danish ('da').\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\n* Size of downloaded dataset files: 7.5 GB\n* Size of the generated dataset: 7.8 GB\n* Total amount of disk used: 15.3 GB\n\n\nAn example from the 'train' split looks as follows.",
"### Data Fields\n\n\nThe data fields are the same among all splits.\n\n\n* 'image': a 'dict' feature.\n* 'image\\_url': a 'str' feature.\n* 'embedding': a 'list' feature.\n* 'metadata\\_url': a 'str' feature.\n* 'original\\_height': an 'int' or 'NaN' feature.\n* 'original\\_width': an 'int' or 'NaN' feature.\n* 'mime\\_type': a 'str' or 'None' feature.\n* 'caption\\_attribution\\_description': a 'str' or 'None' feature.\n* 'page\\_url': a 'str' feature.\n* 'attribution\\_passes\\_lang\\_id': a 'bool' or 'None' feature.\n* 'caption\\_alt\\_text\\_description': a 'str' or 'None' feature.\n* 'caption\\_reference\\_description': a 'str' or 'None' feature.\n* 'caption\\_title\\_and\\_reference\\_description': a 'str' or 'None' feature.\n* 'context\\_page\\_description': a 'str' or 'None' feature.\n* 'context\\_section\\_description': a 'str' or 'None' feature.\n* 'hierarchical\\_section\\_title': a 'str' feature.\n* 'is\\_main\\_image': a 'bool' or 'None' feature.\n* 'page\\_changed\\_recently': a 'bool' or 'None' feature.\n* 'page\\_title': a 'str' feature.\n* 'section\\_title': a 'str' or 'None' feature.",
"### Data Splits\n\n\nRoughly 2.60% of the WIT-Base dataset comes from the Danish Wikipedia. We have split\nthe resulting 168,740 samples into a training set, validation set and testing set of\nthe following sizes:\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nIt is quite cumbersome to extract the Danish portion of the WIT-Base dataset,\nespecially as the dataset takes up 333 GB of disk space, so the curation of Danish-WIT\nis purely to make it easier to work with the Danish portion of it.",
"### Source Data\n\n\nThe original data was collected from WikiMedia's\nWIT-Base dataset, which in turn\ncomes from Google's WIT dataset.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nDan Saattrup Nielsen from the The Alexandra\nInstitute curated this dataset.",
"### Licensing Information\n\n\nThe dataset is licensed under the CC BY-SA 4.0\nlicense."
] | [
"TAGS\n#task_categories-image-to-text #task_categories-zero-shot-image-classification #task_categories-feature-extraction #task_ids-image-captioning #size_categories-100K<n<1M #source_datasets-wikimedia/wit_base #language-Danish #license-cc-by-sa-4.0 #region-us \n",
"### Dataset Summary\n\n\nGoogle presented the Wikipedia Image Text (WIT) dataset in July\n2021, a dataset which contains\nscraped images from Wikipedia along with their descriptions. WikiMedia released\nWIT-Base in September\n2021,\nbeing a modified version of WIT where they have removed the images with empty\n\"reference descriptions\", as well as removing images where a person's face covers more\nthan 10% of the image surface, along with inappropriate images that are candidate for\ndeletion. This dataset is the Danish portion of the WIT-Base dataset, consisting of\nroughly 160,000 images with associated Danish descriptions. We release the dataset\nunder the CC BY-SA 4.0 license, in\naccordance with WIT-Base's identical\nlicense.",
"### Supported Tasks and Leaderboards\n\n\nTraining machine learning models for caption generation, zero-shot image classification\nand text-image search are the intended tasks for this dataset. No leaderboard is active\nat this point.",
"### Languages\n\n\nThe dataset is available in Danish ('da').\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\n* Size of downloaded dataset files: 7.5 GB\n* Size of the generated dataset: 7.8 GB\n* Total amount of disk used: 15.3 GB\n\n\nAn example from the 'train' split looks as follows.",
"### Data Fields\n\n\nThe data fields are the same among all splits.\n\n\n* 'image': a 'dict' feature.\n* 'image\\_url': a 'str' feature.\n* 'embedding': a 'list' feature.\n* 'metadata\\_url': a 'str' feature.\n* 'original\\_height': an 'int' or 'NaN' feature.\n* 'original\\_width': an 'int' or 'NaN' feature.\n* 'mime\\_type': a 'str' or 'None' feature.\n* 'caption\\_attribution\\_description': a 'str' or 'None' feature.\n* 'page\\_url': a 'str' feature.\n* 'attribution\\_passes\\_lang\\_id': a 'bool' or 'None' feature.\n* 'caption\\_alt\\_text\\_description': a 'str' or 'None' feature.\n* 'caption\\_reference\\_description': a 'str' or 'None' feature.\n* 'caption\\_title\\_and\\_reference\\_description': a 'str' or 'None' feature.\n* 'context\\_page\\_description': a 'str' or 'None' feature.\n* 'context\\_section\\_description': a 'str' or 'None' feature.\n* 'hierarchical\\_section\\_title': a 'str' feature.\n* 'is\\_main\\_image': a 'bool' or 'None' feature.\n* 'page\\_changed\\_recently': a 'bool' or 'None' feature.\n* 'page\\_title': a 'str' feature.\n* 'section\\_title': a 'str' or 'None' feature.",
"### Data Splits\n\n\nRoughly 2.60% of the WIT-Base dataset comes from the Danish Wikipedia. We have split\nthe resulting 168,740 samples into a training set, validation set and testing set of\nthe following sizes:\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nIt is quite cumbersome to extract the Danish portion of the WIT-Base dataset,\nespecially as the dataset takes up 333 GB of disk space, so the curation of Danish-WIT\nis purely to make it easier to work with the Danish portion of it.",
"### Source Data\n\n\nThe original data was collected from WikiMedia's\nWIT-Base dataset, which in turn\ncomes from Google's WIT dataset.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nDan Saattrup Nielsen from the The Alexandra\nInstitute curated this dataset.",
"### Licensing Information\n\n\nThe dataset is licensed under the CC BY-SA 4.0\nlicense."
] |
d6339da797fc00d558d0b2c0354235a8ccf6b66e |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | taejunkim/djmix | [
"region:us"
] | 2022-11-16T13:28:37+00:00 | {"annotations_creators": [], "language_creators": [], "language": [], "license": [], "multilinguality": [], "size_categories": [], "source_datasets": [], "task_categories": [], "task_ids": [], "pretty_name": "The DJ Mix Dataset", "tags": []} | 2023-07-29T01:55:37+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for [Dataset Name]
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @github-username for adding this dataset. | [
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] | [
"TAGS\n#region-us \n",
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] |
1abb5e627925e8a6689c0aa1c44c59fbac7953dd | # Dataset Card for "processed_demo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | taejunkim/processed_demo | [
"region:us"
] | 2022-11-16T14:22:14+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "package_name", "dtype": "string"}, {"name": "review", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "star", "dtype": "int64"}, {"name": "version_id", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 956, "num_examples": 5}, {"name": "train", "num_bytes": 1508, "num_examples": 5}], "download_size": 7783, "dataset_size": 2464}} | 2022-11-16T14:22:33+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "processed_demo"
More Information needed | [
"# Dataset Card for \"processed_demo\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"processed_demo\"\n\nMore Information needed"
] |
575b4d50337307354318a0d21bbf4a701639d539 | # Dataset Card for "binomial_3blue1brown_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | juancopi81/binomial_3blue1brown_test | [
"region:us"
] | 2022-11-16T14:40:20+00:00 | {"dataset_info": {"features": [{"name": "CHANNEL_NAME", "dtype": "string"}, {"name": "URL", "dtype": "string"}, {"name": "TITLE", "dtype": "string"}, {"name": "DESCRIPTION", "dtype": "string"}, {"name": "TRANSCRIPTION", "dtype": "string"}, {"name": "SEGMENTS", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 59462, "num_examples": 2}], "download_size": 44700, "dataset_size": 59462}} | 2022-11-16T14:40:23+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "binomial_3blue1brown_test"
More Information needed | [
"# Dataset Card for \"binomial_3blue1brown_test\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"binomial_3blue1brown_test\"\n\nMore Information needed"
] |
f599c406b0b7a26af81802dfbc9054a04be30c98 | # Dataset Card for "test_push_og"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | polinaeterna/test_push_og | [
"region:us"
] | 2022-11-16T14:56:03+00:00 | {"dataset_info": {"features": [{"name": "x", "dtype": "int64"}, {"name": "y", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 46, "num_examples": 3}, {"name": "test", "num_bytes": 32, "num_examples": 2}], "download_size": 1674, "dataset_size": 78}} | 2022-11-16T15:04:14+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "test_push_og"
More Information needed | [
"# Dataset Card for \"test_push_og\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"test_push_og\"\n\nMore Information needed"
] |
f1c8c125bcc621b03c73bd5bccdd38579521c627 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-3b
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en_3-ab6376-2120068523 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-16T15:57:58+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-3b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-16T16:43:43+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-3b
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-3b\n* Dataset: futin/guess\n* Config: en_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-3b\n* Dataset: futin/guess\n* Config: en_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
d42f42526b7f46be81b6e46696be4bf516d13433 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b1
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en_3-ab6376-2120068526 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-16T15:57:59+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "bigscience/bloom-1b1", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-16T16:25:39+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: bigscience/bloom-1b1
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-1b1\n* Dataset: futin/guess\n* Config: en_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: bigscience/bloom-1b1\n* Dataset: futin/guess\n* Config: en_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |