sha (40-40) | text (1-13.4M) | id (2-117) | tags (1-7.91k) | created_at (25-25) | metadata (2-875k) | last_modified (25-25) | arxiv (0-25) | languages (0-7.91k) | tags_str (17-159k) | text_str (1-447k) | text_lists (0-352) | processed_texts (1-353)
---|---|---|---|---|---|---|---|---|---|---|---|---
2bebc3c89a3f327680c2f6ae9d62b1e86fb6b6b6 | # Dataset Card for "resume_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Nma/resume_dataset | [
"region:us"
] | 2022-11-08T09:24:45+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 355695532, "num_examples": 161071}, {"name": "train", "num_bytes": 1421896716, "num_examples": 644282}], "download_size": 896434509, "dataset_size": 1777592248}} | 2022-11-08T09:25:22+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "resume_dataset"
More Information needed | [
"# Dataset Card for \"resume_dataset\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"resume_dataset\"\n\nMore Information needed"
] |
a0aedcc2333fb5e70217bf070e0ae193c2254897 | # Dataset Card for "tmp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | amir7d0/tmp | [
"region:us"
] | 2022-11-08T09:25:03+00:00 | {"dataset_info": {"features": [{"name": "SAMPLE_ID", "dtype": "int64"}, {"name": "TEXT", "dtype": "string"}, {"name": "URL", "dtype": "string"}, {"name": "IMAGE_PATH", "dtype": "string"}, {"name": "IMAGE", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 599579428.0, "num_examples": 100000}], "download_size": 2124724355, "dataset_size": 599579428.0}} | 2022-11-09T13:28:01+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "tmp"
More Information needed | [
"# Dataset Card for \"tmp\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"tmp\"\n\nMore Information needed"
] |
c3b175a8dfdcaaf7ad64a1f0ba2939f4266948bb | # Dataset Card for "test_push4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | polinaeterna/test_push4 | [
"region:us"
] | 2022-11-08T09:30:18+00:00 | {"dataset_info": [{"config_name": "v1", "features": [{"name": "x", "dtype": "int64"}, {"name": "y", "dtype": "string"}], "splits": [{"name": "train"}, {"name": "test"}]}, {"config_name": "v2", "features": [{"name": "x", "dtype": "int64"}, {"name": "y", "dtype": "string"}], "splits": [{"name": "train"}, {"name": "test"}]}]} | 2022-11-08T09:47:55+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "test_push4"
More Information needed | [
"# Dataset Card for \"test_push4\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"test_push4\"\n\nMore Information needed"
] |
c99d6d2f4a02dacd94f6ffd3055db5472613750e | # Dataset Card for "test_push_no_conf"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | polinaeterna/test_push_no_conf | [
"region:us"
] | 2022-11-08T09:53:55+00:00 | {"dataset_info": {"features": [{"name": "x", "dtype": "int64"}, {"name": "y", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 120, "num_examples": 8}, {"name": "test", "num_bytes": 46, "num_examples": 3}], "download_size": 1712, "dataset_size": 166}} | 2022-11-08T09:54:13+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "test_push_no_conf"
More Information needed | [
"# Dataset Card for \"test_push_no_conf\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"test_push_no_conf\"\n\nMore Information needed"
] |
f0471f90290414cceb9e69cc3c16ffff338c4e9d | # Dataset Card for "tokenize_resume_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Nma/tokenize_resume_dataset | [
"region:us"
] | 2022-11-08T09:55:43+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "test", "num_bytes": 275640050, "num_examples": 161071}, {"name": "train", "num_bytes": 1102620205, "num_examples": 644282}], "download_size": 521528169, "dataset_size": 1378260255}} | 2022-11-08T09:56:21+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "tokenize_resume_dataset"
More Information needed | [
"# Dataset Card for \"tokenize_resume_dataset\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"tokenize_resume_dataset\"\n\nMore Information needed"
] |
c6abcf44778df8dbf38ba6599b19ed196ea6e5ae | # Dataset Card for "lm_resume_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Nma/lm_resume_dataset | [
"region:us"
] | 2022-11-08T10:22:14+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "test", "num_bytes": 714031412, "num_examples": 107083}, {"name": "train", "num_bytes": 2856345596, "num_examples": 428365}], "download_size": 1035174948, "dataset_size": 3570377008}} | 2022-11-08T10:23:33+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lm_resume_dataset"
More Information needed | [
"# Dataset Card for \"lm_resume_dataset\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lm_resume_dataset\"\n\nMore Information needed"
] |
8616749880709e4f10ab40bcad2fc62e33caed34 | All images taken from https://github.com/InputBlackBoxOutput/logo-images-dataset | superchthonic/logos-dataset | [
"region:us"
] | 2022-11-08T10:41:41+00:00 | {} | 2022-11-08T10:42:10+00:00 | [] | [] | TAGS
#region-us
| All images taken from URL | [] | [
"TAGS\n#region-us \n"
] |
1fa6a3831dae1addb2e2f712bbf13edcd94b274a | # Dataset Card for "test_push_two_confs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | polinaeterna/test_push_two_confs | [
"region:us"
] | 2022-11-08T11:39:59+00:00 | {"dataset_info": {"features": [{"name": "x", "dtype": "int64"}, {"name": "y", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 120, "num_examples": 8}, {"name": "test", "num_bytes": 46, "num_examples": 3}], "download_size": 1712, "dataset_size": 166}} | 2022-11-08T11:40:48+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "test_push_two_confs"
More Information needed | [
"# Dataset Card for \"test_push_two_confs\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"test_push_two_confs\"\n\nMore Information needed"
] |
1f8d799c0974a1eec9499eb68a6a4c1092d4477d | # Dataset Card for "vira-intents-live"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ibm/vira-intents-live | [
"region:us"
] | 2022-11-08T12:34:19+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 536982, "num_examples": 7434}, {"name": "validation", "num_bytes": 227106, "num_examples": 3140}], "download_size": 348220, "dataset_size": 764088}} | 2022-11-22T15:12:25+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "vira-intents-live"
More Information needed | [
"# Dataset Card for \"vira-intents-live\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"vira-intents-live\"\n\nMore Information needed"
] |
667f41421b215542d57fb403481f6dab10c0759f | # Dataset Card for "AJ_sentence"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Ayush2609/AJ_sentence | [
"region:us"
] | 2022-11-08T13:42:24+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 249843.62830074583, "num_examples": 4464}, {"name": "validation", "num_bytes": 27816.37169925418, "num_examples": 497}], "download_size": 179173, "dataset_size": 277660.0}} | 2022-11-08T14:58:24+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "AJ_sentence"
More Information needed | [
"# Dataset Card for \"AJ_sentence\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"AJ_sentence\"\n\nMore Information needed"
] |
f41edc00905904578c4be9dd48c81da5b159ea05 | # Dataset Card for "artificial-unbalanced-500Kb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | PGT/artificial-unbalanced-500K | [
"region:us"
] | 2022-11-08T14:11:11+00:00 | {"dataset_info": {"features": [{"name": "edge_index", "sequence": {"sequence": "int64"}}, {"name": "y", "sequence": "int64"}, {"name": "num_nodes", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2712963616, "num_examples": 499986}], "download_size": 398809184, "dataset_size": 2712963616}} | 2022-11-08T14:16:21+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "artificial-unbalanced-500Kb"
More Information needed | [
"# Dataset Card for \"artificial-unbalanced-500Kb\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"artificial-unbalanced-500Kb\"\n\nMore Information needed"
] |
869802e52b4dfa074d8a8e255ce85580711cdc25 |
# Dataset Card for [Stackoverflow Post Questions]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Contributions](#contributions)
## Dataset Description
Companies that sell open-source software tools usually hire an army of customer representatives to try to answer every question asked about their tool. The first step in this process
is the prioritization of the question. The classification scale usually consists of four values (P0, P1, P2, and P3), with different meanings across every participant in the industry. On
the other hand, every software developer in the world has dealt with Stack Overflow (SO); the amount of shared knowledge there is incomparable to any other website. Questions on SO are
usually annotated and curated by thousands of people, providing metadata about the quality of the question. This dataset aims to provide an accurate prioritization for programming
questions.
### Dataset Summary
The dataset contains the title and body of Stack Overflow questions and a label value (0, 1, 2, 3) calculated using thresholds defined by SO badges.
### Languages
English
## Dataset Structure
title: string,
body: string,
label: int
### Data Splits
The split is 40/40/20, where classes have been balanced to be around the same size.
## Dataset Creation
The dataset was extracted and labeled with the following query in BigQuery:
```sql
SELECT
title,
body,
CASE
WHEN score >= 100 OR favorite_count >= 100 OR view_count >= 10000 THEN 0
WHEN score >= 25 OR favorite_count >= 25 OR view_count >= 2500 THEN 1
WHEN score >= 10 OR favorite_count >= 10 OR view_count >= 1000 THEN 2
ELSE 3
END AS label
FROM `bigquery-public-data`.stackoverflow.posts_questions
```
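For illustration, the `CASE` thresholds can be mirrored in a small Python helper (a hypothetical sketch; `label_question` is not part of the original pipeline):

```python
def label_question(score: int, favorite_count: int, view_count: int) -> int:
    """Mirror the BigQuery CASE expression: lower labels mean higher visibility."""
    if score >= 100 or favorite_count >= 100 or view_count >= 10000:
        return 0
    if score >= 25 or favorite_count >= 25 or view_count >= 2500:
        return 1
    if score >= 10 or favorite_count >= 10 or view_count >= 1000:
        return 2
    return 3

# A highly voted question gets the top-visibility label.
print(label_question(score=150, favorite_count=0, view_count=500))  # 0
print(label_question(score=5, favorite_count=5, view_count=1200))   # 2
print(label_question(score=1, favorite_count=0, view_count=50))     # 3
```

Lower labels correspond to higher-visibility questions, mimicking the P0-P3 priority scale described above.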
### Source Data
The data was extracted from the BigQuery public dataset: `bigquery-public-data.stackoverflow.posts_questions`
#### Initial Data Collection and Normalization
The original dataset contained a high class imbalance:

| label | count |
|---|---|
| 0 | 977424 |
| 1 | 2401534 |
| 2 | 3418179 |
| 3 | 16222990 |
| Grand Total | 23020127 |
The data was sampled from each class to have around the same amount of records on every class.
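That balancing step can be sketched with pandas (an illustrative example; the toy frame, column names, and sampling call are assumptions, not the original code):

```python
import pandas as pd

# Toy frame standing in for the labeled BigQuery export
# (column names mirror the card; the rows themselves are made up).
df = pd.DataFrame({
    "title": [f"q{i}" for i in range(10)],
    "label": [0, 0, 1, 1, 1, 2, 2, 3, 3, 3],
})

# Downsample every class to the size of the smallest one.
n = df["label"].value_counts().min()
balanced = df.groupby("label").sample(n=n, random_state=0)
print(balanced["label"].value_counts().to_dict())  # each label appears n times
```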
### Contributions
Thanks to [@pacofvf](https://github.com/pacofvf) for adding this dataset.
| pacovaldez/stackoverflow-questions | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"stackoverflow",
"technical questions",
"region:us"
] | 2022-11-09T01:16:19+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "stackoverflow_post_questions", "tags": ["stackoverflow", "technical questions"]} | 2022-11-10T00:14:37+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-apache-2.0 #stackoverflow #technical questions #region-us
|
# Dataset Card for [Stackoverflow Post Questions]
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Languages
- Dataset Structure
- Data Splits
- Dataset Creation
- Source Data
- Contributions
## Dataset Description
Companies that sell open-source software tools usually hire an army of customer representatives to try to answer every question asked about their tool. The first step in this process
is the prioritization of the question. The classification scale usually consists of four values (P0, P1, P2, and P3), with different meanings across every participant in the industry. On
the other hand, every software developer in the world has dealt with Stack Overflow (SO); the amount of shared knowledge there is incomparable to any other website. Questions on SO are
usually annotated and curated by thousands of people, providing metadata about the quality of the question. This dataset aims to provide an accurate prioritization for programming
questions.
### Dataset Summary
The dataset contains the title and body of Stack Overflow questions and a label value (0, 1, 2, 3) calculated using thresholds defined by SO badges.
### Languages
English
## Dataset Structure
title: string,
body: string,
label: int
### Data Splits
The split is 40/40/20, where classes have been balanced to be around the same size.
## Dataset Creation
The data set was extracted and labeled with the following query in BigQuery:
### Source Data
The data was extracted from the BigQuery public dataset: 'bigquery-public-data.stackoverflow.posts_questions'
#### Initial Data Collection and Normalization
The original dataset contained a high class imbalance:

| label | count |
|---|---|
| 0 | 977424 |
| 1 | 2401534 |
| 2 | 3418179 |
| 3 | 16222990 |
| Grand Total | 23020127 |
The data was sampled from each class to have around the same amount of records on every class.
### Contributions
Thanks to @pacofvf for adding this dataset.
| [
"# Dataset Card for [Stackoverflow Post Questions]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n - Data Splits\n- Dataset Creation\n - Source Data\n - Contributions",
"## Dataset Description\n\nCompanies that sell Open-source software tools usually hire an army of Customer representatives to try to answer every question asked about their tool. The first step in this process \nis the prioritization of the question. The classification scale usually consists of 4 values, P0, P1, P2, and P3, with different meanings across every participant in the industry. On \nthe other hand, every software developer in the world has dealt with Stack Overflow (SO); the amount of shared knowledge there is incomparable to any other website. Questions in SO are \nusually annotated and curated by thousands of people, providing metadata about the quality of the question. This dataset aims to provide an accurate prioritization for programming \nquestions.",
"### Dataset Summary\n\nThe dataset contains the title and body of stackoverflow questions and a label value(0,1,2,3) that was calculated using thresholds defined by SO badges.",
"### Languages\n\nEnglish",
"## Dataset Structure\n\ntitle: string,\nbody: string,\nlabel: int",
"### Data Splits\n\nThe split is 40/40/20, where classes have been balanced to be around the same size.",
"## Dataset Creation\n\nThe data set was extracted and labeled with the following query in BigQuery:",
"### Source Data\n\nThe data was extracted from the Big Query public dataset: 'bigquery-public-data.stackoverflow.posts_questions'",
"#### Initial Data Collection and Normalization\n\nThe original dataset contained high class imbalance:\n\nlabel\tcount\n0\t977424\n1\t2401534\n2\t3418179\n3\t16222990\nGrand Total\t23020127\n\nThe data was sampled from each class to have around the same amount of records on every class.",
"### Contributions\n\nThanks to @pacofvf for adding this dataset."
] | [
"TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-apache-2.0 #stackoverflow #technical questions #region-us \n",
"# Dataset Card for [Stackoverflow Post Questions]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n - Data Splits\n- Dataset Creation\n - Source Data\n - Contributions",
"## Dataset Description\n\nCompanies that sell Open-source software tools usually hire an army of Customer representatives to try to answer every question asked about their tool. The first step in this process \nis the prioritization of the question. The classification scale usually consists of 4 values, P0, P1, P2, and P3, with different meanings across every participant in the industry. On \nthe other hand, every software developer in the world has dealt with Stack Overflow (SO); the amount of shared knowledge there is incomparable to any other website. Questions in SO are \nusually annotated and curated by thousands of people, providing metadata about the quality of the question. This dataset aims to provide an accurate prioritization for programming \nquestions.",
"### Dataset Summary\n\nThe dataset contains the title and body of stackoverflow questions and a label value(0,1,2,3) that was calculated using thresholds defined by SO badges.",
"### Languages\n\nEnglish",
"## Dataset Structure\n\ntitle: string,\nbody: string,\nlabel: int",
"### Data Splits\n\nThe split is 40/40/20, where classes have been balanced to be around the same size.",
"## Dataset Creation\n\nThe data set was extracted and labeled with the following query in BigQuery:",
"### Source Data\n\nThe data was extracted from the Big Query public dataset: 'bigquery-public-data.stackoverflow.posts_questions'",
"#### Initial Data Collection and Normalization\n\nThe original dataset contained high class imbalance:\n\nlabel\tcount\n0\t977424\n1\t2401534\n2\t3418179\n3\t16222990\nGrand Total\t23020127\n\nThe data was sampled from each class to have around the same amount of records on every class.",
"### Contributions\n\nThanks to @pacofvf for adding this dataset."
] |
f42882dca80f8604ea1ee720b24e45079d610a47 | # Dataset Card for "dataset_from_synthea_for_NER_with_train_val_test_splits"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jage/dataset_from_synthea_for_NER_with_train_val_test_splits | [
"region:us"
] | 2022-11-09T02:20:42+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-DATE", "2": "I-DATE", "3": "B-NAME", "4": "I-NAME", "5": "B-AGE", "6": "I-AGE"}}}}, {"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "test", "num_bytes": 6614328, "num_examples": 19176}, {"name": "train", "num_bytes": 32139432.0, "num_examples": 92300}, {"name": "val", "num_bytes": 13463574.0, "num_examples": 38138}], "download_size": 4703482, "dataset_size": 52217334.0}} | 2022-11-09T02:21:11+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "dataset_from_synthea_for_NER_with_train_val_test_splits"
More Information needed | [
"# Dataset Card for \"dataset_from_synthea_for_NER_with_train_val_test_splits\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"dataset_from_synthea_for_NER_with_train_val_test_splits\"\n\nMore Information needed"
] |
4bf5b5ed178e0e8052b3ec7ea5f7d745ad63cb3b | # AutoTrain Dataset for project: led-samsum-dialogsum
## Dataset Description
This dataset has been automatically processed by AutoTrain for project led-samsum-dialogsum.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"feat_Unnamed: 0": 0,
"feat_id": 0,
"text": "Amanda: I baked cookies. Do you want some?\nJerry: Sure!\nAmanda: I'll bring you tomorrow :-)",
"target": "Amanda baked cookies and will bring Jerry some tomorrow."
},
{
"feat_Unnamed: 0": 1,
"feat_id": 1,
"text": "Olivia: Who are you voting for in this election? \nOliver: Liberals as always.\nOlivia: Me too!!\nOliver: Great",
"target": "Olivia and Olivier are voting for liberals in this election. "
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"feat_Unnamed: 0": "Value(dtype='int64', id=None)",
"feat_id": "Value(dtype='int64', id=None)",
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 27191 |
| valid | 1318 |
| skashyap96/autotrain-data-led-samsum-dialogsum | [
"region:us"
] | 2022-11-09T04:39:14+00:00 | {"task_categories": ["conditional-text-generation"]} | 2022-11-09T08:45:51+00:00 | [] | [] | TAGS
#region-us
| AutoTrain Dataset for project: led-samsum-dialogsum
===================================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project led-samsum-dialogsum.
### Languages
The BCP-47 code for the dataset's language is unk.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\nThis dataset is split into a train and validation split. The split sizes are as follows:"
] | [
"TAGS\n#region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\nThis dataset is split into a train and validation split. The split sizes are as follows:"
] |
a7d7dedccabae5165972e24bcbd4ef50723db0d7 | # Dataset Card for "resume_dataset_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Nma/resume_dataset_train | [
"region:us"
] | 2022-11-09T07:20:00+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 2856338396, "num_examples": 428365}], "download_size": 828086360, "dataset_size": 2856338396}} | 2022-11-09T07:20:47+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "resume_dataset_train"
More Information needed | [
"# Dataset Card for \"resume_dataset_train\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"resume_dataset_train\"\n\nMore Information needed"
] |
2d9cb87dc7d013ac635c85ce578fcb53d526a9b5 | # Dataset Card for "resume_dataset_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Nma/resume_dataset_test | [
"region:us"
] | 2022-11-09T07:20:48+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "test", "num_bytes": 714029588, "num_examples": 107083}], "download_size": 207066918, "dataset_size": 714029588}} | 2022-11-09T07:21:01+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "resume_dataset_test"
More Information needed | [
"# Dataset Card for \"resume_dataset_test\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"resume_dataset_test\"\n\nMore Information needed"
] |
3fbbcbdb0f6ead4b2933547ceea3729e2dc463c2 |
# Dataset Card for [Dataset Name]
## Table of Contents
[Table of Contents](#table-of-contents)
[Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | Sotaro0124/Ainu-Japan_translation_model | [
"region:us"
] | 2022-11-09T08:03:09+00:00 | {} | 2022-11-09T08:11:39+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for [Dataset Name]
## Table of Contents
Table of Contents
Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @github-username for adding this dataset. | [
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n Table of Contents\n Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] | [
"TAGS\n#region-us \n",
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n Table of Contents\n Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] |
47d0385d3210b59938b3a7cca665abab29eccff4 | Over 20,000 256x256 mel spectrograms of 5 second samples of music from my Spotify liked playlist. The code to convert from audio to spectrogram and vice versa can be found in https://github.com/teticio/audio-diffusion along with scripts to train and run inference using De-noising Diffusion Probabilistic Models.
```
x_res = 1024
y_res = 1024
sample_rate = 44100
n_fft = 2048
hop_length = 512
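# A hedged sketch of how these parameters could feed a mel-spectrogram
# computation (librosa-based illustration; the linked repo's actual
# pipeline may differ, and "clip.wav" is a placeholder file name):
#
#   import librosa
#   y, _ = librosa.load("clip.wav", sr=sample_rate)
#   mel = librosa.feature.melspectrogram(
#       y=y, sr=sample_rate, n_fft=n_fft,
#       hop_length=hop_length, n_mels=y_res)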
``` | teticio/audio-diffusion-1024 | [
"task_categories:image-to-image",
"size_categories:10K<n<100K",
"audio",
"spectrograms",
"region:us"
] | 2022-11-09T09:22:02+00:00 | {"annotations_creators": [], "language_creators": [], "language": [], "license": [], "multilinguality": [], "size_categories": ["10K<n<100K"], "source_datasets": [], "task_categories": ["image-to-image"], "task_ids": [], "pretty_name": "Mel spectrograms of music", "tags": ["audio", "spectrograms"]} | 2022-11-09T10:49:29+00:00 | [] | [] | TAGS
#task_categories-image-to-image #size_categories-10K<n<100K #audio #spectrograms #region-us
| Over 20,000 256x256 mel spectrograms of 5 second samples of music from my Spotify liked playlist. The code to convert from audio to spectrogram and vice versa can be found in URL along with scripts to train and run inference using De-noising Diffusion Probabilistic Models.
| [] | [
"TAGS\n#task_categories-image-to-image #size_categories-10K<n<100K #audio #spectrograms #region-us \n"
] |
5c4e8f1aec1d0567864e8d7fd0c13f47084aaa09 | # Dataset Card for "zhou_ebola_human"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | wesleywt/zhou_ebola_human | [
"region:us"
] | 2022-11-09T09:22:23+00:00 | {"dataset_info": {"features": [{"name": "is_interaction", "dtype": "int64"}, {"name": "protein_1.id", "dtype": "string"}, {"name": "protein_1.primary", "dtype": "string"}, {"name": "protein_2.id", "dtype": "string"}, {"name": "protein_2.primary", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 275414, "num_examples": 300}, {"name": "train", "num_bytes": 29425605, "num_examples": 22682}], "download_size": 6430757, "dataset_size": 29701019}} | 2022-11-09T09:22:57+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "zhou_ebola_human"
More Information needed | [
"# Dataset Card for \"zhou_ebola_human\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"zhou_ebola_human\"\n\nMore Information needed"
] |
225c714c5b77688cad4b649c7c3fcccafcb4ecf7 | # Dataset Card for "zhou_h1n1_human"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | wesleywt/zhou_h1n1_human | [
"region:us"
] | 2022-11-09T09:36:31+00:00 | {"dataset_info": {"features": [{"name": "is_interaction", "dtype": "int64"}, {"name": "protein_1.id", "dtype": "string"}, {"name": "protein_1.primary", "dtype": "string"}, {"name": "protein_2.id", "dtype": "string"}, {"name": "protein_2.primary", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 723379, "num_examples": 762}, {"name": "train", "num_bytes": 28170698, "num_examples": 21716}], "download_size": 12309236, "dataset_size": 28894077}} | 2022-11-09T09:37:18+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "zhou_h1n1_human"
More Information needed | [
"# Dataset Card for \"zhou_h1n1_human\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"zhou_h1n1_human\"\n\nMore Information needed"
] |
73bb31ac9151c2afe2dbcf1165d916927f78b0c8 | # Dataset Card for "williams_mtb_hpidb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | wesleywt/williams_mtb_hpidb | [
"region:us"
] | 2022-11-09T09:49:32+00:00 | {"dataset_info": {"features": [{"name": "is_interaction", "dtype": "int64"}, {"name": "protein_1.id", "dtype": "string"}, {"name": "protein_1.primary", "dtype": "string"}, {"name": "protein_2.id", "dtype": "string"}, {"name": "protein_2.primary", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 5138954, "num_examples": 4192}, {"name": "train", "num_bytes": 19964860, "num_examples": 16768}], "download_size": 16427398, "dataset_size": 25103814}} | 2022-11-09T09:50:16+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "williams_mtb_hpidb"
More Information needed | [
"# Dataset Card for \"williams_mtb_hpidb\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"williams_mtb_hpidb\"\n\nMore Information needed"
] |
367e0114c039c5259108e5cf72048e0d46bf861e |
# Dataset Card for "bill_summary_us"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [BillML](https://github.com/dreamproit/BillML)
- **Repository:** [BillML](https://github.com/dreamproit/BillML)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Leaderboard:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
Dataset for summarization of US Congressional bills (bill_summary_us).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
English
## Dataset Structure
### Data Instances
#### default
### Data Fields
- id: id of the bill in the format (congress number + bill type + bill number + bill version).
- congress: number of the congress.
- bill_type: type of the bill.
- bill_number: number of the bill.
- bill_version: version of the bill.
- sections: list of bill sections with section_id, text and header.
- sections_length: the length of the sections list.
- text: bill text.
- text_length: number of characters in the text.
- summary: summary of the bill.
- summary_length: number of characters in the summary.
- title: official title of the bill.
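As an illustrative sketch, the composite `id` described above can be split back into its components. The set of bill type codes and the example id below are assumptions for illustration only, not taken from this dataset card:

```python
import re

# Assumed US bill type codes, ordered so longer codes match before their
# prefixes (e.g. "hres" before "hr") -- illustrative, not from the card.
BILL_TYPES = ("hconres", "hjres", "hres", "hr", "sconres", "sjres", "sres", "s")

def parse_bill_id(bill_id: str) -> dict:
    """Split an id of the form congress + bill type + bill number + bill version."""
    pattern = r"^(\d+)({})(\d+)([a-z]+)$".format("|".join(BILL_TYPES))
    match = re.match(pattern, bill_id)
    if match is None:
        raise ValueError(f"unrecognized bill id: {bill_id!r}")
    congress, bill_type, bill_number, bill_version = match.groups()
    return {
        "congress": int(congress),
        "bill_type": bill_type,
        "bill_number": int(bill_number),
        "bill_version": bill_version,
    }

# Hypothetical id, used only to show the shape of the result.
print(parse_bill_id("117hr3076ih"))
```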
### Data Splits
train
## Dataset Creation
### Curation Rationale
Bills (proposed laws) are specialized, structured documents with great public significance. Often, the language of a bill may not directly explain the potential impact of the legislation. For bills in the U.S. Congress, the Congressional Research Service of the Library of Congress provides professional, non-partisan summaries of bills. These are valuable for public understanding of the bills and serve as an essential part of the lawmaking process to understand the meaning and potential legislative impact.
This dataset collects the text of bills, some metadata, as well as the CRS summaries. In order to build more accurate ML models for bill summarization it is important to have a clean dataset, alongside the professionally-written CRS summaries. ML summarization models built on generic data are bound to produce less accurate results (sometimes creating summaries that describe the opposite of a bill's actual effect). In addition, models that attempt to summarize all bills (some of which may reach 4000 pages long) may also be inaccurate due to the current limitations of summarization on long texts.
As a result, this dataset collects bill and summary information; it provides text as a list of sections with the text and header. This could be used to create a summary of sections and then a summary of summaries.
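The section-then-overall idea above can be sketched in a few lines. The `summarize` stub here just truncates text and stands in for a real summarization model:

```python
def summarize(text: str, max_chars: int = 60) -> str:
    # Placeholder: a real system would call an abstractive summarization
    # model here instead of truncating.
    return text[:max_chars]

def summarize_bill(sections: list) -> str:
    # Stage 1: summarize each section's text independently.
    section_summaries = [summarize(section["text"]) for section in sections]
    # Stage 2: summarize the concatenation of the section summaries.
    return summarize(" ".join(section_summaries))
```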
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
[govinfo.gov](https://www.govinfo.gov/)
#### Initial Data Collection and Normalization
The data consists of the US congress bills that were collected from the [govinfo.gov](https://www.govinfo.gov/) service provided by the United States Government Publishing Office (GPO) under CC0-1.0 license.
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[dreamproit.com](https://dreamproit.com/)
### Licensing Information
Bill and summary information are public and are unlicensed, as it is data produced by government entities. The collection and enhancement work that we provide for this dataset, to the degree it may be covered by copyright, is released under [CC0](https://creativecommons.org/share-your-work/public-domain/cc0/).
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@aih](https://github.com/aih) [@BorodaUA](https://github.com/BorodaUA), [@alexbojko](https://github.com/alexbojko) for adding this dataset. | dreamproit/bill_summary_us | [
"task_categories:summarization",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"bills",
"legal",
"region:us"
] | 2022-11-09T10:13:33+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["summarization"], "task_ids": [], "pretty_name": "bill_summary_us", "tags": ["bills", "legal"], "configs": [{"config_name": "default"}]} | 2023-10-17T03:16:57+00:00 | [] | [
"en"
] | TAGS
#task_categories-summarization #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #bills #legal #region-us
|
# Dataset Card for "bill_summary_us"
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: BillML
- Repository: BillML
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
Dataset for summarization of US Congressional bills (bill_summary_us).
### Supported Tasks and Leaderboards
### Languages
English
## Dataset Structure
### Data Instances
#### default
### Data Fields
- id: id of the bill in the format (congress number + bill type + bill number + bill version).
- congress: number of the congress.
- bill_type: type of the bill.
- bill_number: number of the bill.
- bill_version: version of the bill.
- sections: list of bill sections with section_id, text and header.
- sections_length: the length of the sections list.
- text: bill text.
- text_length: number of characters in the text.
- summary: summary of the bill.
- summary_length: number of characters in the summary.
- title: official title of the bill.
### Data Splits
train
## Dataset Creation
### Curation Rationale
Bills (proposed laws) are specialized, structured documents with great public significance. Often, the language of a bill may not directly explain the potential impact of the legislation. For bills in the U.S. Congress, the Congressional Research Service of the Library of Congress provides professional, non-partisan summaries of bills. These are valuable for public understanding of the bills and serve as an essential part of the lawmaking process to understand the meaning and potential legislative impact.
This dataset collects the text of bills, some metadata, as well as the CRS summaries. In order to build more accurate ML models for bill summarization it is important to have a clean dataset, alongside the professionally-written CRS summaries. ML summarization models built on generic data are bound to produce less accurate results (sometimes creating summaries that describe the opposite of a bill's actual effect). In addition, models that attempt to summarize all bills (some of which may reach 4000 pages long) may also be inaccurate due to the current limitations of summarization on long texts.
As a result, this dataset collects bill and summary information; it provides text as a list of sections with the text and header. This could be used to create a summary of sections and then a summary of summaries.
### Source Data
URL
#### Initial Data Collection and Normalization
The data consists of the US congress bills that were collected from the URL service provided by the United States Government Publishing Office (GPO) under CC0-1.0 license.
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
URL
### Licensing Information
Bill and summary information are public and are unlicensed, as it is data produced by government entities. The collection and enhancement work that we provide for this dataset, to the degree it may be covered by copyright, is released under CC0.
### Contributions
Thanks to @aih @BorodaUA, @alexbojko for adding this dataset. | [
"# Dataset Card for \"bill_summary_us\"",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: BillML\n- Repository: BillML\n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nDataset for summarization of US Congressional bills (bill_summary_us).",
"### Supported Tasks and Leaderboards",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances",
"#### default",
"### Data Fields\n\n- id: id of the bill in the format (congress number + bill type + bill number + bill version).\n- congress: number of the congress.\n- bill_type: type of the bill.\n- bill_number: number of the bill.\n- bill_version: version of the bill.\n- sections: list of bill sections with section_id, text and header.\n- sections_length: the length of the sections list.\n- text: bill text.\n- text_length: number of characters in the text.\n- summary: summary of the bill.\n- summary_length: number of characters in the summary.\n- title: official title of the bill.",
"### Data Splits\n\ntrain",
"## Dataset Creation",
"### Curation Rationale\n\nBills (proposed laws) are specialized, structured documents with great public significance. Often, the language of a bill may not directly explain the potential impact of the legislation. For bills in the U.S. Congress, the Congressional Research Service of the Library of Congress provides professional, non-partisan summaries of bills. These are valuable for public understanding of the bills and serve as an essential part of the lawmaking process to understand the meaning and potential legislative impact.\n\nThis dataset collects the text of bills, some metadata, as well as the CRS summaries. In order to build more accurate ML models for bill summarization it is important to have a clean dataset, alongside the professionally-written CRS summaries. ML summarization models built on generic data are bound to produce less accurate results (sometimes creating summaries that describe the opposite of a bill's actual effect). In addition, models that attempt to summarize all bills (some of which may reach 4000 pages long) may also be inaccurate due to the current limitations of summarization on long texts.\n\nAs a result, this dataset collects bill and summary information; it provides text as a list of sections with the text and header. This could be used to create a summary of sections and then a summary of summaries.",
"### Source Data\n\nURL",
"#### Initial Data Collection and Normalization\n\nThe data consists of the US congress bills that were collected from the URL service provided by the United States Government Publishing Office (GPO) under CC0-1.0 license.",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\nURL",
"### Licensing Information\n\nBill and summary information are public and are unlicensed, as it is data produced by government entities. The collection and enhancement work that we provide for this dataset, to the degree it may be covered by copyright, is released under CC0.",
"### Contributions\n\nThanks to @aih @BorodaUA, @alexbojko for adding this dataset."
] | [
"TAGS\n#task_categories-summarization #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #bills #legal #region-us \n",
"# Dataset Card for \"bill_summary_us\"",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: BillML\n- Repository: BillML\n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nDataset for summarization of US Congressional bills (bill_summary_us).",
"### Supported Tasks and Leaderboards",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances",
"#### default",
"### Data Fields\n\n- id: id of the bill in the format (congress number + bill type + bill number + bill version).\n- congress: number of the congress.\n- bill_type: type of the bill.\n- bill_number: number of the bill.\n- bill_version: version of the bill.\n- sections: list of bill sections with section_id, text and header.\n- sections_length: the length of the sections list.\n- text: bill text.\n- text_length: number of characters in the text.\n- summary: summary of the bill.\n- summary_length: number of characters in the summary.\n- title: official title of the bill.",
"### Data Splits\n\ntrain",
"## Dataset Creation",
"### Curation Rationale\n\nBills (proposed laws) are specialized, structured documents with great public significance. Often, the language of a bill may not directly explain the potential impact of the legislation. For bills in the U.S. Congress, the Congressional Research Service of the Library of Congress provides professional, non-partisan summaries of bills. These are valuable for public understanding of the bills and serve as an essential part of the lawmaking process to understand the meaning and potential legislative impact.\n\nThis dataset collects the text of bills, some metadata, as well as the CRS summaries. In order to build more accurate ML models for bill summarization it is important to have a clean dataset, alongside the professionally-written CRS summaries. ML summarization models built on generic data are bound to produce less accurate results (sometimes creating summaries that describe the opposite of a bill's actual effect). In addition, models that attempt to summarize all bills (some of which may reach 4000 pages long) may also be inaccurate due to the current limitations of summarization on long texts.\n\nAs a result, this dataset collects bill and summary information; it provides text as a list of sections with the text and header. This could be used to create a summary of sections and then a summary of summaries.",
"### Source Data\n\nURL",
"#### Initial Data Collection and Normalization\n\nThe data consists of the US congress bills that were collected from the URL service provided by the United States Government Publishing Office (GPO) under CC0-1.0 license.",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\nURL",
"### Licensing Information\n\nBill and summary information are public and are unlicensed, as it is data produced by government entities. The collection and enhancement work that we provide for this dataset, to the degree it may be covered by copyright, is released under CC0.",
"### Contributions\n\nThanks to @aih @BorodaUA, @alexbojko for adding this dataset."
] |
34b62ff3c2487b0e4a7cf74b19d636fe73b26e0c |
# Dataset Card for "saf_legal_domain_german"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
This Short Answer Feedback (SAF) dataset contains 19 German questions in the domain of German social law (with reference answers). The idea of constructing a bilingual (English and German) short answer dataset as a way to remedy the lack of content-focused feedback datasets was introduced in [Your Answer is Incorrect... Would you like to know why? Introducing a Bilingual Short Answer Feedback Dataset](https://aclanthology.org/2022.acl-long.587) (Filighera et al., ACL 2022). Please refer to [saf_micro_job_german](https://huggingface.co/datasets/Short-Answer-Feedback/saf_micro_job_german) and [saf_communication_networks_english](https://huggingface.co/datasets/Short-Answer-Feedback/saf_communication_networks_english) for similarly constructed datasets that can be used for SAF tasks.
### Supported Tasks and Leaderboards
- `short_answer_feedback`: The dataset can be used to train a Text2Text Generation model from HuggingFace transformers in order to generate automatic short answer feedback.
### Languages
The questions, reference answers, provided answers and the answer feedback in the dataset are written in German.
## Dataset Structure
### Data Instances
An example of an entry of the training split looks as follows.
```
{
"id": "1",
"question": "Ist das eine Frage?",
"reference_answer": "Ja, das ist eine Frage.",
"provided_answer": "Ich bin mir sicher, dass das eine Frage ist.",
"answer_feedback": "Korrekt.",
"verification_feedback": "Correct",
"error_class": "Keine",
"score": 1
}
```
### Data Fields
The data fields are the same among all splits.
- `id`: a `string` feature (UUID4 in HEX format).
- `question`: a `string` feature representing a question.
- `reference_answer`: a `string` feature representing a reference answer to the question.
- `provided_answer`: a `string` feature representing an answer that was provided for a particular question.
- `answer_feedback`: a `string` feature representing the feedback given to the provided answers.
- `verification_feedback`: a `string` feature representing an automatic labeling of the score. It can be `Correct` (`score` = 1), `Incorrect` (`score` = 0) or `Partially correct` (all intermediate scores).
- `error_class`: a `string` feature representing the type of error identified in the case of a not completely correct answer.
- `score`: a `float64` feature (between 0 and 1) representing the score given to the provided answer.
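A small helper illustrating the stated relationship between `score` and `verification_feedback` (a sketch derived only from the mapping described above):

```python
def verification_label(score: float) -> str:
    """Map a score in [0, 1] to the verification_feedback label described above."""
    if not 0.0 <= score <= 1.0:
        raise ValueError(f"score must be in [0, 1], got {score}")
    if score == 1.0:
        return "Correct"
    if score == 0.0:
        return "Incorrect"
    return "Partially correct"

print(verification_label(0.5))  # -> Partially correct
```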
### Data Splits
The dataset is comprised of four data splits.
- `train`: used for training, contains a set of questions and the provided answers to them.
- `validation`: used for validation, contains a set of questions and the provided answers to them (derived from the original training set from which the data came).
- `test_unseen_answers`: used for testing, contains unseen answers to the questions present in the `train` split.
- `test_unseen_questions`: used for testing, contains unseen questions that do not appear in the `train` split.
| Split |train|validation|test_unseen_answers|test_unseen_questions|
|-------------------|----:|---------:|------------------:|--------------------:|
|Number of instances| 1596| 400| 221| 275|
## Additional Information
### Contributions
Thanks to [@JohnnyBoy2103](https://github.com/JohnnyBoy2103) for adding this dataset. | Short-Answer-Feedback/saf_legal_domain_german | [
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:de",
"license:cc-by-4.0",
"short answer feedback",
"legal domain",
"region:us"
] | 2022-11-09T10:35:55+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["other"], "language": ["de"], "license": "cc-by-4.0", "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "pretty_name": "SAF - Legal Domain - German", "tags": ["short answer feedback", "legal domain"], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "reference_answer", "dtype": "string"}, {"name": "provided_answer", "dtype": "string"}, {"name": "answer_feedback", "dtype": "string"}, {"name": "verification_feedback", "dtype": "string"}, {"name": "error_class", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 2142112, "num_examples": 1596}, {"name": "validation", "num_bytes": 550206, "num_examples": 400}, {"name": "test_unseen_answers", "num_bytes": 301087, "num_examples": 221}, {"name": "test_unseen_questions", "num_bytes": 360616, "num_examples": 275}], "download_size": 484808, "dataset_size": 3354021}} | 2023-03-31T10:47:38+00:00 | [] | [
"de"
] | TAGS
#task_categories-text2text-generation #annotations_creators-expert-generated #language_creators-other #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-German #license-cc-by-4.0 #short answer feedback #legal domain #region-us
| Dataset Card for "saf\_legal\_domain\_german"
=============================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Additional Information
+ Contributions
Dataset Description
-------------------
### Dataset Summary
This Short Answer Feedback (SAF) dataset contains 19 German questions in the domain of German social law (with reference answers). The idea of constructing a bilingual (English and German) short answer dataset as a way to remedy the lack of content-focused feedback datasets was introduced in Your Answer is Incorrect... Would you like to know why? Introducing a Bilingual Short Answer Feedback Dataset (Filighera et al., ACL 2022). Please refer to saf\_micro\_job\_german and saf\_communication\_networks\_english for similarly constructed datasets that can be used for SAF tasks.
### Supported Tasks and Leaderboards
* 'short\_answer\_feedback': The dataset can be used to train a Text2Text Generation model from HuggingFace transformers in order to generate automatic short answer feedback.
### Languages
The questions, reference answers, provided answers and the answer feedback in the dataset are written in German.
Dataset Structure
-----------------
### Data Instances
An example of an entry of the training split looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'id': a 'string' feature (UUID4 in HEX format).
* 'question': a 'string' feature representing a question.
* 'reference\_answer': a 'string' feature representing a reference answer to the question.
* 'provided\_answer': a 'string' feature representing an answer that was provided for a particular question.
* 'answer\_feedback': a 'string' feature representing the feedback given to the provided answers.
* 'verification\_feedback': a 'string' feature representing an automatic labeling of the score. It can be 'Correct' ('score' = 1), 'Incorrect' ('score' = 0) or 'Partially correct' (all intermediate scores).
* 'error\_class': a 'string' feature representing the type of error identified in the case of a not completely correct answer.
* 'score': a 'float64' feature (between 0 and 1) representing the score given to the provided answer.
### Data Splits
The dataset is comprised of four data splits.
* 'train': used for training, contains a set of questions and the provided answers to them.
* 'validation': used for validation, contains a set of questions and the provided answers to them (derived from the original training set from which the data came).
* 'test\_unseen\_answers': used for testing, contains unseen answers to the questions present in the 'train' split.
* 'test\_unseen\_questions': used for testing, contains unseen questions that do not appear in the 'train' split.
Additional Information
----------------------
### Contributions
Thanks to @JohnnyBoy2103 for adding this dataset.
| [
"### Dataset Summary\n\n\nThis Short Answer Feedback (SAF) dataset contains 19 German questions in the domain of the German social law (with reference answers). The idea of constructing a bilingual (English and German) short answer dataset as a way to remedy the lack of content-focused feedback datasets was introduced in Your Answer is Incorrect... Would you like to know why? Introducing a Bilingual Short Answer Feedback Dataset (Filighera et al., ACL 2022). Please refer to saf\\_micro\\_job\\_german and saf\\_communication\\_networks\\_english for similarly constructed datasets that can be used for SAF tasks.",
"### Supported Tasks and Leaderboards\n\n\n* 'short\\_answer\\_feedback': The dataset can be used to train a Text2Text Generation model from HuggingFace transformers in order to generate automatic short answer feedback.",
"### Languages\n\n\nThe questions, reference answers, provided answers and the answer feedback in the dataset are written in German.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of an entry of the training split looks as follows.",
"### Data Fields\n\n\nThe data fields are the same among all splits.\n\n\n* 'id': a 'string' feature (UUID4 in HEX format).\n* 'question': a 'string' feature representing a question.\n* 'reference\\_answer': a 'string' feature representing a reference answer to the question.\n* 'provided\\_answer': a 'string' feature representing an answer that was provided for a particular question.\n* 'answer\\_feedback': a 'string' feature representing the feedback given to the provided answers.\n* 'verification\\_feedback': a 'string' feature representing an automatic labeling of the score. It can be 'Correct' ('score' = 1), 'Incorrect' ('score' = 0) or 'Partially correct' (all intermediate scores).\n* 'error\\_class': a 'string' feature representing the type of error identified in the case of a not completely correct answer.\n* 'score': a 'float64' feature (between 0 and 1) representing the score given to the provided answer.",
"### Data Splits\n\n\nThe dataset comprises four data splits.\n\n\n* 'train': used for training, contains a set of questions and the provided answers to them.\n* 'validation': used for validation, contains a set of questions and the provided answers to them (derived from the original training set from which the data came).\n* 'test\\_unseen\\_answers': used for testing, contains unseen answers to the questions present in the 'train' split.\n* 'test\\_unseen\\_questions': used for testing, contains unseen questions that do not appear in the 'train' split.\n\n\n\nAdditional Information\n----------------------",
"### Contributions\n\n\nThanks to @JohnnyBoy2103 for adding this dataset."
] | [
"TAGS\n#task_categories-text2text-generation #annotations_creators-expert-generated #language_creators-other #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-German #license-cc-by-4.0 #short answer feedback #legal domain #region-us \n",
"### Dataset Summary\n\n\nThis Short Answer Feedback (SAF) dataset contains 19 German questions in the domain of the German social law (with reference answers). The idea of constructing a bilingual (English and German) short answer dataset as a way to remedy the lack of content-focused feedback datasets was introduced in Your Answer is Incorrect... Would you like to know why? Introducing a Bilingual Short Answer Feedback Dataset (Filighera et al., ACL 2022). Please refer to saf\\_micro\\_job\\_german and saf\\_communication\\_networks\\_english for similarly constructed datasets that can be used for SAF tasks.",
"### Supported Tasks and Leaderboards\n\n\n* 'short\\_answer\\_feedback': The dataset can be used to train a Text2Text Generation model from HuggingFace transformers in order to generate automatic short answer feedback.",
"### Languages\n\n\nThe questions, reference answers, provided answers and the answer feedback in the dataset are written in German.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of an entry of the training split looks as follows.",
"### Data Fields\n\n\nThe data fields are the same among all splits.\n\n\n* 'id': a 'string' feature (UUID4 in HEX format).\n* 'question': a 'string' feature representing a question.\n* 'reference\\_answer': a 'string' feature representing a reference answer to the question.\n* 'provided\\_answer': a 'string' feature representing an answer that was provided for a particular question.\n* 'answer\\_feedback': a 'string' feature representing the feedback given to the provided answers.\n* 'verification\\_feedback': a 'string' feature representing an automatic labeling of the score. It can be 'Correct' ('score' = 1), 'Incorrect' ('score' = 0) or 'Partially correct' (all intermediate scores).\n* 'error\\_class': a 'string' feature representing the type of error identified in the case of a not completely correct answer.\n* 'score': a 'float64' feature (between 0 and 1) representing the score given to the provided answer.",
"### Data Splits\n\n\nThe dataset comprises four data splits.\n\n\n* 'train': used for training, contains a set of questions and the provided answers to them.\n* 'validation': used for validation, contains a set of questions and the provided answers to them (derived from the original training set from which the data came).\n* 'test\\_unseen\\_answers': used for testing, contains unseen answers to the questions present in the 'train' split.\n* 'test\\_unseen\\_questions': used for testing, contains unseen questions that do not appear in the 'train' split.\n\n\n\nAdditional Information\n----------------------",
"### Contributions\n\n\nThanks to @JohnnyBoy2103 for adding this dataset."
] |
98f2b57b8be4e53c21ae981fd42495055004294b | This dataset is based on the "cumulative" configuration of the MultiWoz 2.2 dataset available also on the [HuggingFace Hub](https://huggingface.co/datasets/multi_woz_v22).
Therefore, the system and user utterances, the active intents, and the services are exactly the same.
In addition to the data present in version 2.2, this dataset contains, for each dialogue turn, the annotations from versions 2.1, 2.3, and 2.4.
NOTE:
- Each dialogue turn is composed of a system utterance and a user utterance, in this exact order
- The initial system utterance is filled in with the `none` string
- In the last dialogue turn, it is always the system that greets the user; this last turn is kept and the user utterance is filled in with the `none` string (usually during evaluation this dialogue turn is not considered)
- To be able to save data as an arrow file you need to "pad" the states to all have the same keys. To do this the None value is introduced. Therefore, when you load it back it is convenient to have a way to remove the "padding". In order to do so, a function like the following can help
```python
from typing import Dict, List, Union

def remove_empty_slots(state: Union[Dict[str, Union[List[str], None]], None]) -> Union[Dict[str, List[str]], None]:
if state is None:
return None
return {k: v for k, v in state.items() if v is not None}
```
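For example, a quick usage check of this helper on a padded state (the helper is restated here so the snippet is self-contained; the slot values are illustrative):

```python
from typing import Dict, List, Union

def remove_empty_slots(
    state: Union[Dict[str, Union[List[str], None]], None]
) -> Union[Dict[str, List[str]], None]:
    # Drop the None "padding" entries introduced when saving to arrow.
    if state is None:
        return None
    return {k: v for k, v in state.items() if v is not None}

# A padded state with one filled slot and two "padding" slots.
padded = {"hotel-area": ["north"], "hotel-day": None, "train-people": None}
cleaned = remove_empty_slots(padded)  # -> {"hotel-area": ["north"]}
```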
- The schema has been updated to make all the versions compatible. Basically, the "book" string has been removed from slots in v2.2. The updated schema is the following
```yaml
attraction-area
attraction-name
attraction-type
hotel-area
hotel-day
hotel-internet
hotel-name
hotel-parking
hotel-people
hotel-pricerange
hotel-stars
hotel-stay
hotel-type
restaurant-area
restaurant-day
restaurant-food
restaurant-name
restaurant-people
restaurant-pricerange
restaurant-time
taxi-arriveby
taxi-departure
taxi-destination
taxi-leaveat
train-arriveby
train-day
train-departure
train-destination
train-leaveat
train-people
``` | pietrolesci/multiwoz_all_versions | [
"region:us"
] | 2022-11-09T10:51:56+00:00 | {} | 2022-11-10T11:50:53+00:00 | [] | [] | TAGS
#region-us
| This dataset is based on the "cumulative" configuration of the MultiWoz 2.2 dataset available also on the HuggingFace Hub.
Therefore, the system and user utterances, the active intents, and the services are exactly the same.
In addition to the data present in version 2.2, this dataset contains, for each dialogue turn, the annotations from versions 2.1, 2.3, and 2.4.
NOTE:
- Each dialogue turn is composed of a system utterance and a user utterance, in this exact order
- The initial system utterance is filled in with the 'none' string
- In the last dialogue turn, it is always the system that greets the user; this last turn is kept and the user utterance is filled in with the 'none' string (usually during evaluation this dialogue turn is not considered)
- To be able to save data as an arrow file you need to "pad" the states to all have the same keys. To do this the None value is introduced. Therefore, when you load it back it is convenient to have a way to remove the "padding". In order to do so, a function like the following can help
- The schema has been updated to make all the versions compatible. Basically, the "book" string has been removed from slots in v2.2. The updated schema is the following
| [] | [
"TAGS\n#region-us \n"
] |
c9d83173de7024e112c2d0c815fb0c2b1301dc1e | # Dataset Card for "multi-label-classification-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | andreotte/multi-label-classification-test | [
"region:us"
] | 2022-11-09T12:42:43+00:00 | {"dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "Door", "1": "Eaves", "2": "Gutter", "3": "Vegetation", "4": "Vent", "5": "Window"}}}}, {"name": "pixel_values", "dtype": "image"}], "splits": [{"name": "test", "num_bytes": 9476052.0, "num_examples": 151}, {"name": "train", "num_bytes": 82422534.7, "num_examples": 1315}], "download_size": 91894615, "dataset_size": 91898586.7}} | 2022-11-09T12:42:54+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "multi-label-classification-test"
More Information needed | [
"# Dataset Card for \"multi-label-classification-test\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"multi-label-classification-test\"\n\nMore Information needed"
] |
75b569b006880d60ccd260a7f9492309f2bd7e5e | # Dataset Card for "dummy_data_clean"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | loubnabnl/dummy_data_clean | [
"region:us"
] | 2022-11-09T17:05:20+00:00 | {"dataset_info": {"features": [{"name": "content", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "annotation_id", "dtype": "string"}, {"name": "pii", "dtype": "string"}, {"name": "pii_modified", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3808098.717948718, "num_examples": 400}], "download_size": 1311649, "dataset_size": 3808098.717948718}} | 2022-11-09T17:05:43+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "dummy_data_clean"
More Information needed | [
"# Dataset Card for \"dummy_data_clean\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"dummy_data_clean\"\n\nMore Information needed"
] |
b5742c509417def7094c043d94a9c311b1d63b8e | My photos to train AI | rafaelmotac/rafaelcorreia | [
"region:us"
] | 2022-11-09T17:53:00+00:00 | {} | 2022-11-09T22:39:48+00:00 | [] | [] | TAGS
#region-us
| My photos to train AI | [] | [
"TAGS\n#region-us \n"
] |
3c62f26bafdc4c4e1c16401ad4b32f0a94b46612 | # Dataset Card for "swerec-mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ScandEval/swerec-mini | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:sv",
"license:cc-by-nc-4.0",
"region:us"
] | 2022-11-09T18:15:56+00:00 | {"language": ["sv"], "license": "cc-by-nc-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 713970, "num_examples": 2048}, {"name": "train", "num_bytes": 355633, "num_examples": 1024}, {"name": "val", "num_bytes": 82442, "num_examples": 256}], "download_size": 684710, "dataset_size": 1152045}} | 2023-07-05T08:46:49+00:00 | [] | [
"sv"
] | TAGS
#task_categories-text-classification #size_categories-1K<n<10K #language-Swedish #license-cc-by-nc-4.0 #region-us
| # Dataset Card for "swerec-mini"
More Information needed | [
"# Dataset Card for \"swerec-mini\"\n\nMore Information needed"
] | [
"TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #language-Swedish #license-cc-by-nc-4.0 #region-us \n",
"# Dataset Card for \"swerec-mini\"\n\nMore Information needed"
] |
0172a82241343327a319f1afa42957039e6ab9b4 | # Dataset Card for "indian_food_images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | muhammadbilal5110/indian_food_images | [
"region:us"
] | 2022-11-09T18:19:20+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "burger", "1": "butter_naan", "2": "chai", "3": "chapati", "4": "chole_bhature", "5": "dal_makhani", "6": "dhokla", "7": "fried_rice", "8": "idli", "9": "jalebi", "10": "kaathi_rolls", "11": "kadai_paneer", "12": "kulfi", "13": "masala_dosa", "14": "momos", "15": "paani_puri", "16": "pakode", "17": "pav_bhaji", "18": "pizza", "19": "samosa"}}}}], "splits": [{"name": "test", "num_bytes": -50510587.406603925, "num_examples": 941}, {"name": "train", "num_bytes": -283960930.24139607, "num_examples": 5328}], "download_size": 1600880763, "dataset_size": -334471517.648}} | 2022-11-09T18:20:32+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "indian_food_images"
More Information needed | [
"# Dataset Card for \"indian_food_images\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"indian_food_images\"\n\nMore Information needed"
] |
bac3f20df77a27858495b76880121c1e9531d9c7 |
# Dataset Card for "lmqg/qa_harvesting_from_wikipedia_pseudo"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is a synthetic QA dataset generated with fine-tuned QG models over [`lmqg/qa_harvesting_from_wikipedia`](https://huggingface.co/datasets/lmqg/qa_harvesting_from_wikipedia), 1 million paragraph and answer pairs collected in [Du and Cardie, 2018](https://aclanthology.org/P18-1177/), made for question-answering based evaluation (QAE) of question generation models, as proposed by [Zhang and Bansal, 2019](https://aclanthology.org/D19-1253/).
The `train` split is the synthetic data and the `validation` split is the original validation set of [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/), on which the model should be evaluated.
This contains synthetic QA datasets created with the following QG models:
- [lmqg/bart-base-squad](https://huggingface.co/lmqg/bart-base-squad)
- [lmqg/bart-large-squad](https://huggingface.co/lmqg/bart-large-squad)
- [lmqg/t5-small-squad](https://huggingface.co/lmqg/t5-small-squad)
- [lmqg/t5-base-squad](https://huggingface.co/lmqg/t5-base-squad)
- [lmqg/t5-large-squad](https://huggingface.co/lmqg/t5-large-squad)
See more detail about the QAE at [https://github.com/asahi417/lm-question-generation/tree/master/misc/qa_based_evaluation](https://github.com/asahi417/lm-question-generation/tree/master/misc/emnlp_2022/qa_based_evaluation).
### Supported Tasks and Leaderboards
* `question-answering`
### Languages
English (en)
## Dataset Structure
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `id`: a `string` feature of id
- `title`: a `string` feature of title of the paragraph
- `context`: a `string` feature of paragraph
- `question`: a `string` feature of question
- `answers`: a `json` feature of answers
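Since the splits follow SQuAD conventions, the `answers` field can be sanity-checked against `context`; the sketch below assumes the usual SQuAD layout with `text` and `answer_start` lists (an assumption, since the card only states that the field is JSON):

```python
def answer_spans_ok(context: str, answers: dict) -> bool:
    # Assumed SQuAD-style layout: {"text": [...], "answer_start": [...]}.
    return all(
        context[start:start + len(text)] == text
        for text, start in zip(answers["text"], answers["answer_start"])
    )

ctx = "Cardiff is the capital of Wales."
good = {"text": ["Wales"], "answer_start": [26]}
bad = {"text": ["Wales"], "answer_start": [0]}
```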
### Data Splits
|train |validation|
|--------:|---------:|
|1,092,142| 10,570 |
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` | lmqg/qa_harvesting_from_wikipedia_pseudo | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|wikipedia",
"language:en",
"license:cc-by-4.0",
"arxiv:2210.03992",
"region:us"
] | 2022-11-09T19:05:38+00:00 | {"language": "en", "license": "cc-by-4.0", "multilinguality": "monolingual", "size_categories": "10K<n<100K", "source_datasets": ["extended|wikipedia"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa"], "pretty_name": "Synthetic QA dataset."} | 2022-11-10T11:30:06+00:00 | [
"2210.03992"
] | [
"en"
] | TAGS
#task_categories-question-answering #task_ids-extractive-qa #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|wikipedia #language-English #license-cc-by-4.0 #arxiv-2210.03992 #region-us
| Dataset Card for "lmqg/qa\_harvesting\_from\_wikipedia\_pseudo"
===============================================================
Dataset Description
-------------------
* Repository: URL
* Paper: URL
* Point of Contact: Asahi Ushio
### Dataset Summary
This is a synthetic QA dataset generated with fine-tuned QG models over 'lmqg/qa\_harvesting\_from\_wikipedia', 1 million paragraph and answer pairs collected in Du and Cardie, 2018, made for question-answering based evaluation (QAE) of question generation models, as proposed by Zhang and Bansal, 2019.
The 'train' split is the synthetic data and the 'validation' split is the original validation set of SQuAD, on which the model should be evaluated.
This contains synthetic QA datasets created with the following QG models:
* lmqg/bart-base-squad
* lmqg/bart-large-squad
* lmqg/t5-small-squad
* lmqg/t5-base-squad
* lmqg/t5-large-squad
See more detail about the QAE at URL
### Supported Tasks and Leaderboards
* 'question-answering'
### Languages
English (en)
Dataset Structure
-----------------
### Data Fields
The data fields are the same among all splits.
#### plain\_text
* 'id': a 'string' feature of id
* 'title': a 'string' feature of title of the paragraph
* 'context': a 'string' feature of paragraph
* 'question': a 'string' feature of question
* 'answers': a 'json' feature of answers
### Data Splits
| [
"### Dataset Summary\n\n\nThis is a synthetic QA dataset generated with fine-tuned QG models over 'lmqg/qa\\_harvesting\\_from\\_wikipedia', 1 million paragraph and answer pairs collected in Du and Cardie, 2018, made for question-answering based evaluation (QAE) of question generation models, as proposed by Zhang and Bansal, 2019.\nThe 'train' split is the synthetic data and the 'validation' split is the original validation set of SQuAD, on which the model should be evaluated.\n\n\nThis contains synthetic QA datasets created with the following QG models:\n\n\n* lmqg/bart-base-squad\n* lmqg/bart-large-squad\n* lmqg/t5-small-squad\n* lmqg/t5-base-squad\n* lmqg/t5-large-squad\n\n\nSee more detail about the QAE at URL",
"### Supported Tasks and Leaderboards\n\n\n* 'question-answering'",
"### Languages\n\n\nEnglish (en)\n\n\nDataset Structure\n-----------------",
"### Data Fields\n\n\nThe data fields are the same among all splits.",
"#### plain\\_text\n\n\n* 'id': a 'string' feature of id\n* 'title': a 'string' feature of title of the paragraph\n* 'context': a 'string' feature of paragraph\n* 'question': a 'string' feature of question\n* 'answers': a 'json' feature of answers",
"### Data Splits"
] | [
"TAGS\n#task_categories-question-answering #task_ids-extractive-qa #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|wikipedia #language-English #license-cc-by-4.0 #arxiv-2210.03992 #region-us \n",
"### Dataset Summary\n\n\nThis is a synthetic QA dataset generated with fine-tuned QG models over 'lmqg/qa\\_harvesting\\_from\\_wikipedia', 1 million paragraph and answer pairs collected in Du and Cardie, 2018, made for question-answering based evaluation (QAE) of question generation models, as proposed by Zhang and Bansal, 2019.\nThe 'train' split is the synthetic data and the 'validation' split is the original validation set of SQuAD, on which the model should be evaluated.\n\n\nThis contains synthetic QA datasets created with the following QG models:\n\n\n* lmqg/bart-base-squad\n* lmqg/bart-large-squad\n* lmqg/t5-small-squad\n* lmqg/t5-base-squad\n* lmqg/t5-large-squad\n\n\nSee more detail about the QAE at URL",
"### Supported Tasks and Leaderboards\n\n\n* 'question-answering'",
"### Languages\n\n\nEnglish (en)\n\n\nDataset Structure\n-----------------",
"### Data Fields\n\n\nThe data fields are the same among all splits.",
"#### plain\\_text\n\n\n* 'id': a 'string' feature of id\n* 'title': a 'string' feature of title of the paragraph\n* 'context': a 'string' feature of paragraph\n* 'question': a 'string' feature of question\n* 'answers': a 'json' feature of answers",
"### Data Splits"
] |
e431cd6f537d0c97e854ed2137f4f996d49af5c5 | More information coming soon. | dreamproit/bill_summary | [
"region:us"
] | 2022-11-09T20:03:45+00:00 | {} | 2022-11-10T08:18:27+00:00 | [] | [] | TAGS
#region-us
| More information coming soon. | [] | [
"TAGS\n#region-us \n"
] |
5eb17d96da67cef7250294e82b6a55ea81dcd5d6 | More information coming soon. | dreamproit/bill_summary_ua | [
"region:us"
] | 2022-11-09T20:04:02+00:00 | {} | 2022-11-10T08:18:05+00:00 | [] | [] | TAGS
#region-us
| More information coming soon. | [] | [
"TAGS\n#region-us \n"
] |
88bec913d85b5e2b31dae8730a980a246098c45f |
# text2image multi-prompt(s): a dataset collection
- collection of several text2image prompt datasets
- data was cleaned/normalized with the goal of removing "model specific APIs" like the "--ar" for Midjourney and so on
- data de-duplicated on a basic level: exact duplicate prompts were dropped (_after cleaning and normalization_)
## updates
- Oct 2023: the `default` config has been updated with better deduplication. It was deduplicated with minhash (_params: n-gram size set to 3, deduplication threshold at 0.6, hash function chosen as xxh3 with 32-bit hash bits, and 128 permutations with a batch size of 10,000._) which drops 2+ million rows.
- original version is still available under `config_name="original"`
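The deduplication parameters above can be illustrated with a small, standard-library-only MinHash sketch (the actual pipeline presumably used a dedicated library with the xxh3 hash; `blake2b` stands in here, so this is an approximation for illustration, not the real script):

```python
import hashlib

NUM_PERM = 128   # number of permutations, as in the params above
NGRAM = 3        # word n-gram size
THRESHOLD = 0.6  # estimated-similarity cutoff for calling two prompts duplicates

def shingles(text: str, n: int = NGRAM) -> set:
    # Word n-grams of the lowercased prompt; fall back to the whole prompt
    # when it is shorter than n words.
    toks = text.lower().split()
    if len(toks) < n:
        return {" ".join(toks)}
    return {" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def minhash(text: str, num_perm: int = NUM_PERM) -> list:
    # One salted 32-bit hash per "permutation"; keep the minimum per salt.
    grams = shingles(text)
    sig = []
    for seed in range(num_perm):
        salt = seed.to_bytes(4, "big")
        sig.append(min(
            int.from_bytes(hashlib.blake2b(g.encode() + salt, digest_size=4).digest(), "big")
            for g in grams
        ))
    return sig

def est_similarity(a: list, b: list) -> float:
    # Fraction of matching signature positions estimates Jaccard similarity.
    return sum(x == y for x, y in zip(a, b)) / len(a)

sig_a = minhash("a castle on a hill at sunset, oil painting, highly detailed")
sig_b = minhash("a castle on a hill at sunset, oil painting, highly detailed")
sig_c = minhash("portrait of a robot, cyberpunk, neon lighting, studio photo")
same = est_similarity(sig_a, sig_b)  # identical prompts -> 1.0
diff = est_similarity(sig_a, sig_c)  # unrelated prompts -> close to 0.0
```

A pair would then be treated as duplicates when the estimated similarity reaches `THRESHOLD`.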
## contents
default:
```
DatasetDict({
train: Dataset({
features: ['text', 'src_dataset'],
num_rows: 1677221
})
test: Dataset({
features: ['text', 'src_dataset'],
num_rows: 292876
})
})
```
For `original` config:
```
DatasetDict({
train: Dataset({
features: ['text', 'src_dataset'],
num_rows: 3551734
})
test: Dataset({
features: ['text', 'src_dataset'],
num_rows: 399393
})
})
```
_NOTE: as the other two datasets did not have a `validation` split, the validation split of `succinctly/midjourney-prompts` was merged into `train`._ | pszemraj/text2image-multi-prompt | [
"task_categories:text-generation",
"task_categories:feature-extraction",
"multilinguality:monolingual",
"source_datasets:bartman081523/stable-diffusion-discord-prompts",
"source_datasets:succinctly/midjourney-prompts",
"source_datasets:Gustavosta/Stable-Diffusion-Prompts",
"language:en",
"license:apache-2.0",
"text generation",
"region:us"
] | 2022-11-09T22:47:39+00:00 | {"language": ["en"], "license": "apache-2.0", "multilinguality": ["monolingual"], "source_datasets": ["bartman081523/stable-diffusion-discord-prompts", "succinctly/midjourney-prompts", "Gustavosta/Stable-Diffusion-Prompts"], "task_categories": ["text-generation", "feature-extraction"], "pretty_name": "multi text2image prompts a dataset collection", "tags": ["text generation"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}, {"config_name": "original", "data_files": [{"split": "train", "path": "original/train-*"}, {"split": "test", "path": "original/test-*"}]}], "dataset_info": [{"config_name": "default", "features": [{"name": "text", "dtype": "string"}, {"name": "src_dataset", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 262736830, "num_examples": 1677221}, {"name": "test", "num_bytes": 56294291, "num_examples": 292876}], "download_size": 151054782, "dataset_size": 319031121}, {"config_name": "original", "features": [{"name": "text", "dtype": "string"}, {"name": "src_dataset", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 741427383, "num_examples": 3551734}, {"name": "test", "num_bytes": 83615440, "num_examples": 399393}], "download_size": 402186258, "dataset_size": 825042823}]} | 2023-11-21T13:19:29+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-generation #task_categories-feature-extraction #multilinguality-monolingual #source_datasets-bartman081523/stable-diffusion-discord-prompts #source_datasets-succinctly/midjourney-prompts #source_datasets-Gustavosta/Stable-Diffusion-Prompts #language-English #license-apache-2.0 #text generation #region-us
|
# text2image multi-prompt(s): a dataset collection
- collection of several text2image prompt datasets
- data was cleaned/normalized with the goal of removing "model specific APIs" like the "--ar" for Midjourney and so on
- data de-duplicated on a basic level: exact duplicate prompts were dropped (_after cleaning and normalization_)
## updates
- Oct 2023: the 'default' config has been updated with better deduplication. It was deduplicated with minhash (_params: n-gram size set to 3, deduplication threshold at 0.6, hash function chosen as xxh3 with 32-bit hash bits, and 128 permutations with a batch size of 10,000._) which drops 2+ million rows.
- original version is still available under 'config_name="original"'
## contents
default:
For 'original' config:
_NOTE: as the other two datasets did not have a 'validation' split, the validation split of 'succinctly/midjourney-prompts' was merged into 'train'._ | [
"# text2image multi-prompt(s): a dataset collection\n\n- collection of several text2image prompt datasets\n- data was cleaned/normalized with the goal of removing \"model specific APIs\" like the \"--ar\" for Midjourney and so on\n- data de-duplicated on a basic level: exact duplicate prompts were dropped (_after cleaning and normalization_)",
"## updates\n\n- Oct 2023: the 'default' config has been updated with better deduplication. It was deduplicated with minhash (_params: n-gram size set to 3, deduplication threshold at 0.6, hash function chosen as xxh3 with 32-bit hash bits, and 128 permutations with a batch size of 10,000._) which drops 2+ million rows.\n - original version is still available under 'config_name=\"original\"'",
"## contents\n\n\n\ndefault:\n\n\n\nFor 'original' config:\n\n\n_NOTE: as the other two datasets did not have a 'validation' split, the validation split of 'succinctly/midjourney-prompts' was merged into 'train'._"
] | [
"TAGS\n#task_categories-text-generation #task_categories-feature-extraction #multilinguality-monolingual #source_datasets-bartman081523/stable-diffusion-discord-prompts #source_datasets-succinctly/midjourney-prompts #source_datasets-Gustavosta/Stable-Diffusion-Prompts #language-English #license-apache-2.0 #text generation #region-us \n",
"# text2image multi-prompt(s): a dataset collection\n\n- collection of several text2image prompt datasets\n- data was cleaned/normalized with the goal of removing \"model specific APIs\" like the \"--ar\" for Midjourney and so on\n- data de-duplicated on a basic level: exact duplicate prompts were dropped (_after cleaning and normalization_)",
"## updates\n\n- Oct 2023: the 'default' config has been updated with better deduplication. It was deduplicated with minhash (_params: n-gram size set to 3, deduplication threshold at 0.6, hash function chosen as xxh3 with 32-bit hash bits, and 128 permutations with a batch size of 10,000._) which drops 2+ million rows.\n - original version is still available under 'config_name=\"original\"'",
"## contents\n\n\n\ndefault:\n\n\n\nFor 'original' config:\n\n\n_NOTE: as the other two datasets did not have a 'validation' split, the validation split of 'succinctly/midjourney-prompts' was merged into 'train'._"
] |
3afe16b210dec396ba32a4c4669a951a13c8d1c0 | # Dataset Card for "quick-captioning-dataset-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | nateraw/quick-captioning-dataset-test | [
"region:us"
] | 2022-11-09T23:16:50+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 345244.0, "num_examples": 4}], "download_size": 0, "dataset_size": 345244.0}} | 2022-11-09T23:20:40+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "quick-captioning-dataset-test"
More Information needed | [
"# Dataset Card for \"quick-captioning-dataset-test\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"quick-captioning-dataset-test\"\n\nMore Information needed"
] |
379266b9d42eae2923d3bb4e2fa5e9e4cdc608fe | # Dataset Card for "test_pinkeyrepo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | treksis/test_pinkeyrepo | [
"region:us"
] | 2022-11-10T00:01:22+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 906786.0, "num_examples": 5}], "download_size": 908031, "dataset_size": 906786.0}} | 2022-11-10T00:01:25+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "test_pinkeyrepo"
More Information needed | [
"# Dataset Card for \"test_pinkeyrepo\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"test_pinkeyrepo\"\n\nMore Information needed"
] |
ae03d5b8fc12f95b1b965ef6f3fabf29b6eaf2a8 |
## Description
The Spam SMS dataset is a set of tagged SMS messages collected for SMS spam research. It contains a single collection of 5,574 SMS messages in English, each tagged as ham (legitimate) or spam.
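The underlying collection is commonly distributed as a plain-text file with one `label<TAB>message` pair per line; a parsing sketch under that assumption (the sample lines below are illustrative, not a claim about exact file contents):

```python
from collections import Counter

def parse_sms_lines(lines):
    # Assumed layout per line: "<label>\t<message>", label in {"ham", "spam"}.
    rows = []
    for line in lines:
        label, _, message = line.rstrip("\n").partition("\t")
        if message:
            rows.append((label, message))
    return rows

sample = [
    "ham\tOk lar... Joking wif u oni...",
    "spam\tWINNER!! You have won a prize, call now.",
]
rows = parse_sms_lines(sample)
label_counts = Counter(label for label, _ in rows)
```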
Source: [uciml/sms-spam-collection-dataset](https://www.kaggle.com/datasets/uciml/sms-spam-collection-dataset) | Ngadou/Spam_SMS | [
"license:cc",
"doi:10.57967/hf/0749",
"region:us"
] | 2022-11-10T00:24:36+00:00 | {"license": "cc"} | 2022-11-10T09:06:25+00:00 | [] | [] | TAGS
#license-cc #doi-10.57967/hf/0749 #region-us
|
## Description
The Spam SMS dataset is a set of tagged SMS messages collected for SMS spam research. It contains a single collection of 5,574 SMS messages in English, each tagged as ham (legitimate) or spam.
Source: uciml/sms-spam-collection-dataset | [
"## Description\n\nThe Spam SMS dataset is a set of tagged SMS messages collected for SMS spam research. It contains a single collection of 5,574 SMS messages in English, each tagged as ham (legitimate) or spam.\n\nSource: uciml/sms-spam-collection-dataset"
] | [
"TAGS\n#license-cc #doi-10.57967/hf/0749 #region-us \n",
"## Description\n\nThe Spam SMS is a set of SMS-tagged messages that have been collected for SMS Spam research. It contains one set of SMS messages in English of 5,574 messages, tagged according to being ham (legitimate) or spam.\n\nSource: uciml/sms-spam-collection-dataset"
] |
eddcf0f010fb54164d0ff44402da8be69ac3684b | Dataset contains queries for a Prolog database of facts about USA geography. Taken from [this source](https://www.cs.utexas.edu/users/ml/nldata/geoquery.html) | dvitel/geo | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_ids:language-modeling",
"task_ids:explanation-generation",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:other-en-prolog",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:gpl-2.0",
"geo",
"prolog",
"semantic-parsing",
"code-generation",
"region:us"
] | 2022-11-10T00:30:37+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["gpl-2.0"], "multilinguality": ["other-en-prolog"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["text-generation", "text2text-generation"], "task_ids": ["language-modeling", "explanation-generation"], "pretty_name": "GEO - semantic parsing to Geography Prolog queries", "tags": ["geo", "prolog", "semantic-parsing", "code-generation"]} | 2022-11-10T00:50:17+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-generation #task_categories-text2text-generation #task_ids-language-modeling #task_ids-explanation-generation #annotations_creators-no-annotation #language_creators-expert-generated #multilinguality-other-en-prolog #size_categories-n<1K #source_datasets-original #language-English #license-gpl-2.0 #geo #prolog #semantic-parsing #code-generation #region-us
| Dataset contains queries for Problog database of facts about USA geography. Taken from this source | [] | [
"TAGS\n#task_categories-text-generation #task_categories-text2text-generation #task_ids-language-modeling #task_ids-explanation-generation #annotations_creators-no-annotation #language_creators-expert-generated #multilinguality-other-en-prolog #size_categories-n<1K #source_datasets-original #language-English #license-gpl-2.0 #geo #prolog #semantic-parsing #code-generation #region-us \n"
] |
fe7cf7c231bfd0366e56ed6242d1421d23483e1d | Datasets for HEARTHSTONE card game. Taken from [this source](https://github.com/deepmind/card2code/tree/master/third_party/hearthstone)
| dvitel/hearthstone | [
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:other-en-python",
"size_categories:n<1K",
"language:en",
"license:mit",
"code-synthesis",
"semantic-parsing",
"python",
"hearthstone",
"region:us"
] | 2022-11-10T01:13:57+00:00 | {"annotations_creators": [], "language_creators": [], "language": ["en"], "license": ["mit"], "multilinguality": ["other-en-python"], "size_categories": ["n<1K"], "source_datasets": [], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "HEARTHSTONE - synthesis of python code for card game descriptions", "tags": ["code-synthesis", "semantic-parsing", "python", "hearthstone"]} | 2022-11-10T01:24:14+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-generation #task_ids-language-modeling #multilinguality-other-en-python #size_categories-n<1K #language-English #license-mit #code-synthesis #semantic-parsing #python #hearthstone #region-us
| Datasets for HEARTHSTONE card game. Taken from this source
| [] | [
"TAGS\n#task_categories-text-generation #task_ids-language-modeling #multilinguality-other-en-python #size_categories-n<1K #language-English #license-mit #code-synthesis #semantic-parsing #python #hearthstone #region-us \n"
] |
904ada614d1d3dd374dd4752730b0db9017334df | # Stable Diffusion Prompts 200m
Because Diffusion-DB dataset is too big. So I extracted the prompts out for prompt study.
The file introduction:
- sd_promts_2m.txt : the main dataset.
- sd_top5000.keywords.tsv: the top 5000 frequent key words or phrase.
- | andyyang/stable_diffusion_prompts_2m | [
"license:cc0-1.0",
"region:us"
] | 2022-11-10T04:42:33+00:00 | {"license": "cc0-1.0"} | 2022-11-10T06:38:10+00:00 | [] | [] | TAGS
#license-cc0-1.0 #region-us
| # Stable Diffusion Prompts 200m
Because Diffusion-DB dataset is too big. So I extracted the prompts out for prompt study.
The file introduction:
- sd_promts_2m.txt : the main dataset.
- sd_top5000.URL: the top 5000 frequent key words or phrase.
- | [
"# Stable Diffusion Prompts 200m \n\nBecause Diffusion-DB dataset is too big. So I extracted the prompts out for prompt study. \n\nThe file introduction:\n- sd_promts_2m.txt : the main dataset.\n- sd_top5000.URL: the top 5000 frequent key words or phrase.\n-"
] | [
"TAGS\n#license-cc0-1.0 #region-us \n",
"# Stable Diffusion Prompts 200m \n\nBecause Diffusion-DB dataset is too big. So I extracted the prompts out for prompt study. \n\nThe file introduction:\n- sd_promts_2m.txt : the main dataset.\n- sd_top5000.URL: the top 5000 frequent key words or phrase.\n-"
] |
8d62a7d805261fc2ffd233a4f31e33049d87eec4 | # Dataset Card for COYO-Labeled-300M
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [COYO homepage](https://kakaobrain.com/contents/?contentId=7eca73e3-3089-43cb-b701-332e8a1743fd)
- **Repository:** [COYO repository](https://github.com/kakaobrain/coyo-dataset)
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [COYO email]([email protected])
### Dataset Summary
**COYO-Labeled-300M** is a dataset of **machine-labeled** 300M images-multi-label pairs. We labeled a subset of COYO-700M with a large model (efficientnetv2-xl) trained on imagenet-21k. We followed the same evaluation pipeline as in efficientnet-v2. The labels are the top 50 most likely labels out of the 21,841 classes from imagenet-21k. The label probabilities are provided rather than labels so that the user can select a threshold of their choice for multi-label classification use, or can take the top-1 class for single-class classification use.
In other words, **COYO-Labeled-300M** is an ImageNet-like dataset. Instead of 1.25 million human-labeled samples, it has 300 million machine-labeled samples. This dataset is similar to JFT-300M, which has not been released to the public.
### Supported Tasks and Leaderboards
We empirically validated the quality of COYO-Labeled-300M dataset by re-implementing popular model, [ViT](https://arxiv.org/abs/2010.11929).
We found that our ViT implementation trained on COYO-Labeled-300M performs similar to the performance numbers in the ViT paper trained on JFT-300M.
We also provide weights for the pretrained ViT model on COYO-Labeled-300M as well as its training & fine-tuning code.
### Languages
The labels in the COYO-Labeled-300M dataset consist of English.
## Dataset Structure
### Data Instances
Each instance in COYO-Labeled-300M represents multi-labels and image pair information with meta-attributes.
And we also provide label information, **imagenet21k_tree.pickle**.
```
{
'id': 315,
'url': 'https://a.1stdibscdn.com/pair-of-blue-and-white-table-lamps-for-sale/1121189/f_121556431538206028457/12155643_master.jpg?width=240',
'imagehash': 'daf5a50aae4aa54a',
'labels': [8087, 11054, 8086, 6614, 6966, 8193, 10576, 9710, 4334, 9909, 8090, 10104, 10105, 9602, 5278, 9547, 6978, 12011, 7272, 5273, 6279, 4279, 10903, 8656, 9601, 8795, 9326, 4606, 9907, 9106, 7574, 10006, 7257, 6959, 9758, 9039, 10682, 7164, 5888, 11654, 8201, 4546, 9238, 8197, 10882, 17380, 4470, 5275, 10537, 11548],
'label_probs': [0.4453125, 0.30419921875, 0.09417724609375, 0.033905029296875, 0.03240966796875, 0.0157928466796875, 0.01406097412109375, 0.01129150390625, 0.00978851318359375, 0.00841522216796875, 0.007720947265625, 0.00634002685546875, 0.0041656494140625, 0.004070281982421875, 0.002910614013671875, 0.0028018951416015625, 0.002262115478515625, 0.0020503997802734375, 0.0017080307006835938, 0.0016880035400390625, 0.0016679763793945312, 0.0016613006591796875, 0.0014324188232421875, 0.0012445449829101562, 0.0011739730834960938, 0.0010318756103515625, 0.0008969306945800781, 0.0008792877197265625, 0.0008726119995117188, 0.0008263587951660156, 0.0007123947143554688, 0.0006799697875976562, 0.0006561279296875, 0.0006542205810546875, 0.0006093978881835938, 0.0006046295166015625, 0.0005769729614257812, 0.00057220458984375, 0.0005636215209960938, 0.00055694580078125, 0.0005092620849609375, 0.000507354736328125, 0.000507354736328125, 0.000499725341796875, 0.000484466552734375, 0.0004456043243408203, 0.0004439353942871094, 0.0004355907440185547, 0.00043392181396484375, 0.00041866302490234375],
'width': 240,
'height': 240
}
```
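The card states that users can either threshold `label_probs` for multi-label use or take the top-1 class for single-class use. A minimal sketch of both selection schemes over an instance shaped like the one above (the 0.3 threshold is an arbitrary illustrative choice, and the code assumes `labels`/`label_probs` are ordered by descending probability, as they are in the sample):

```python
def select_labels(instance, threshold=0.3):
    # Multi-label use: keep every class index whose probability clears the threshold.
    return [label for label, prob in zip(instance["labels"], instance["label_probs"])
            if prob >= threshold]

def top1_label(instance):
    # Single-class use: labels/label_probs appear sorted by descending probability,
    # so the first entry is the most likely class.
    return instance["labels"][0]

instance = {  # truncated copy of the sample instance shown above
    "labels": [8087, 11054, 8086, 6614],
    "label_probs": [0.4453125, 0.30419921875, 0.09417724609375, 0.033905029296875],
}

print(select_labels(instance))  # -> [8087, 11054]
print(top1_label(instance))     # -> 8087
```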
### Data Fields
| name | type | description |
|--------------------------|---------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| id | long | Unique 64-bit integer ID generated by [monotonically_increasing_id()](https://spark.apache.org/docs/3.1.3/api/python/reference/api/pyspark.sql.functions.monotonically_increasing_id.html) which is the same value that is mapped with the existing COYO-700M. |
| url | string | The image URL extracted from the `src` attribute of the `<img>` tag |
| imagehash | string | The [perceptual hash(pHash)](http://www.phash.org/) of the image |
| labels | sequence[integer] | Inference results of EfficientNetV2-XL model trained on ImageNet-21K dataset (Top 50 indices among 21,841 classes) |
| label_probs | sequence[float] | Inference results of EfficientNetV2-XL model trained on ImageNet-21K dataset (Top 50 probabilities among 21,841 classes) |
| width | integer | The width of the image |
| height | integer | The height of the image |
### Data Splits
Data was not split, since the evaluation was expected to be performed on more widely used downstream task(s).
## Dataset Creation
### Curation Rationale
We labeled a subset of COYO-700M with a large model (efficientnetv2-xl) trained on imagenet-21k. Data sampling was done to reach a size similar to JFT-300M, filtered by a specific threshold on the probability of the top-1 label.
### Source Data
[COYO-700M](https://huggingface.co/datasets/kakaobrain/coyo-700m)
#### Who are the source language producers?
[Common Crawl](https://commoncrawl.org/) is the data source for COYO-700M.
### Annotations
#### Annotation process
The dataset was built in a fully automated process that did not require human annotation.
#### Who are the annotators?
No human annotation
### Personal and Sensitive Information
The basic instruction, licenses and contributors are the same as for the [coyo-700m](https://huggingface.co/datasets/kakaobrain/coyo-700m).
| kakaobrain/coyo-labeled-300m | [
"task_categories:image-classification",
"task_ids:multi-label-image-classification",
"annotations_creators:no-annotation",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"image-labeled pairs",
"arxiv:2010.11929",
"region:us"
] | 2022-11-10T06:30:56+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["other"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100M<n<1B"], "source_datasets": ["original"], "task_categories": ["image-classification"], "task_ids": ["multi-label-image-classification"], "pretty_name": "COYO-Labeled-300M", "tags": ["image-labeled pairs"]} | 2022-11-11T01:11:22+00:00 | [
"2010.11929"
] | [
"en"
] | TAGS
#task_categories-image-classification #task_ids-multi-label-image-classification #annotations_creators-no-annotation #language_creators-other #multilinguality-monolingual #size_categories-100M<n<1B #source_datasets-original #language-English #license-cc-by-4.0 #image-labeled pairs #arxiv-2010.11929 #region-us
| Dataset Card for COYO-Labeled-300M
==================================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: COYO homepage
* Repository: COYO repository
* Paper:
* Leaderboard:
* Point of Contact: COYO email
### Dataset Summary
COYO-Labeled-300M is a dataset of machine-labeled 300M images-multi-label pairs. We labeled a subset of COYO-700M with a large model (efficientnetv2-xl) trained on imagenet-21k. We followed the same evaluation pipeline as in efficientnet-v2. The labels are the top 50 most likely labels out of the 21,841 classes from imagenet-21k. The label probabilities are provided rather than labels so that the user can select a threshold of their choice for multi-label classification use, or can take the top-1 class for single-class classification use.
In other words, COYO-Labeled-300M is an ImageNet-like dataset. Instead of 1.25 million human-labeled samples, it has 300 million machine-labeled samples. This dataset is similar to JFT-300M, which has not been released to the public.
### Supported Tasks and Leaderboards
We empirically validated the quality of COYO-Labeled-300M dataset by re-implementing popular model, ViT.
We found that our ViT implementation trained on COYO-Labeled-300M performs similar to the performance numbers in the ViT paper trained on JFT-300M.
We also provide weights for the pretrained ViT model on COYO-Labeled-300M as well as its training & fine-tuning code.
### Languages
The labels in the COYO-Labeled-300M dataset consist of English.
Dataset Structure
-----------------
### Data Instances
Each instance in COYO-Labeled-300M represents multi-labels and image pair information with meta-attributes.
And we also provide label information, imagenet21k\_tree.pickle.
### Data Fields
name: id, type: long, description: Unique 64-bit integer ID generated by monotonically\_increasing\_id() which is the same value that is mapped with the existing COYO-700M.
name: url, type: string, description: The image URL extracted from the 'src' attribute of the '<img>' tag
name: imagehash, type: string, description: The perceptual hash(pHash) of the image
name: labels, type: sequence[integer], description: Inference results of EfficientNetV2-XL model trained on ImageNet-21K dataset (Top 50 indices among 21,841 classes)
name: label\_probs, type: sequence[float], description: Inference results of EfficientNetV2-XL model trained on ImageNet-21K dataset (Top 50 probabilities among 21,841 classes)
name: width, type: integer, description: The width of the image
name: height, type: integer, description: The height of the image
### Data Splits
Data was not split, since the evaluation was expected to be performed on more widely used downstream task(s).
Dataset Creation
----------------
### Curation Rationale
We labeled a subset of COYO-700M with a large model (efficientnetv2-xl) trained on imagenet-21k. Data sampling was done to reach a size similar to JFT-300M, filtered by a specific threshold on the probability of the top-1 label.
### Source Data
COYO-700M
#### Who are the source language producers?
Common Crawl is the data source for COYO-700M.
### Annotations
#### Annotation process
The dataset was built in a fully automated process that did not require human annotation.
#### Who are the annotators?
No human annotation
### Personal and Sensitive Information
The basic instruction, licenses and contributors are the same as for the coyo-700m.
| [
"### Dataset Summary\n\n\nCOYO-Labeled-300M is a dataset of machine-labeled 300M images-multi-label pairs. We labeled subset of COYO-700M with a large model (efficientnetv2-xl) trained on imagenet-21k. We followed the same evaluation pipeline as in efficientnet-v2. The labels are top 50 most likely labels out of 21,841 classes from imagenet-21k. The label probabilies are provided rather than label so that the user can select threshold of their choice for multi-label classification use or can take top-1 class for single class classification use.\n\n\nIn other words, COYO-Labeled-300M is a ImageNet-like dataset. Instead of human labeled 1.25 million samples, it's machine-labeled 300 million samples. This dataset is similar to JFT-300M which is not released to the public.",
"### Supported Tasks and Leaderboards\n\n\nWe empirically validated the quality of COYO-Labeled-300M dataset by re-implementing popular model, ViT.\nWe found that our ViT implementation trained on COYO-Labeled-300M performs similar to the performance numbers in the ViT paper trained on JFT-300M.\nWe also provide weights for the pretrained ViT model on COYO-Labeled-300M as well as its training & fine-tuning code.",
"### Languages\n\n\nThe labels in the COYO-Labeled-300M dataset consist of English.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nEach instance in COYO-Labeled-300M represents multi-labels and image pair information with meta-attributes. \n\nAnd we also provide label information, imagenet21k\\_tree.pickle.",
"### Data Fields\n\n\nname: id, type: long, description: Unique 64-bit integer ID generated by monotonically\\_increasing\\_id() which is the same value that is mapped with the existing COYO-700M.\nname: url, type: string, description: The image URL extracted from the 'src' attribute of the '![]()'\nname: imagehash, type: string, description: The perceptual hash(pHash) of the image\nname: labels, type: sequence[integer], description: Inference results of EfficientNetV2-XL model trained on ImageNet-21K dataset (Top 50 indices among 21,841 classes)\nname: label\\_probs, type: sequence[float], description: Inference results of EfficientNetV2-XL model trained on ImageNet-21K dataset (Top 50 indices among 21,841 probabilites)\nname: width, type: integer, description: The width of the image\nname: height, type: integer, description: The height of the image",
"### Data Splits\n\n\nData was not split, since the evaluation was expected to be performed on more widely used downstream task(s).\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nWe labeled subset of COYO-700M with a large model (efficientnetv2-xl) trained on imagenet-21k. Data sampling was done with a size similar to jft-300m, filtered by a specific threshold for probabilities for the top-1 label.",
"### Source Data\n\n\nCOYO-700M",
"#### Who are the source language producers?\n\n\nCommon Crawl is the data source for COYO-700M.",
"### Annotations",
"#### Annotation process\n\n\nThe dataset was built in a fully automated process that did not require human annotation.",
"#### Who are the annotators?\n\n\nNo human annotation",
"### Personal and Sensitive Information\n\n\nThe basic instruction, licenses and contributors are the same as for the coyo-700m."
] | [
"TAGS\n#task_categories-image-classification #task_ids-multi-label-image-classification #annotations_creators-no-annotation #language_creators-other #multilinguality-monolingual #size_categories-100M<n<1B #source_datasets-original #language-English #license-cc-by-4.0 #image-labeled pairs #arxiv-2010.11929 #region-us \n",
"### Dataset Summary\n\n\nCOYO-Labeled-300M is a dataset of machine-labeled 300M images-multi-label pairs. We labeled subset of COYO-700M with a large model (efficientnetv2-xl) trained on imagenet-21k. We followed the same evaluation pipeline as in efficientnet-v2. The labels are top 50 most likely labels out of 21,841 classes from imagenet-21k. The label probabilies are provided rather than label so that the user can select threshold of their choice for multi-label classification use or can take top-1 class for single class classification use.\n\n\nIn other words, COYO-Labeled-300M is a ImageNet-like dataset. Instead of human labeled 1.25 million samples, it's machine-labeled 300 million samples. This dataset is similar to JFT-300M which is not released to the public.",
"### Supported Tasks and Leaderboards\n\n\nWe empirically validated the quality of COYO-Labeled-300M dataset by re-implementing popular model, ViT.\nWe found that our ViT implementation trained on COYO-Labeled-300M performs similar to the performance numbers in the ViT paper trained on JFT-300M.\nWe also provide weights for the pretrained ViT model on COYO-Labeled-300M as well as its training & fine-tuning code.",
"### Languages\n\n\nThe labels in the COYO-Labeled-300M dataset consist of English.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nEach instance in COYO-Labeled-300M represents multi-labels and image pair information with meta-attributes. \n\nAnd we also provide label information, imagenet21k\\_tree.pickle.",
"### Data Fields\n\n\nname: id, type: long, description: Unique 64-bit integer ID generated by monotonically\\_increasing\\_id() which is the same value that is mapped with the existing COYO-700M.\nname: url, type: string, description: The image URL extracted from the 'src' attribute of the '![]()'\nname: imagehash, type: string, description: The perceptual hash(pHash) of the image\nname: labels, type: sequence[integer], description: Inference results of EfficientNetV2-XL model trained on ImageNet-21K dataset (Top 50 indices among 21,841 classes)\nname: label\\_probs, type: sequence[float], description: Inference results of EfficientNetV2-XL model trained on ImageNet-21K dataset (Top 50 indices among 21,841 probabilites)\nname: width, type: integer, description: The width of the image\nname: height, type: integer, description: The height of the image",
"### Data Splits\n\n\nData was not split, since the evaluation was expected to be performed on more widely used downstream task(s).\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nWe labeled subset of COYO-700M with a large model (efficientnetv2-xl) trained on imagenet-21k. Data sampling was done with a size similar to jft-300m, filtered by a specific threshold for probabilities for the top-1 label.",
"### Source Data\n\n\nCOYO-700M",
"#### Who are the source language producers?\n\n\nCommon Crawl is the data source for COYO-700M.",
"### Annotations",
"#### Annotation process\n\n\nThe dataset was built in a fully automated process that did not require human annotation.",
"#### Who are the annotators?\n\n\nNo human annotation",
"### Personal and Sensitive Information\n\n\nThe basic instruction, licenses and contributors are the same as for the coyo-700m."
] |
10f0d626a402d8a2ef4a98e5d0e41201bdd8a61f | # 1. Overview
This dataset is a collection of 5,000+ images of clothing & apparel that are ready to use for optimizing the accuracy of computer vision models. All of the content is sourced from PIXTA's stock library of 100M+ Asian-featured images and videos. PIXTA is the largest platform of visual materials in the Asia Pacific region offering fully-managed services, high-quality content and data, and powerful tools for businesses & organisations to enable their creative and machine learning projects.
# 2. Use case
The e-commerce apparel dataset could be used for various AI & Computer Vision models: Product Visual Search, Similar Product Recommendation, Product Catalog,... Each data set is supported by both AI and human review processes to ensure labelling consistency and accuracy. Contact us for more custom datasets.
# 3. About PIXTA
PIXTASTOCK is the largest Asian-featured stock platform providing data, contents, tools and services since 2005. PIXTA experiences 15 years of integrating advanced AI technology in managing, curating, processing over 100M visual materials and serving global leading brands for their creative and data demands. Visit us at https://www.pixta.ai/ or contact via our email [email protected]." | pixta-ai/e-commerce-apparel-dataset-for-ai-ml | [
"license:other",
"region:us"
] | 2022-11-10T08:03:47+00:00 | {"license": "other"} | 2023-02-22T14:21:46+00:00 | [] | [] | TAGS
#license-other #region-us
| # 1. Overview
This dataset is a collection of 5,000+ images of clothing & apparel that are ready to use for optimizing the accuracy of computer vision models. All of the content is sourced from PIXTA's stock library of 100M+ Asian-featured images and videos. PIXTA is the largest platform of visual materials in the Asia Pacific region offering fully-managed services, high-quality content and data, and powerful tools for businesses & organisations to enable their creative and machine learning projects.
# 2. Use case
The e-commerce apparel dataset could be used for various AI & Computer Vision models: Product Visual Search, Similar Product Recommendation, Product Catalog,... Each data set is supported by both AI and human review processes to ensure labelling consistency and accuracy. Contact us for more custom datasets.
# 3. About PIXTA
PIXTASTOCK is the largest Asian-featured stock platform providing data, contents, tools and services since 2005. PIXTA experiences 15 years of integrating advanced AI technology in managing, curating, processing over 100M visual materials and serving global leading brands for their creative and data demands. Visit us at URL or contact via our email contact@URL." | [
"# 1. Overview\nThis dataset is a collection of 5,000+ images of clothing & apparels set that are ready to use for optimizing the accuracy of computer vision models. All of the contents is sourced from PIXTA's stock library of 100M+ Asian-featured images and videos. PIXTA is the largest platform of visual materials in the Asia Pacific region offering fully-managed services, high quality contents and data, and powerful tools for businesses & organisations to enable their creative and machine learning projects.",
"# 2. Use case\nThe e-commerce apparel dataset could be used for various AI & Computer Vision models: Product Visual Search, Similar Product Recommendation, Product Catalog,... Each data set is supported by both AI and human review process to ensure labelling consistency and accuracy. Contact us for more custom datasets.",
"# 3. About PIXTA\nPIXTASTOCK is the largest Asian-featured stock platform providing data, contents, tools and services since 2005. PIXTA experiences 15 years of integrating advanced AI technology in managing, curating, processing over 100M visual materials and serving global leading brands for their creative and data demands. Visit us at URL or contact via our email contact@URL.\""
] | [
"TAGS\n#license-other #region-us \n",
"# 1. Overview\nThis dataset is a collection of 5,000+ images of clothing & apparels set that are ready to use for optimizing the accuracy of computer vision models. All of the contents is sourced from PIXTA's stock library of 100M+ Asian-featured images and videos. PIXTA is the largest platform of visual materials in the Asia Pacific region offering fully-managed services, high quality contents and data, and powerful tools for businesses & organisations to enable their creative and machine learning projects.",
"# 2. Use case\nThe e-commerce apparel dataset could be used for various AI & Computer Vision models: Product Visual Search, Similar Product Recommendation, Product Catalog,... Each data set is supported by both AI and human review process to ensure labelling consistency and accuracy. Contact us for more custom datasets.",
"# 3. About PIXTA\nPIXTASTOCK is the largest Asian-featured stock platform providing data, contents, tools and services since 2005. PIXTA experiences 15 years of integrating advanced AI technology in managing, curating, processing over 100M visual materials and serving global leading brands for their creative and data demands. Visit us at URL or contact via our email contact@URL.\""
] |
52c2eb978a809403513e188df36f895cc9067eaf | # Dataset Card for "mnli"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | lucadiliello/mnli | [
"region:us"
] | 2022-11-10T10:07:25+00:00 | {"dataset_info": {"features": [{"name": "key", "dtype": "string"}, {"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "dev_matched", "num_bytes": 1869989, "num_examples": 9815}, {"name": "dev_mismatched", "num_bytes": 1985345, "num_examples": 9832}, {"name": "test_matched", "num_bytes": 1884664, "num_examples": 9796}, {"name": "test_mismatched", "num_bytes": 1986695, "num_examples": 9847}, {"name": "train", "num_bytes": 76786075, "num_examples": 392702}], "download_size": 54416761, "dataset_size": 84512768}} | 2022-11-10T10:08:49+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "mnli"
More Information needed | [
"# Dataset Card for \"mnli\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"mnli\"\n\nMore Information needed"
] |
57f637d30f7a4c5ff44ecd64a63763179bd824e5 | # Dataset Card for "dalio-handwritten-io"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | AlekseyKorshuk/dalio-handwritten-io | [
"region:us"
] | 2022-11-10T11:38:04+00:00 | {"dataset_info": {"features": [{"name": "input_text", "dtype": "string"}, {"name": "output_text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 14786, "num_examples": 10}, {"name": "train", "num_bytes": 186546, "num_examples": 156}, {"name": "validation", "num_bytes": 31729, "num_examples": 29}], "download_size": 114870, "dataset_size": 233061}} | 2022-11-10T11:41:00+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "dalio-handwritten-io"
More Information needed | [
"# Dataset Card for \"dalio-handwritten-io\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"dalio-handwritten-io\"\n\nMore Information needed"
] |
b407d59e558e452bf6bc72f3365d4a622c7fe4f7 | # Dataset Card for "dalio-handwritten-complete"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | AlekseyKorshuk/dalio-handwritten-complete | [
"region:us"
] | 2022-11-10T11:38:28+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 11957, "num_examples": 10}, {"name": "train", "num_bytes": 80837, "num_examples": 55}, {"name": "validation", "num_bytes": 13340, "num_examples": 10}], "download_size": 79024, "dataset_size": 106134}} | 2022-11-10T11:41:36+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "dalio-handwritten-complete"
More Information needed | [
"# Dataset Card for \"dalio-handwritten-complete\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"dalio-handwritten-complete\"\n\nMore Information needed"
] |
248a2ed0252e2ff647f27fe49276a697a9c583ab | # Dataset Card for "dalio-synthetic-io"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | AlekseyKorshuk/dalio-synthetic-io | [
"region:us"
] | 2022-11-10T11:43:41+00:00 | {"dataset_info": {"features": [{"name": "input_text", "dtype": "string"}, {"name": "output_text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 34283, "num_examples": 19}, {"name": "train", "num_bytes": 483245, "num_examples": 303}, {"name": "validation", "num_bytes": 84125, "num_examples": 57}], "download_size": 299043, "dataset_size": 601653}} | 2022-11-10T11:44:04+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "dalio-synthetic-io"
More Information needed | [
"# Dataset Card for \"dalio-synthetic-io\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"dalio-synthetic-io\"\n\nMore Information needed"
] |
0ee966aee92c0ceb06da61cb67cb0b8a5261785d | # Dataset Card for "dalio-synthetic-complete"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | AlekseyKorshuk/dalio-synthetic-complete | [
"region:us"
] | 2022-11-10T11:44:06+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 24972, "num_examples": 19}, {"name": "train", "num_bytes": 209033, "num_examples": 118}, {"name": "validation", "num_bytes": 48527, "num_examples": 22}], "download_size": 165396, "dataset_size": 282532}} | 2022-11-10T11:44:30+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "dalio-synthetic-complete"
More Information needed | [
"# Dataset Card for \"dalio-synthetic-complete\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"dalio-synthetic-complete\"\n\nMore Information needed"
] |
a6415c44a59cc8dcfbf1aa722cc45c8a87e2819c | # Dataset Card for "dalio-all-io"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | AlekseyKorshuk/dalio-all-io | [
"region:us"
] | 2022-11-10T11:44:43+00:00 | {"dataset_info": {"features": [{"name": "input_text", "dtype": "string"}, {"name": "output_text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 40070, "num_examples": 29}, {"name": "train", "num_bytes": 676060, "num_examples": 459}, {"name": "validation", "num_bytes": 118584, "num_examples": 86}], "download_size": 399681, "dataset_size": 834714}} | 2022-11-10T11:45:09+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "dalio-all-io"
More Information needed | [
"# Dataset Card for \"dalio-all-io\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"dalio-all-io\"\n\nMore Information needed"
] |
b6c482ef27596ffcd34956b45eedf37b1ccfc5cb | # Dataset Card for "dalio-all-complete"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | AlekseyKorshuk/dalio-all-complete | [
"region:us"
] | 2022-11-10T11:45:10+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 28784, "num_examples": 29}, {"name": "train", "num_bytes": 302691, "num_examples": 173}, {"name": "validation", "num_bytes": 54939, "num_examples": 33}], "download_size": 210354, "dataset_size": 386414}} | 2022-11-10T11:45:33+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "dalio-all-complete"
More Information needed | [
"# Dataset Card for \"dalio-all-complete\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"dalio-all-complete\"\n\nMore Information needed"
] |
5f22c8d924620cb0aed0dbb6fcd488b98c1b79e6 | # Dataset Card for "shell_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cakiki/shell_paths | [
"region:us"
] | 2022-11-10T12:04:16+00:00 | {"dataset_info": {"features": [{"name": "repository_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 99354502, "num_examples": 3657232}], "download_size": 82635721, "dataset_size": 99354502}} | 2022-11-10T12:04:28+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "shell_paths"
More Information needed | [
"# Dataset Card for \"shell_paths\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"shell_paths\"\n\nMore Information needed"
] |
9fd38e27d47abd2e31ea9449d0a3244ef9cdb9e5 | # Dataset Card for "cmake_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cakiki/cmake_paths | [
"region:us"
] | 2022-11-10T12:05:46+00:00 | {"dataset_info": {"features": [{"name": "repository_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 14898478, "num_examples": 559316}], "download_size": 7920865, "dataset_size": 14898478}} | 2022-11-10T12:05:55+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "cmake_paths"
More Information needed | [
"# Dataset Card for \"cmake_paths\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"cmake_paths\"\n\nMore Information needed"
] |
8d7956373a46b61d5dbbc93eaafac34dbec7f442 | # Dataset Card for "cpp_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cakiki/cpp_paths | [
"region:us"
] | 2022-11-10T12:11:26+00:00 | {"dataset_info": {"features": [{"name": "repository_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 339979633, "num_examples": 13541537}], "download_size": 250743754, "dataset_size": 339979633}} | 2022-11-10T12:11:49+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "cpp_paths"
More Information needed | [
"# Dataset Card for \"cpp_paths\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"cpp_paths\"\n\nMore Information needed"
] |
d9fabc34754e7840bbeaae7c93e51ebee7163cf5 | # Dataset Card for "dockerfile_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cakiki/dockerfile_paths | [
"region:us"
] | 2022-11-10T12:12:30+00:00 | {"dataset_info": {"features": [{"name": "repository_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 36265516, "num_examples": 1274173}], "download_size": 23300431, "dataset_size": 36265516}} | 2022-11-10T12:12:39+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "dockerfile_paths"
More Information needed | [
"# Dataset Card for \"dockerfile_paths\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"dockerfile_paths\"\n\nMore Information needed"
] |
92274710c3b10948f908f2bcc6ad18d4ae46fcbe |
This dataset simply loads Google's Analysis-Ready Cloud Optimized ERA5 Reanalysis dataset from Google Public Datasets. | openclimatefix/arco-era5 | [
"license:apache-2.0",
"region:us"
] | 2022-11-10T12:14:40+00:00 | {"license": "apache-2.0"} | 2022-11-10T12:15:34+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
|
This dataset simply loads Google's Analysis-Ready Cloud Optimized ERA5 Reanalysis dataset from Google Public Datasets. | [] | [
"TAGS\n#license-apache-2.0 #region-us \n"
] |
adda3417bbc9cb098de689b7ff70c50abe247735 | # Dataset Card for "html_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cakiki/html_paths | [
"region:us"
] | 2022-11-10T12:27:27+00:00 | {"dataset_info": {"features": [{"name": "repository_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 904459341, "num_examples": 32312575}], "download_size": 586813218, "dataset_size": 904459341}} | 2022-11-10T12:28:11+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "html_paths"
More Information needed | [
"# Dataset Card for \"html_paths\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"html_paths\"\n\nMore Information needed"
] |
4571f733649eb652dc3f5177bef1ec9d50b23f76 | # Dataset Card for "lua_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cakiki/lua_paths | [
"region:us"
] | 2022-11-10T12:28:24+00:00 | {"dataset_info": {"features": [{"name": "repository_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 21014952, "num_examples": 808034}], "download_size": 11839424, "dataset_size": 21014952}} | 2022-11-10T12:28:33+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lua_paths"
More Information needed | [
"# Dataset Card for \"lua_paths\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lua_paths\"\n\nMore Information needed"
] |
c6c5c093ae298bc26353208f8fde21b423857736 | # Dataset Card for "css_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cakiki/css_paths | [
"region:us"
] | 2022-11-10T12:31:19+00:00 | {"dataset_info": {"features": [{"name": "repository_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 158651499, "num_examples": 5726933}], "download_size": 138140586, "dataset_size": 158651499}} | 2022-11-10T12:31:36+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "css_paths"
More Information needed | [
"# Dataset Card for \"css_paths\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"css_paths\"\n\nMore Information needed"
] |
c63e568f42042af30725cbb49a850dd5baa5f528 | # Dataset Card for "visual-basic_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cakiki/visual-basic_paths | [
"region:us"
] | 2022-11-10T12:31:42+00:00 | {"dataset_info": {"features": [{"name": "repository_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5643571, "num_examples": 200013}], "download_size": 1586937, "dataset_size": 5643571}} | 2022-11-10T12:31:50+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "visual-basic_paths"
More Information needed | [
"# Dataset Card for \"visual-basic_paths\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"visual-basic_paths\"\n\nMore Information needed"
] |
a8591f85e6cd6f947bdd9363baefd5cc922951ad | # Dataset Card for "sql_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cakiki/sql_paths | [
"region:us"
] | 2022-11-10T12:32:20+00:00 | {"dataset_info": {"features": [{"name": "repository_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 35050567, "num_examples": 1267490}], "download_size": 23626806, "dataset_size": 35050567}} | 2022-11-10T12:32:29+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "sql_paths"
More Information needed | [
"# Dataset Card for \"sql_paths\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"sql_paths\"\n\nMore Information needed"
] |
d0c81062b8b7d00c6beb0ef721f1c81d97ead65d | # Dataset Card for "tex_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cakiki/tex_paths | [
"region:us"
] | 2022-11-10T12:32:40+00:00 | {"dataset_info": {"features": [{"name": "repository_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 12350897, "num_examples": 448193}], "download_size": 6578383, "dataset_size": 12350897}} | 2022-11-10T12:32:48+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "tex_paths"
More Information needed | [
"# Dataset Card for \"tex_paths\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"tex_paths\"\n\nMore Information needed"
] |
9ff5bde3da778a10f68fc440cebdf798f08e6c61 | # Dataset Card for "php_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cakiki/php_paths | [
"region:us"
] | 2022-11-10T12:39:49+00:00 | {"dataset_info": {"features": [{"name": "repository_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 910857017, "num_examples": 34179448}], "download_size": 787090086, "dataset_size": 910857017}} | 2022-11-10T12:40:45+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "php_paths"
More Information needed | [
"# Dataset Card for \"php_paths\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"php_paths\"\n\nMore Information needed"
] |
69dc5d4d868e74ccbb29f887b6fdbeded3447ffd | # Dataset Card for "julia_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cakiki/julia_paths | [
"region:us"
] | 2022-11-10T12:40:54+00:00 | {"dataset_info": {"features": [{"name": "repository_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 14862518, "num_examples": 473425}], "download_size": 7932474, "dataset_size": 14862518}} | 2022-11-10T12:41:03+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "julia_paths"
More Information needed | [
"# Dataset Card for \"julia_paths\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"julia_paths\"\n\nMore Information needed"
] |
ff60a980c29a5af1c4dddbbdbc475fe6106ad698 | # Dataset Card for "assembly_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cakiki/assembly_paths | [
"region:us"
] | 2022-11-10T12:41:09+00:00 | {"dataset_info": {"features": [{"name": "repository_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7492209, "num_examples": 324343}], "download_size": 2131380, "dataset_size": 7492209}} | 2022-11-10T12:41:17+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "assembly_paths"
More Information needed | [
"# Dataset Card for \"assembly_paths\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"assembly_paths\"\n\nMore Information needed"
] |
a4b3aab622234840a05f4c79520adbc9a7179844 | # Dataset Card for "makefile_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cakiki/makefile_paths | [
"region:us"
] | 2022-11-10T12:41:35+00:00 | {"dataset_info": {"features": [{"name": "repository_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 28586262, "num_examples": 1087444}], "download_size": 22517681, "dataset_size": 28586262}} | 2022-11-10T12:41:44+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "makefile_paths"
More Information needed | [
"# Dataset Card for \"makefile_paths\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"makefile_paths\"\n\nMore Information needed"
] |
933d86766ee35adaa6be89d23a30229113bf7f35 | # Dataset Card for "javascript_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cakiki/javascript_paths | [
"region:us"
] | 2022-11-10T12:54:11+00:00 | {"dataset_info": {"features": [{"name": "repository_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1086652130, "num_examples": 39278951}], "download_size": 931947481, "dataset_size": 1086652130}} | 2022-11-10T12:55:18+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "javascript_paths"
More Information needed | [
"# Dataset Card for \"javascript_paths\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"javascript_paths\"\n\nMore Information needed"
] |
2e12d8e250ea5827ad64d8481e2dd01122c0bb91 | # Dataset Card for "markdown_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cakiki/markdown_paths | [
"region:us"
] | 2022-11-10T13:01:33+00:00 | {"dataset_info": {"features": [{"name": "repository_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 821714901, "num_examples": 28965353}], "download_size": 663085249, "dataset_size": 821714901}} | 2022-11-10T13:02:25+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "markdown_paths"
More Information needed | [
"# Dataset Card for \"markdown_paths\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"markdown_paths\"\n\nMore Information needed"
] |
b60d3d46f9388d56418c4f7fea1904c3cd6bc4bc | # Dataset Card for "batchfile_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cakiki/batchfile_paths | [
"region:us"
] | 2022-11-10T13:02:31+00:00 | {"dataset_info": {"features": [{"name": "repository_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11616420, "num_examples": 423086}], "download_size": 8986923, "dataset_size": 11616420}} | 2022-11-10T13:02:40+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "batchfile_paths"
More Information needed | [
"# Dataset Card for \"batchfile_paths\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"batchfile_paths\"\n\nMore Information needed"
] |
a480c7b3807ccb6f174055ee918386bc4016975f | # Dataset Card for "c_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cakiki/c_paths | [
"region:us"
] | 2022-11-10T13:08:20+00:00 | {"dataset_info": {"features": [{"name": "repository_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 508253008, "num_examples": 19878729}], "download_size": 359733499, "dataset_size": 508253008}} | 2022-11-10T13:08:59+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "c_paths"
More Information needed | [
"# Dataset Card for \"c_paths\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"c_paths\"\n\nMore Information needed"
] |
66d6e22b5b7a0acf146d5c9bf9a89934b9012d07 | # Dataset Card for "ruby_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cakiki/ruby_paths | [
"region:us"
] | 2022-11-10T13:10:12+00:00 | {"dataset_info": {"features": [{"name": "repository_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 169345268, "num_examples": 6390966}], "download_size": 118905787, "dataset_size": 169345268}} | 2022-11-10T13:10:29+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "ruby_paths"
More Information needed | [
"# Dataset Card for \"ruby_paths\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"ruby_paths\"\n\nMore Information needed"
] |
aa66f3d11cfbdf7c557af0a7252cb2550413770d | # Dataset Card for "haskell_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cakiki/haskell_paths | [
"region:us"
] | 2022-11-10T13:10:45+00:00 | {"dataset_info": {"features": [{"name": "repository_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 23059551, "num_examples": 921236}], "download_size": 12139516, "dataset_size": 23059551}} | 2022-11-10T13:10:54+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "haskell_paths"
More Information needed | [
"# Dataset Card for \"haskell_paths\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"haskell_paths\"\n\nMore Information needed"
] |
bc25ad35e65d45db2445c945f47da9b4ed4fcca4 | # Dataset Card for "fortran_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cakiki/fortran_paths | [
"region:us"
] | 2022-11-10T13:11:00+00:00 | {"dataset_info": {"features": [{"name": "repository_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5773596, "num_examples": 243762}], "download_size": 1463437, "dataset_size": 5773596}} | 2022-11-10T13:11:08+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "fortran_paths"
More Information needed | [
"# Dataset Card for \"fortran_paths\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"fortran_paths\"\n\nMore Information needed"
] |
c2e203ac9c1a0484cf21a7fd6fff2104f0031b31 | # Dataset Card for "c-sharp_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cakiki/c-sharp_paths | [
"region:us"
] | 2022-11-10T13:15:33+00:00 | {"dataset_info": {"features": [{"name": "repository_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 586063746, "num_examples": 20539828}], "download_size": 439948378, "dataset_size": 586063746}} | 2022-11-10T13:16:06+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "c-sharp_paths"
More Information needed | [
"# Dataset Card for \"c-sharp_paths\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"c-sharp_paths\"\n\nMore Information needed"
] |
9a8a57728e5c5b7dd93f45e9cf93b45d0b8ab54a | # Dataset Card for "rust_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cakiki/rust_paths | [
"region:us"
] | 2022-11-10T13:17:08+00:00 | {"dataset_info": {"features": [{"name": "repository_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 71297350, "num_examples": 3087525}], "download_size": 49706578, "dataset_size": 71297350}} | 2022-11-10T13:17:18+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "rust_paths"
More Information needed | [
"# Dataset Card for \"rust_paths\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"rust_paths\"\n\nMore Information needed"
] |
f3f85f8988d13dc427f042efc6e603481d8d3a08 | # Dataset Card for "typescript_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cakiki/typescript_paths | [
"region:us"
] | 2022-11-10T13:21:42+00:00 | {"dataset_info": {"features": [{"name": "repository_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 536493166, "num_examples": 19441648}], "download_size": 434213958, "dataset_size": 536493166}} | 2022-11-10T13:22:14+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "typescript_paths"
More Information needed | [
"# Dataset Card for \"typescript_paths\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"typescript_paths\"\n\nMore Information needed"
] |
70b7bb8cd6c36c701f62f41b0635c8124ac8336d | # Dataset Card for "scala_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cakiki/scala_paths | [
"region:us"
] | 2022-11-10T13:22:51+00:00 | {"dataset_info": {"features": [{"name": "repository_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 68488532, "num_examples": 2635793}], "download_size": 35187635, "dataset_size": 68488532}} | 2022-11-10T13:23:00+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "scala_paths"
More Information needed | [
"# Dataset Card for \"scala_paths\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"scala_paths\"\n\nMore Information needed"
] |
03199d030fdc050ffa9df8a29028d3128fea03ad | # Dataset Card for "python_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cakiki/python_paths | [
"region:us"
] | 2022-11-10T13:28:40+00:00 | {"dataset_info": {"features": [{"name": "repository_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 636121755, "num_examples": 23578465}], "download_size": 550836738, "dataset_size": 636121755}} | 2022-11-10T13:29:19+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "python_paths"
More Information needed | [
"# Dataset Card for \"python_paths\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"python_paths\"\n\nMore Information needed"
] |
471372c3b0255eff62320029c85ab2cd40afd8dc | # Dataset Card for "perl_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cakiki/perl_paths | [
"region:us"
] | 2022-11-10T13:29:30+00:00 | {"dataset_info": {"features": [{"name": "repository_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 14604805, "num_examples": 554602}], "download_size": 4964930, "dataset_size": 14604805}} | 2022-11-10T13:29:38+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "perl_paths"
More Information needed | [
"# Dataset Card for \"perl_paths\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"perl_paths\"\n\nMore Information needed"
] |
6f7ef421b610ca5c88bbb30883740e3d040127aa | # Dataset Card for "go_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cakiki/go_paths | [
"region:us"
] | 2022-11-10T13:32:57+00:00 | {"dataset_info": {"features": [{"name": "repository_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 301556518, "num_examples": 12078461}], "download_size": 219608192, "dataset_size": 301556518}} | 2022-11-10T13:33:17+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "go_paths"
More Information needed | [
"# Dataset Card for \"go_paths\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"go_paths\"\n\nMore Information needed"
] |
275fd28622d04fe9cba55698fc89a51fef7c5a80 | ---
annotations_creators:
- found
language:
- en
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: librispeech-train-100
size_categories: []
source_datasets: []
tags: []
task_categories:
- automatic-speech-recognition
task_ids: []
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
| bgstud/data-librispeech100 | [
"region:us"
] | 2022-11-10T13:33:32+00:00 | {} | 2022-11-10T13:39:01+00:00 | [] | [] | TAGS
#region-us
| ---
annotations_creators:
- found
language:
- en
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: librispeech-train-100
size_categories: []
source_datasets: []
tags: []
task_categories:
- automatic-speech-recognition
task_ids: []
---
# Dataset Card for [Dataset Name]
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @github-username for adding this dataset.
| [
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] | [
"TAGS\n#region-us \n",
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] |
3b71db65661fda9f992bae1b64de5422d46b96fd | # Dataset Card for "java_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cakiki/java_paths | [
"region:us"
] | 2022-11-10T13:42:58+00:00 | {"dataset_info": {"features": [{"name": "repository_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1168673674, "num_examples": 43005815}], "download_size": 919178767, "dataset_size": 1168673674}} | 2022-11-10T13:44:12+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "java_paths"
More Information needed | [
"# Dataset Card for \"java_paths\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"java_paths\"\n\nMore Information needed"
] |
ccad17bea366c31a400121ceab9c11b60811f7f2 | # Dataset Card for "powershell_paths"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cakiki/powershell_paths | [
"region:us"
] | 2022-11-10T13:44:21+00:00 | {"dataset_info": {"features": [{"name": "repository_name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 15534114, "num_examples": 521952}], "download_size": 7947926, "dataset_size": 15534114}} | 2022-11-10T13:44:30+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "powershell_paths"
More Information needed | [
"# Dataset Card for \"powershell_paths\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"powershell_paths\"\n\nMore Information needed"
] |
e1e61ab74cb5165978b478962f0432a3209e194f |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
| bgstud/libri | [
"task_categories:token-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:mit",
"region:us"
] | 2022-11-10T19:48:45+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["token-classification-other-acronym-identification"], "paperswithcode_id": "acronym-identification", "pretty_name": "Acronym Identification Dataset", "train-eval-index": [{"col_mapping": {"labels": "tags", "tokens": "tokens"}, "config": "default", "splits": {"eval_split": "test"}, "task": "token-classification", "task_id": "entity_extraction"}]} | 2022-11-10T20:03:23+00:00 | [] | [
"en"
] | TAGS
#task_categories-token-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-mit #region-us
|
# Dataset Card for [Dataset Name]
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @github-username for adding this dataset.
| [
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] | [
"TAGS\n#task_categories-token-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-mit #region-us \n",
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] |
30db3effcfc07b638291cfcff248b84dbd8013db | # Dataset Card for "saf_micro_job_german"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Annotation process](#annotation-process)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Paper:** [Your Answer is Incorrect... Would you like to know why? Introducing a Bilingual Short Answer Feedback Dataset](https://aclanthology.org/2022.acl-long.587) (Filighera et al., ACL 2022)
### Dataset Summary
Short Answer Feedback (SAF) dataset is a short answer dataset introduced in [Your Answer is Incorrect... Would you like to know why? Introducing a Bilingual Short Answer Feedback Dataset](https://aclanthology.org/2022.acl-long.587) (Filighera et al., ACL 2022) as a way to remedy the lack of content-focused feedback datasets. This version of the dataset contains 8 German questions used in micro-job training on the crowd-worker platform appJobber - while the original dataset presented in the paper is comprised of an assortment of both English and German short answer questions (with reference answers). Please refer to the [saf_communication_networks_english](https://huggingface.co/datasets/Short-Answer-Feedback/saf_communication_networks_english) dataset to examine the English subset of the original dataset. Furthermore, a similarly constructed SAF dataset (covering the German legal domain) can be found at [saf_legal_domain_german](https://huggingface.co/datasets/Short-Answer-Feedback/saf_legal_domain_german).
### Supported Tasks and Leaderboards
- `short_answer_feedback`: The dataset can be used to train a Text2Text Generation model from HuggingFace transformers in order to generate automatic short answer feedback.
### Languages
The questions, reference answers, provided answers and the answer feedback in the dataset are written in German.
## Dataset Structure
### Data Instances
An example of an entry of the training split looks as follows.
```
{
"id": "1",
"question": "Frage 1: Ist das eine Frage?",
"reference_answer": "Ja, das ist eine Frage.",
"provided_answer": "Ich bin mir sicher, dass das eine Frage ist.",
"answer_feedback": "Korrekt!",
"verification_feedback": "Correct",
"score": 1
}
```
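As an illustration, a record like the one above can be flattened into an (input, target) pair for the text2text feedback-generation task mentioned earlier. The prompt template below is an assumption for illustration only, not something prescribed by the dataset; only the field names come from the card.

```python
# Sketch: turning one SAF record into a text2text (input, target) pair.
# Field names match the "Data Fields" section; the prompt wording itself
# (Frage/Musterlösung/Antwort prefixes) is an assumed convention.

def to_text2text(record):
    """Build an (input, target) pair for feedback generation."""
    source = (
        f"Frage: {record['question']} "
        f"Musterlösung: {record['reference_answer']} "
        f"Antwort: {record['provided_answer']}"
    )
    target = f"{record['verification_feedback']}: {record['answer_feedback']}"
    return source, target

# Sample record copied from the Data Instances example above.
sample = {
    "id": "1",
    "question": "Frage 1: Ist das eine Frage?",
    "reference_answer": "Ja, das ist eine Frage.",
    "provided_answer": "Ich bin mir sicher, dass das eine Frage ist.",
    "answer_feedback": "Korrekt!",
    "verification_feedback": "Correct",
    "score": 1.0,
}

src, tgt = to_text2text(sample)
print(src)
print(tgt)  # prints "Correct: Korrekt!"
```

In practice one would map this function over the splits obtained via `load_dataset("Short-Answer-Feedback/saf_micro_job_german")` from the Hugging Face `datasets` library before tokenizing for a text2text model.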
### Data Fields
The data fields are the same among all splits.
- `id`: a `string` feature (UUID4 in HEX format).
- `question`: a `string` feature representing a question.
- `reference_answer`: a `string` feature representing a reference answer to the question.
- `provided_answer`: a `string` feature representing an answer that was provided for a particular question.
- `answer_feedback`: a `string` feature representing the feedback given to the provided answers.
- `verification_feedback`: a `string` feature representing an automatic labeling of the score. It can be `Correct` (`score` = 1), `Incorrect` (`score` = 0) or `Partially correct` (all intermediate scores).
- `score`: a `float64` feature (between 0 and 1) representing the score given to the provided answer.
### Data Splits
The dataset is comprised of four data splits.
- `train`: used for training, contains a set of questions and the provided answers to them.
- `validation`: used for validation, contains a set of questions and the provided answers to them (derived from the original training set defined in the paper).
- `test_unseen_answers`: used for testing, contains unseen answers to the questions present in the `train` split.
- `test_unseen_questions`: used for testing, contains unseen questions that do not appear in the `train` split.
| Split |train|validation|test_unseen_answers|test_unseen_questions|
|-------------------|----:|---------:|------------------:|--------------------:|
|Number of instances| 1226| 308| 271| 602|
## Dataset Creation
### Annotation Process
Two experienced appJobber employees were selected to evaluate the crowd-worker platform’s answers, and both of them underwent a general annotation guideline training (supervised by a Psychology doctoral student with prior work in the field of feedback). After the training, the annotators individually provided feedback to the answers following an agreed upon scoring rubric and the general annotation guideline. The individually annotated answer files were then combined into a cohesive gold standard after discussing and solving possible disagreements.
## Additional Information
### Citation Information
```
@inproceedings{filighera-etal-2022-answer,
title = "Your Answer is Incorrect... Would you like to know why? Introducing a Bilingual Short Answer Feedback Dataset",
author = "Filighera, Anna and
Parihar, Siddharth and
Steuer, Tim and
Meuser, Tobias and
Ochs, Sebastian",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-long.587",
doi = "10.18653/v1/2022.acl-long.587",
pages = "8577--8591",
}
```
### Contributions
Thanks to [@JohnnyBoy2103](https://github.com/JohnnyBoy2103) for adding this dataset. | Short-Answer-Feedback/saf_micro_job_german | [
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:de",
"license:cc-by-4.0",
"short answer feedback",
"micro job",
"region:us"
] | 2022-11-10T21:21:46+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["other"], "language": ["de"], "license": "cc-by-4.0", "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "pretty_name": "SAF - Micro Job - German", "tags": ["short answer feedback", "micro job"], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "reference_answer", "dtype": "string"}, {"name": "provided_answer", "dtype": "string"}, {"name": "answer_feedback", "dtype": "string"}, {"name": "verification_feedback", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 885526, "num_examples": 1226}, {"name": "validation", "num_bytes": 217946, "num_examples": 308}, {"name": "test_unseen_answers", "num_bytes": 198832, "num_examples": 271}, {"name": "test_unseen_questions", "num_bytes": 545524, "num_examples": 602}], "download_size": 274603, "dataset_size": 1847828}} | 2023-03-31T10:47:23+00:00 | [] | [
"de"
] | TAGS
#task_categories-text2text-generation #annotations_creators-expert-generated #language_creators-other #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-German #license-cc-by-4.0 #short answer feedback #micro job #region-us
| Dataset Card for "saf\_micro\_job\_german"
==========================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Annotation process
* Additional Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Paper: Your Answer is Incorrect... Would you like to know why? Introducing a Bilingual Short Answer Feedback Dataset (Filighera et al., ACL 2022)
### Dataset Summary
Short Answer Feedback (SAF) dataset is a short answer dataset introduced in Your Answer is Incorrect... Would you like to know why? Introducing a Bilingual Short Answer Feedback Dataset (Filighera et al., ACL 2022) as a way to remedy the lack of content-focused feedback datasets. This version of the dataset contains 8 German questions used in micro-job training on the crowd-worker platform appJobber - while the original dataset presented in the paper is comprised of an assortment of both English and German short answer questions (with reference answers). Please refer to the saf\_communication\_networks\_english dataset to examine the English subset of the original dataset. Furthermore, a similarly constructed SAF dataset (covering the German legal domain) can be found at saf\_legal\_domain\_german.
### Supported Tasks and Leaderboards
* 'short\_answer\_feedback': The dataset can be used to train a Text2Text Generation model from HuggingFace transformers in order to generate automatic short answer feedback.
### Languages
The questions, reference answers, provided answers and the answer feedback in the dataset are written in German.
Dataset Structure
-----------------
### Data Instances
An example of an entry of the training split looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'id': a 'string' feature (UUID4 in HEX format).
* 'question': a 'string' feature representing a question.
* 'reference\_answer': a 'string' feature representing a reference answer to the question.
* 'provided\_answer': a 'string' feature representing an answer that was provided for a particular question.
* 'answer\_feedback': a 'string' feature representing the feedback given to the provided answers.
* 'verification\_feedback': a 'string' feature representing an automatic labeling of the score. It can be 'Correct' ('score' = 1), 'Incorrect' ('score' = 0) or 'Partially correct' (all intermediate scores).
* 'score': a 'float64' feature (between 0 and 1) representing the score given to the provided answer.
### Data Splits
The dataset is comprised of four data splits.
* 'train': used for training, contains a set of questions and the provided answers to them.
* 'validation': used for validation, contains a set of questions and the provided answers to them (derived from the original training set defined in the paper).
* 'test\_unseen\_answers': used for testing, contains unseen answers to the questions present in the 'train' split.
* 'test\_unseen\_questions': used for testing, contains unseen questions that do not appear in the 'train' split.
Dataset Creation
----------------
### Annotation Process
Two experienced appJobber employees were selected to evaluate the crowd-worker platform’s answers, and both of them underwent a general annotation guideline training (supervised by a Psychology doctoral student with prior work in the field of feedback). After the training, the annotators individually provided feedback to the answers following an agreed upon scoring rubric and the general annotation guideline. The individually annotated answer files were then combined into a cohesive gold standard after discussing and solving possible disagreements.
Additional Information
----------------------
### Contributions
Thanks to @JohnnyBoy2103 for adding this dataset.
| [
"### Dataset Summary\n\n\nShort Answer Feedback (SAF) dataset is a short answer dataset introduced in Your Answer is Incorrect... Would you like to know why? Introducing a Bilingual Short Answer Feedback Dataset (Filighera et al., ACL 2022) as a way to remedy the lack of content-focused feedback datasets. This version of the dataset contains 8 German questions used in micro-job training on the crowd-worker platform appJobber - while the original dataset presented in the paper is comprised of an assortment of both English and German short answer questions (with reference answers). Please refer to the saf\\_communication\\_networks\\_english dataset to examine the English subset of the original dataset. Furthermore, a similarly constructed SAF dataset (covering the German legal domain) can be found at saf\\_legal\\_domain\\_german.",
"### Supported Tasks and Leaderboards\n\n\n* 'short\\_answer\\_feedback': The dataset can be used to train a Text2Text Generation model from HuggingFace transformers in order to generate automatic short answer feedback.",
"### Languages\n\n\nThe questions, reference answers, provided answers and the answer feedback in the dataset are written in German.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of an entry of the training split looks as follows.",
"### Data Fields\n\n\nThe data fields are the same among all splits.\n\n\n* 'id': a 'string' feature (UUID4 in HEX format).\n* 'question': a 'string' feature representing a question.\n* 'reference\\_answer': a 'string' feature representing a reference answer to the question.\n* 'provided\\_answer': a 'string' feature representing an answer that was provided for a particular question.\n* 'answer\\_feedback': a 'string' feature representing the feedback given to the provided answers.\n* 'verification\\_feedback': a 'string' feature representing an automatic labeling of the score. It can be 'Correct' ('score' = 1), 'Incorrect' ('score' = 0) or 'Partially correct' (all intermediate scores).\n* 'score': a 'float64' feature (between 0 and 1) representing the score given to the provided answer.",
"### Data Splits\n\n\nThe dataset is comprised of four data splits.\n\n\n* 'train': used for training, contains a set of questions and the provided answers to them.\n* 'validation': used for validation, contains a set of questions and the provided answers to them (derived from the original training set defined in the paper).\n* 'test\\_unseen\\_answers': used for testing, contains unseen answers to the questions present in the 'train' split.\n* 'test\\_unseen\\_questions': used for testing, contains unseen questions that do not appear in the 'train' split.\n\n\n\nDataset Creation\n----------------",
"### Annotation Process\n\n\nTwo experienced appJobber employees were selected to evaluate the crowd-worker platform’s answers, and both of them underwent a general annotation guideline training (supervised by a Psychology doctoral student with prior work in the field of feedback). After the training, the annotators individually provided feedback to the answers following an agreed upon scoring rubric and the general annotation guideline. The individually annotated answer files were then combined into a cohesive gold standard after discussing and solving possible disagreements.\n\n\nAdditional Information\n----------------------",
"### Contributions\n\n\nThanks to @JohnnyBoy2103 for adding this dataset."
] | [
"TAGS\n#task_categories-text2text-generation #annotations_creators-expert-generated #language_creators-other #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-German #license-cc-by-4.0 #short answer feedback #micro job #region-us \n",
"### Dataset Summary\n\n\nShort Answer Feedback (SAF) dataset is a short answer dataset introduced in Your Answer is Incorrect... Would you like to know why? Introducing a Bilingual Short Answer Feedback Dataset (Filighera et al., ACL 2022) as a way to remedy the lack of content-focused feedback datasets. This version of the dataset contains 8 German questions used in micro-job training on the crowd-worker platform appJobber - while the original dataset presented in the paper is comprised of an assortment of both English and German short answer questions (with reference answers). Please refer to the saf\\_communication\\_networks\\_english dataset to examine the English subset of the original dataset. Furthermore, a similarly constructed SAF dataset (covering the German legal domain) can be found at saf\\_legal\\_domain\\_german.",
"### Supported Tasks and Leaderboards\n\n\n* 'short\\_answer\\_feedback': The dataset can be used to train a Text2Text Generation model from HuggingFace transformers in order to generate automatic short answer feedback.",
"### Languages\n\n\nThe questions, reference answers, provided answers and the answer feedback in the dataset are written in German.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of an entry of the training split looks as follows.",
"### Data Fields\n\n\nThe data fields are the same among all splits.\n\n\n* 'id': a 'string' feature (UUID4 in HEX format).\n* 'question': a 'string' feature representing a question.\n* 'reference\\_answer': a 'string' feature representing a reference answer to the question.\n* 'provided\\_answer': a 'string' feature representing an answer that was provided for a particular question.\n* 'answer\\_feedback': a 'string' feature representing the feedback given to the provided answers.\n* 'verification\\_feedback': a 'string' feature representing an automatic labeling of the score. It can be 'Correct' ('score' = 1), 'Incorrect' ('score' = 0) or 'Partially correct' (all intermediate scores).\n* 'score': a 'float64' feature (between 0 and 1) representing the score given to the provided answer.",
"### Data Splits\n\n\nThe dataset is comprised of four data splits.\n\n\n* 'train': used for training, contains a set of questions and the provided answers to them.\n* 'validation': used for validation, contains a set of questions and the provided answers to them (derived from the original training set defined in the paper).\n* 'test\\_unseen\\_answers': used for testing, contains unseen answers to the questions present in the 'train' split.\n* 'test\\_unseen\\_questions': used for testing, contains unseen questions that do not appear in the 'train' split.\n\n\n\nDataset Creation\n----------------",
"### Annotation Process\n\n\nTwo experienced appJobber employees were selected to evaluate the crowd-worker platform’s answers, and both of them underwent a general annotation guideline training (supervised by a Psychology doctoral student with prior work in the field of feedback). After the training, the annotators individually provided feedback to the answers following an agreed upon scoring rubric and the general annotation guideline. The individually annotated answer files were then combined into a cohesive gold standard after discussing and solving possible disagreements.\n\n\nAdditional Information\n----------------------",
"### Contributions\n\n\nThanks to @JohnnyBoy2103 for adding this dataset."
] |
9358d6f0f87371c0a5f150502b14cb16e382195f | # Dataset Card for "saf_communication_networks_english"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Annotation process](#annotation-process)
- [Additional Information](#additional-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Paper:** [Your Answer is Incorrect... Would you like to know why? Introducing a Bilingual Short Answer Feedback Dataset](https://aclanthology.org/2022.acl-long.587) (Filighera et al., ACL 2022)
### Dataset Summary
Short Answer Feedback (SAF) dataset is a short answer dataset introduced in [Your Answer is Incorrect... Would you like to know why? Introducing a Bilingual Short Answer Feedback Dataset](https://aclanthology.org/2022.acl-long.587) (Filighera et al., ACL 2022) as a way to remedy the lack of content-focused feedback datasets. This version of the dataset contains 31 English questions covering a range of college-level communication networks topics - while the original dataset presented in the paper is comprised of an assortment of both English and German short answer questions (with reference answers). Please refer to the [saf_micro_job_german](https://huggingface.co/datasets/Short-Answer-Feedback/saf_micro_job_german) dataset to examine the German subset of the original dataset. Furthermore, a similarly constructed SAF dataset (covering the German legal domain) can be found at [saf_legal_domain_german](https://huggingface.co/datasets/Short-Answer-Feedback/saf_legal_domain_german).
### Supported Tasks and Leaderboards
- `short_answer_feedback`: The dataset can be used to train a Text2Text Generation model from HuggingFace transformers in order to generate automatic short answer feedback.
### Languages
The questions, reference answers, provided answers and the answer feedback in the dataset are written in English.
## Dataset Structure
### Data Instances
An example of an entry of the training split looks as follows.
```
{
"id": "1",
"question": "Is this a question?",
"reference_answer": "Yes, that is a question.",
"provided_answer": "I'm certain this is a question.",
"answer_feedback": "The response is correct.",
"verification_feedback": "Correct",
"score": 1
}
```
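For the Text2Text setup described above, each entry has to be serialized into an input/target string pair. The helper below is a minimal sketch of one possible serialization, using the sample entry shown above; the prompt layout and field ordering are illustrative choices, not prescribed by the dataset:

```python
def to_seq2seq_pair(example):
    """Serialize one SAF entry into an (input, target) string pair.

    The input concatenates the question, reference answer and provided
    answer; the target prefixes the textual feedback with the
    verification label. Any consistent scheme would work equally well.
    """
    source = (
        f"question: {example['question']} "
        f"reference: {example['reference_answer']} "
        f"answer: {example['provided_answer']}"
    )
    target = f"{example['verification_feedback']} {example['answer_feedback']}"
    return source, target

example = {
    "question": "Is this a question?",
    "reference_answer": "Yes, that is a question.",
    "provided_answer": "I'm certain this is a question.",
    "answer_feedback": "The response is correct.",
    "verification_feedback": "Correct",
    "score": 1,
}
src, tgt = to_seq2seq_pair(example)
print(tgt)  # Correct The response is correct.
```

Pairs produced this way can be tokenized and fed to any standard sequence-to-sequence trainer.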
### Data Fields
The data fields are the same among all splits.
- `id`: a `string` feature (UUID4 in HEX format).
- `question`: a `string` feature representing a question.
- `reference_answer`: a `string` feature representing a reference answer to the question.
- `provided_answer`: a `string` feature representing an answer that was provided for a particular question.
- `answer_feedback`: a `string` feature representing the feedback given to the provided answers.
- `verification_feedback`: a `string` feature representing an automatic labeling of the score. It can be `Correct` (`score` = maximum points achievable), `Incorrect` (`score` = 0) or `Partially correct` (all intermediate scores).
- `score`: a `float64` feature representing the score given to the provided answer. For most questions it ranges from 0 to 1.
### Data Splits
The dataset is comprised of four data splits.
- `train`: used for training, contains a set of questions and the provided answers to them.
- `validation`: used for validation, contains a set of questions and the provided answers to them (derived from the original training set defined in the paper).
- `test_unseen_answers`: used for testing, contains unseen answers to the questions present in the `train` split.
- `test_unseen_questions`: used for testing, contains unseen questions that do not appear in the `train` split.
| Split |train|validation|test_unseen_answers|test_unseen_questions|
|-------------------|----:|---------:|------------------:|--------------------:|
|Number of instances| 1700| 427| 375| 479|
## Dataset Creation
### Annotation Process
Two graduate students who had completed the communication networks course were selected to evaluate the answers, and both of them underwent a general annotation guideline training (supervised by a Psychology doctoral student with prior work in the field of feedback). After the training, the annotators individually provided feedback to the answers following an agreed upon scoring rubric and the general annotation guideline. The individually annotated answer files were then combined into a cohesive gold standard after discussing and solving possible disagreements.
## Additional Information
### Citation Information
```
@inproceedings{filighera-etal-2022-answer,
title = "Your Answer is Incorrect... Would you like to know why? Introducing a Bilingual Short Answer Feedback Dataset",
author = "Filighera, Anna and
Parihar, Siddharth and
Steuer, Tim and
Meuser, Tobias and
Ochs, Sebastian",
booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.acl-long.587",
doi = "10.18653/v1/2022.acl-long.587",
pages = "8577--8591",
}
```
### Contributions
Thanks to [@JohnnyBoy2103](https://github.com/JohnnyBoy2103) for adding this dataset. | Short-Answer-Feedback/saf_communication_networks_english | [
"task_categories:text2text-generation",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"short answer feedback",
"communication networks",
"region:us"
] | 2022-11-10T21:22:13+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["other"], "language": ["en"], "license": "cc-by-4.0", "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text2text-generation"], "pretty_name": "SAF - Communication Networks - English", "tags": ["short answer feedback", "communication networks"], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "reference_answer", "dtype": "string"}, {"name": "provided_answer", "dtype": "string"}, {"name": "answer_feedback", "dtype": "string"}, {"name": "verification_feedback", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 2363828, "num_examples": 1700}, {"name": "validation", "num_bytes": 592869, "num_examples": 427}, {"name": "test_unseen_answers", "num_bytes": 515669, "num_examples": 375}, {"name": "test_unseen_questions", "num_bytes": 777945, "num_examples": 479}], "download_size": 941169, "dataset_size": 4250311}} | 2023-03-31T10:46:04+00:00 | [] | [
"en"
] | TAGS
#task_categories-text2text-generation #annotations_creators-expert-generated #language_creators-other #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-4.0 #short answer feedback #communication networks #region-us
| Dataset Card for "saf\_communication\_networks\_english"
========================================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Annotation process
* Additional Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Paper: Your Answer is Incorrect... Would you like to know why? Introducing a Bilingual Short Answer Feedback Dataset (Filighera et al., ACL 2022)
### Dataset Summary
Short Answer Feedback (SAF) dataset is a short answer dataset introduced in Your Answer is Incorrect... Would you like to know why? Introducing a Bilingual Short Answer Feedback Dataset (Filighera et al., ACL 2022) as a way to remedy the lack of content-focused feedback datasets. This version of the dataset contains 31 English questions covering a range of college-level communication networks topics - while the original dataset presented in the paper is comprised of an assortment of both English and German short answer questions (with reference answers). Please refer to the saf\_micro\_job\_german dataset to examine the German subset of the original dataset. Furthermore, a similarly constructed SAF dataset (covering the German legal domain) can be found at saf\_legal\_domain\_german.
### Supported Tasks and Leaderboards
* 'short\_answer\_feedback': The dataset can be used to train a Text2Text Generation model from HuggingFace transformers in order to generate automatic short answer feedback.
### Languages
The questions, reference answers, provided answers and the answer feedback in the dataset are written in English.
Dataset Structure
-----------------
### Data Instances
An example of an entry of the training split looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'id': a 'string' feature (UUID4 in HEX format).
* 'question': a 'string' feature representing a question.
* 'reference\_answer': a 'string' feature representing a reference answer to the question.
* 'provided\_answer': a 'string' feature representing an answer that was provided for a particular question.
* 'answer\_feedback': a 'string' feature representing the feedback given to the provided answers.
* 'verification\_feedback': a 'string' feature representing an automatic labeling of the score. It can be 'Correct' ('score' = maximum points achievable), 'Incorrect' ('score' = 0) or 'Partially correct' (all intermediate scores).
* 'score': a 'float64' feature representing the score given to the provided answer. For most questions it ranges from 0 to 1.
### Data Splits
The dataset is comprised of four data splits.
* 'train': used for training, contains a set of questions and the provided answers to them.
* 'validation': used for validation, contains a set of questions and the provided answers to them (derived from the original training set defined in the paper).
* 'test\_unseen\_answers': used for testing, contains unseen answers to the questions present in the 'train' split.
* 'test\_unseen\_questions': used for testing, contains unseen questions that do not appear in the 'train' split.
Dataset Creation
----------------
### Annotation Process
Two graduate students who had completed the communication networks course were selected to evaluate the answers, and both of them underwent a general annotation guideline training (supervised by a Psychology doctoral student with prior work in the field of feedback). After the training, the annotators individually provided feedback to the answers following an agreed upon scoring rubric and the general annotation guideline. The individually annotated answer files were then combined into a cohesive gold standard after discussing and solving possible disagreements.
Additional Information
----------------------
### Contributions
Thanks to @JohnnyBoy2103 for adding this dataset.
| [
"### Dataset Summary\n\n\nShort Answer Feedback (SAF) dataset is a short answer dataset introduced in Your Answer is Incorrect... Would you like to know why? Introducing a Bilingual Short Answer Feedback Dataset (Filighera et al., ACL 2022) as a way to remedy the lack of content-focused feedback datasets. This version of the dataset contains 31 English questions covering a range of college-level communication networks topics - while the original dataset presented in the paper is comprised of an assortment of both English and German short answer questions (with reference answers). Please refer to the saf\\_micro\\_job\\_german dataset to examine the German subset of the original dataset. Furthermore, a similarly constructed SAF dataset (covering the German legal domain) can be found at saf\\_legal\\_domain\\_german.",
"### Supported Tasks and Leaderboards\n\n\n* 'short\\_answer\\_feedback': The dataset can be used to train a Text2Text Generation model from HuggingFace transformers in order to generate automatic short answer feedback.",
"### Languages\n\n\nThe questions, reference answers, provided answers and the answer feedback in the dataset are written in English.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of an entry of the training split looks as follows.",
"### Data Fields\n\n\nThe data fields are the same among all splits.\n\n\n* 'id': a 'string' feature (UUID4 in HEX format).\n* 'question': a 'string' feature representing a question.\n* 'reference\\_answer': a 'string' feature representing a reference answer to the question.\n* 'provided\\_answer': a 'string' feature representing an answer that was provided for a particular question.\n* 'answer\\_feedback': a 'string' feature representing the feedback given to the provided answers.\n* 'verification\\_feedback': a 'string' feature representing an automatic labeling of the score. It can be 'Correct' ('score' = maximum points achievable), 'Incorrect' ('score' = 0) or 'Partially correct' (all intermediate scores).\n* 'score': a 'float64' feature representing the score given to the provided answer. For most questions it ranges from 0 to 1.",
"### Data Splits\n\n\nThe dataset is comprised of four data splits.\n\n\n* 'train': used for training, contains a set of questions and the provided answers to them.\n* 'validation': used for validation, contains a set of questions and the provided answers to them (derived from the original training set defined in the paper).\n* 'test\\_unseen\\_answers': used for testing, contains unseen answers to the questions present in the 'train' split.\n* 'test\\_unseen\\_questions': used for testing, contains unseen questions that do not appear in the 'train' split.\n\n\n\nDataset Creation\n----------------",
"### Annotation Process\n\n\nTwo graduate students who had completed the communication networks course were selected to evaluate the answers, and both of them underwent a general annotation guideline training (supervised by a Psychology doctoral student with prior work in the field of feedback). After the training, the annotators individually provided feedback to the answers following an agreed upon scoring rubric and the general annotation guideline. The individually annotated answer files were then combined into a cohesive gold standard after discussing and solving possible disagreements.\n\n\nAdditional Information\n----------------------",
"### Contributions\n\n\nThanks to @JohnnyBoy2103 for adding this dataset."
] | [
"TAGS\n#task_categories-text2text-generation #annotations_creators-expert-generated #language_creators-other #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-4.0 #short answer feedback #communication networks #region-us \n",
"### Dataset Summary\n\n\nShort Answer Feedback (SAF) dataset is a short answer dataset introduced in Your Answer is Incorrect... Would you like to know why? Introducing a Bilingual Short Answer Feedback Dataset (Filighera et al., ACL 2022) as a way to remedy the lack of content-focused feedback datasets. This version of the dataset contains 31 English questions covering a range of college-level communication networks topics - while the original dataset presented in the paper is comprised of an assortment of both English and German short answer questions (with reference answers). Please refer to the saf\\_micro\\_job\\_german dataset to examine the German subset of the original dataset. Furthermore, a similarly constructed SAF dataset (covering the German legal domain) can be found at saf\\_legal\\_domain\\_german.",
"### Supported Tasks and Leaderboards\n\n\n* 'short\\_answer\\_feedback': The dataset can be used to train a Text2Text Generation model from HuggingFace transformers in order to generate automatic short answer feedback.",
"### Languages\n\n\nThe questions, reference answers, provided answers and the answer feedback in the dataset are written in English.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of an entry of the training split looks as follows.",
"### Data Fields\n\n\nThe data fields are the same among all splits.\n\n\n* 'id': a 'string' feature (UUID4 in HEX format).\n* 'question': a 'string' feature representing a question.\n* 'reference\\_answer': a 'string' feature representing a reference answer to the question.\n* 'provided\\_answer': a 'string' feature representing an answer that was provided for a particular question.\n* 'answer\\_feedback': a 'string' feature representing the feedback given to the provided answers.\n* 'verification\\_feedback': a 'string' feature representing an automatic labeling of the score. It can be 'Correct' ('score' = maximum points achievable), 'Incorrect' ('score' = 0) or 'Partially correct' (all intermediate scores).\n* 'score': a 'float64' feature representing the score given to the provided answer. For most questions it ranges from 0 to 1.",
"### Data Splits\n\n\nThe dataset is comprised of four data splits.\n\n\n* 'train': used for training, contains a set of questions and the provided answers to them.\n* 'validation': used for validation, contains a set of questions and the provided answers to them (derived from the original training set defined in the paper).\n* 'test\\_unseen\\_answers': used for testing, contains unseen answers to the questions present in the 'train' split.\n* 'test\\_unseen\\_questions': used for testing, contains unseen questions that do not appear in the 'train' split.\n\n\n\nDataset Creation\n----------------",
"### Annotation Process\n\n\nTwo graduate students who had completed the communication networks course were selected to evaluate the answers, and both of them underwent a general annotation guideline training (supervised by a Psychology doctoral student with prior work in the field of feedback). After the training, the annotators individually provided feedback to the answers following an agreed upon scoring rubric and the general annotation guideline. The individually annotated answer files were then combined into a cohesive gold standard after discussing and solving possible disagreements.\n\n\nAdditional Information\n----------------------",
"### Contributions\n\n\nThanks to @JohnnyBoy2103 for adding this dataset."
] |
dfc63068215c270ac0f6702228fd80ea2ae170a5 | # TyDiP: A Dataset for Politeness Classification in Nine Typologically Diverse Languages
This repo contains the code and data for the EMNLP 2022 findings paper TyDiP: A Dataset for Politeness Classification in Nine Typologically Diverse Languages which can be found [here](https://aclanthology.org/2022.findings-emnlp.420/).
## Data
The TyDiP dataset is licensed under the [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license.
The `data` folder contains the different files we release as part of the TyDiP dataset. The TyDiP dataset comprises an English train set and an English test set that are adapted from the Stanford Politeness Corpus, and test data in 9 more languages (Hindi, Korean, Spanish, Tamil, French, Vietnamese, Russian, Afrikaans, Hungarian) that we annotated.
```
data/
├── all
├── binary
└── unlabelled_train_sets
```
`data/all` consists of the complete train and test sets.
`data/binary` is a filtered version of the above in which only sentences from the top and bottom 25th percentiles of scores are present. This is the data that we used for training and evaluation in the paper.
`data/unlabelled_train_sets`
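The quartile filtering used to build `data/binary` can be sketched as below; the exact thresholding and the label convention (1 = polite top quartile, 0 = impolite bottom quartile) are assumptions for illustration, not taken from the release scripts:

```python
def filter_binary(rows):
    """Keep only rows whose score falls in the bottom or top quartile.

    `rows` is a list of (text, score) pairs; returns (text, label) pairs
    with label 1 for the top quartile and 0 for the bottom quartile.
    """
    scores = sorted(score for _, score in rows)
    lo = scores[int(0.25 * (len(scores) - 1))]
    hi = scores[int(0.75 * (len(scores) - 1))]
    kept = []
    for text, score in rows:
        if score <= lo:
            kept.append((text, 0))
        elif score >= hi:
            kept.append((text, 1))
    return kept

rows = [("a", 0.1), ("b", 0.4), ("c", 0.5), ("d", 0.6), ("e", 0.9)]
print(filter_binary(rows))  # [('a', 0), ('b', 0), ('d', 1), ('e', 1)]
```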
## Code
`politeness_regressor.py` is used for training and evaluation of transformer models.
To train a model
```
python politeness_regressor.py --train_file data/binary/en_train_binary.csv --test_file data/binary/en_test_binary.csv --model_save_location model.pt --pretrained_model xlm-roberta-large --gpus 1 --batch_size 4 --accumulate_grad_batches 8 --max_epochs 5 --checkpoint_callback False --logger False --precision 16 --train --test --binary --learning_rate 5e-6
```
To test this trained model on $lang
```
python politeness_regressor.py --test_file data/binary/${lang}_test_binary.csv --load_model model.pt --gpus 1 --batch_size 32 --test --binary
```
## Pretrained Model
XLM-Roberta Large finetuned on the English train set (as discussed and evaluated in the paper) can be found [here](https://huggingface.co/Genius1237/xlm-roberta-large-tydip)
## Politeness Strategies
`strategies` contains the processed strategy lexicon for different languages. `strategies/learnt_strategies.xlsx` contains the human-edited strategies for 4 languages.
## Annotation Interface
`annotation.html` contains the UI used for conducting data annotation
## Citation
If you use the English train or test data, please cite the Stanford Politeness Dataset
```
@inproceedings{danescu-niculescu-mizil-etal-2013-computational,
title = "A computational approach to politeness with application to social factors",
author = "Danescu-Niculescu-Mizil, Cristian and
Sudhof, Moritz and
Jurafsky, Dan and
Leskovec, Jure and
Potts, Christopher",
booktitle = "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2013",
address = "Sofia, Bulgaria",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P13-1025",
pages = "250--259",
}
```
If you use the test data from the 9 target languages, please cite our paper
```
@inproceedings{srinivasan-choi-2022-tydip,
title = "{T}y{D}i{P}: A Dataset for Politeness Classification in Nine Typologically Diverse Languages",
author = "Srinivasan, Anirudh and
Choi, Eunsol",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-emnlp.420",
pages = "5723--5738",
}
```
| Genius1237/TyDiP | [
"task_categories:text-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"language:en",
"language:hi",
"language:ko",
"language:es",
"language:ta",
"language:fr",
"language:vi",
"language:ru",
"language:af",
"language:hu",
"license:cc-by-4.0",
"politeness",
"wikipedia",
"multilingual",
"region:us"
] | 2022-11-11T01:08:56+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en", "hi", "ko", "es", "ta", "fr", "vi", "ru", "af", "hu"], "license": ["cc-by-4.0"], "multilinguality": ["multilingual"], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "TyDiP", "tags": ["politeness", "wikipedia", "multilingual"]} | 2023-10-15T04:14:26+00:00 | [] | [
"en",
"hi",
"ko",
"es",
"ta",
"fr",
"vi",
"ru",
"af",
"hu"
] | TAGS
#task_categories-text-classification #annotations_creators-crowdsourced #language_creators-found #multilinguality-multilingual #size_categories-1K<n<10K #language-English #language-Hindi #language-Korean #language-Spanish #language-Tamil #language-French #language-Vietnamese #language-Russian #language-Afrikaans #language-Hungarian #license-cc-by-4.0 #politeness #wikipedia #multilingual #region-us
| # TyDiP: A Dataset for Politeness Classification in Nine Typologically Diverse Languages
This repo contains the code and data for the EMNLP 2022 findings paper TyDiP: A Dataset for Politeness Classification in Nine Typologically Diverse Languages which can be found here.
## Data
The TyDiP dataset is licensed under the CC BY 4.0 license.
The 'data' folder contains the different files we release as part of the TyDiP dataset. The TyDiP dataset comprises an English train set and an English test set that are adapted from the Stanford Politeness Corpus, and test data in 9 more languages (Hindi, Korean, Spanish, Tamil, French, Vietnamese, Russian, Afrikaans, Hungarian) that we annotated.
'data/all' consists of the complete train and test sets.
'data/binary' is a filtered version of the above in which only sentences from the top and bottom 25th percentiles of scores are present. This is the data that we used for training and evaluation in the paper.
'data/unlabelled_train_sets'
## Code
'politeness_regressor.py' is used for training and evaluation of transformer models.
To train a model
To test this trained model on $lang
## Pretrained Model
XLM-Roberta Large finetuned on the English train set (as discussed and evaluated in the paper) can be found here
## Politeness Strategies
'strategies' contains the processed strategy lexicon for different languages. 'strategies/learnt_strategies.xlsx' contains the human-edited strategies for 4 languages.
## Annotation Interface
'URL' contains the UI used for conducting data annotation
If you use the English train or test data, please cite the Stanford Politeness Dataset
If you use the test data from the 9 target languages, please cite our paper
| [
"# TyDiP: A Dataset for Politeness Classification in Nine Typologically Diverse Languages\nThis repo contains the code and data for the EMNLP 2022 findings paper TyDiP: A Dataset for Politeness Classification in Nine Typologically Diverse Languages which can be found here.",
"## Data\nThe TyDiP dataset is licensed under the CC BY 4.0 license.\n\nThe 'data' folder contains the different files we release as part of the TyDiP dataset. The TyDiP dataset comprises of an English train set and English test set that are adapted from the Stanford Politeness Corpus, and test data in 9 more languages (Hindi, Korean, Spanish, Tamil, French, Vietnamese, Russian, Afrikaans, Hungarian) that we annotated.\n\n\n'data/all' consists of the complete train and test sets. \n'data/binary' is a filtered version of the above where sentences from the top and bottom 25 percentile of scores is only present. This is the data that we used for training and evaluation in the paper. \n'data/unlabelled_train_sets'",
"## Code\n'politeness_regresor.py' is used for training and evaluation of transformer models\n\nTo train a model\n\n\nTo test this trained model on $lang",
"## Pretrained Model\nXLM-Roberta Large finetuned on the English train set (as discussed and evaluated in the paper) can be found here",
"## Politeness Strategies\n'strategies' contains the processed strategy lexicon for different languages. 'strategies/learnt_strategies.xlsx' contains the human edited strategies for 4 langauges",
"## Annotation Interface\n'URL' contains the UI used for conducting data annotation\n\nIf you use the English train or test data, please cite the Stanford Politeness Dataset\n\nIf you use the test data from the 9 target languages, please cite our paper"
] | [
"TAGS\n#task_categories-text-classification #annotations_creators-crowdsourced #language_creators-found #multilinguality-multilingual #size_categories-1K<n<10K #language-English #language-Hindi #language-Korean #language-Spanish #language-Tamil #language-French #language-Vietnamese #language-Russian #language-Afrikaans #language-Hungarian #license-cc-by-4.0 #politeness #wikipedia #multilingual #region-us \n",
"# TyDiP: A Dataset for Politeness Classification in Nine Typologically Diverse Languages\nThis repo contains the code and data for the EMNLP 2022 findings paper TyDiP: A Dataset for Politeness Classification in Nine Typologically Diverse Languages which can be found here.",
"## Data\nThe TyDiP dataset is licensed under the CC BY 4.0 license.\n\nThe 'data' folder contains the different files we release as part of the TyDiP dataset. The TyDiP dataset comprises of an English train set and English test set that are adapted from the Stanford Politeness Corpus, and test data in 9 more languages (Hindi, Korean, Spanish, Tamil, French, Vietnamese, Russian, Afrikaans, Hungarian) that we annotated.\n\n\n'data/all' consists of the complete train and test sets. \n'data/binary' is a filtered version of the above where sentences from the top and bottom 25 percentile of scores is only present. This is the data that we used for training and evaluation in the paper. \n'data/unlabelled_train_sets'",
"## Code\n'politeness_regresor.py' is used for training and evaluation of transformer models\n\nTo train a model\n\n\nTo test this trained model on $lang",
"## Pretrained Model\nXLM-Roberta Large finetuned on the English train set (as discussed and evaluated in the paper) can be found here",
"## Politeness Strategies\n'strategies' contains the processed strategy lexicon for different languages. 'strategies/learnt_strategies.xlsx' contains the human edited strategies for 4 langauges",
"## Annotation Interface\n'URL' contains the UI used for conducting data annotation\n\nIf you use the English train or test data, please cite the Stanford Politeness Dataset\n\nIf you use the test data from the 9 target languages, please cite our paper"
] |
c08c13295ebd8111ab96879ceba43b99ec28afdb | # Dataset Card for "SemanticScholarAbstracts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | KaiserML/SemanticScholarAbstracts | [
"region:us"
] | 2022-11-11T02:45:31+00:00 | {"dataset_info": {"features": [{"name": "corpusid", "dtype": "int64"}, {"name": "openaccessinfo", "struct": [{"name": "externalids", "struct": [{"name": "ACL", "dtype": "string"}, {"name": "ArXiv", "dtype": "string"}, {"name": "DOI", "dtype": "string"}, {"name": "MAG", "dtype": "string"}, {"name": "PubMedCentral", "dtype": "string"}]}, {"name": "license", "dtype": "string"}, {"name": "status", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "abstract", "dtype": "string"}, {"name": "updated", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 59461773143.463005, "num_examples": 48314588}], "download_size": 37596463269, "dataset_size": 59461773143.463005}} | 2022-11-11T03:47:32+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "SemanticScholarAbstracts"
More Information needed | [
"# Dataset Card for \"SemanticScholarAbstracts\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"SemanticScholarAbstracts\"\n\nMore Information needed"
] |
8491055a606b9b1ec690b39e36fbdf1fddb4c4bc | # Dataset Card for "lab"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | claytonsamples/lab | [
"region:us"
] | 2022-11-11T02:53:57+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "kimwipes", "1": "nitrile gloves", "2": "petri dish", "3": "serological pipette"}}}}], "splits": [{"name": "train", "num_bytes": 22915125.09, "num_examples": 1415}], "download_size": 19042401, "dataset_size": 22915125.09}} | 2022-11-11T02:58:25+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lab"
More Information needed | [
"# Dataset Card for \"lab\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lab\"\n\nMore Information needed"
] |
13462a42e3375e80a9bd46a64c58e1bcaba77874 | # Dataset Card for "rvl_cdip_400_train_val_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
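The `label` column is a ClassLabel over the 16 RVL-CDIP document types. A small sketch of mapping integer class ids back to names (the list below mirrors the class order given in the dataset metadata):

```python
RVL_CDIP_LABELS = [
    "letter", "form", "email", "handwritten", "advertisement",
    "scientific report", "scientific publication", "specification",
    "file folder", "news article", "budget", "invoice",
    "presentation", "questionnaire", "resume", "memo",
]

def id2label(idx):
    """Map an integer class id to its RVL-CDIP document-type name."""
    return RVL_CDIP_LABELS[idx]

print(id2label(14))  # resume
```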
### Dataset Format
````
DatasetDict({
test: Dataset({
features: ['image', 'label', 'ground_truth'],
num_rows: 1600
})
train: Dataset({
features: ['image', 'label', 'ground_truth'],
num_rows: 6400
})
validation: Dataset({
features: ['image', 'label', 'ground_truth'],
num_rows: 1600
})
})
```` | jinhybr/rvl_cdip_400_train_val_test | [
"region:us"
] | 2022-11-11T04:01:53+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "letter", "1": "form", "2": "email", "3": "handwritten", "4": "advertisement", "5": "scientific report", "6": "scientific publication", "7": "specification", "8": "file folder", "9": "news article", "10": "budget", "11": "invoice", "12": "presentation", "13": "questionnaire", "14": "resume", "15": "memo"}}}}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 197669272.0, "num_examples": 1600}, {"name": "train", "num_bytes": 781258280.0, "num_examples": 6400}, {"name": "validation", "num_bytes": 191125740.0, "num_examples": 1600}], "download_size": 1101475597, "dataset_size": 1170053292.0}} | 2022-11-11T15:58:02+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "rvl_cdip_400_train_val_test"
More Information needed
### Dataset Format
' | [
"# Dataset Card for \"rvl_cdip_400_train_val_test\"\n\nMore Information needed",
"### Dataset Format\n\n'"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"rvl_cdip_400_train_val_test\"\n\nMore Information needed",
"### Dataset Format\n\n'"
] |
091aec1e2384a20b2b36eb96177755ca13dd0b42 | The small 20K version of the Pubmed-RCT dataset by Dernoncourt et al (2017).
```
@article{dernoncourt2017pubmed,
title={Pubmed 200k rct: a dataset for sequential sentence classification in medical abstracts},
author={Dernoncourt, Franck and Lee, Ji Young},
journal={arXiv preprint arXiv:1710.06071},
year={2017}
}
```
Note: This is the cleaned up version by Jin and Szolovits (2018).
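In the standard PubMed-RCT release, each abstract is a `###<pmid>` header followed by `LABEL<TAB>sentence` lines, with blank lines between abstracts; that layout is assumed here from the original release and is not restated in this card. A minimal parser sketch:

```python
def parse_rct(text):
    """Parse PubMed-RCT-style text into (pmid, [(label, sentence), ...]) tuples.

    Assumes the '###<pmid>' / 'LABEL<TAB>sentence' layout of the
    original Dernoncourt & Lee release.
    """
    abstracts, pmid, sents = [], None, []
    for line in text.splitlines():
        line = line.rstrip()
        if line.startswith("###"):
            pmid, sents = line[3:], []
        elif not line:
            if pmid is not None:
                abstracts.append((pmid, sents))
                pmid = None
        else:
            label, sentence = line.split("\t", 1)
            sents.append((label, sentence))
    if pmid is not None:  # flush the last abstract
        abstracts.append((pmid, sents))
    return abstracts

sample = "###24293578\nOBJECTIVE\tTo assess the effect.\nMETHODS\tA randomized trial.\n"
abstracts = parse_rct(sample)
print(abstracts[0][0], len(abstracts[0][1]))  # 24293578 2
```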
```
@article{jin2018hierarchical,
title={Hierarchical neural networks for sequential sentence classification in medical scientific abstracts},
author={Jin, Di and Szolovits, Peter},
journal={arXiv preprint arXiv:1808.06161},
year={2018}
}
``` | armanc/pubmed-rct20k | [
"region:us"
] | 2022-11-11T04:20:56+00:00 | {} | 2022-11-11T08:23:24+00:00 | [] | [] | TAGS
#region-us
The small 20K version of the Pubmed-RCT dataset by Dernoncourt et al. (2017).
Note: This is the cleaned up version by Jin and Szolovits (2018).
| [] | [
"TAGS\n#region-us \n"
] |
d9327e0fa300d66c0c577330a624a39626f1192e | This is the ScientificQA dataset by Saikh et al (2022).
```
@article{10.1007/s00799-022-00329-y,
author = {Saikh, Tanik and Ghosal, Tirthankar and Mittal, Amish and Ekbal, Asif and Bhattacharyya, Pushpak},
title = {ScienceQA: A Novel Resource for Question Answering on Scholarly Articles},
year = {2022},
journal = {Int. J. Digit. Libr.},
month = {sep}
}
```
| armanc/ScienceQA | [
"region:us"
] | 2022-11-11T05:03:56+00:00 | {} | 2022-11-11T08:34:35+00:00 | [] | [] | TAGS
#region-us
| This is the ScienceQA dataset by Saikh et al. (2022).
'''
@article{10.1007/s00799-022-00329-y,
author = {Saikh, Tanik and Ghosal, Tirthankar and Mittal, Amish and Ekbal, Asif and Bhattacharyya, Pushpak},
title = {ScienceQA: A Novel Resource for Question Answering on Scholarly Articles},
year = {2022},
journal = {Int. J. Digit. Libr.},
month = {sep}
}
'''
| [] | [
"TAGS\n#region-us \n"
] |
6c6b552186533303a3f2153e6cd2b931ba8e2434 |
# Dataset Card for IDK-MRC
## Dataset Description
- **Repository:** [rifkiaputri/IDK-MRC](https://github.com/rifkiaputri/IDK-MRC)
- **Paper:** [PDF](https://aclanthology.org/2022.emnlp-main.465/)
- **Point of Contact:** [rifkiaputri](https://github.com/rifkiaputri)
### Dataset Summary
I(n)dontKnow-MRC (IDK-MRC) is an Indonesian Machine Reading Comprehension dataset that covers answerable and unanswerable questions. Based on the combination of the existing answerable questions in TyDiQA, the new unanswerable questions in IDK-MRC are generated using a question generation model and human-written questions. Each paragraph in the dataset has a set of answerable and unanswerable questions with the corresponding answer.
### Supported Tasks
IDK-MRC is mainly intended to train Machine Reading Comprehension or extractive QA models.
### Languages
Indonesian
## Dataset Structure
### Data Instances
```
{
"context": "Para ilmuwan menduga bahwa megalodon terlihat seperti hiu putih yang lebih kekar, walaupun hiu ini juga mungkin tampak seperti hiu raksasa (Cetorhinus maximus) atau hiu harimau-pasir (Carcharias taurus). Hewan ini dianggap sebagai salah satu predator terbesar dan terkuat yang pernah ada, dan fosil-fosilnya sendiri menunjukkan bahwa panjang maksimal hiu raksasa ini mencapai 18 m, sementara rata-rata panjangnya berkisar pada angka 10,5 m. Rahangnya yang besar memiliki kekuatan gigitan antara 110.000 hingga 180.000 newton. Gigi mereka tebal dan kuat, dan telah berevolusi untuk menangkap mangsa dan meremukkan tulang.",
"qas":
[
{
"id": "indonesian--6040202845759439489-1",
"is_impossible": false,
"question": "Apakah jenis hiu terbesar di dunia ?",
"answers":
[
{
"text": "megalodon",
"answer_start": 27
}
]
},
{
"id": "indonesian-0426116372962619813-unans-h-2",
"is_impossible": true,
"question": "Apakah jenis hiu terkecil di dunia?",
"answers":
[]
},
{
"id": "indonesian-2493757035872656854-unans-h-2",
"is_impossible": true,
"question": "Apakah jenis hiu betina terbesar di dunia?",
"answers":
[]
}
]
}
```
### Data Fields
Each instance has several fields:
- `context`: context passage/paragraph as a string
- `qas`: list of questions related to the `context`
- `id`: question ID as a string
- `is_impossible`: whether the question is unanswerable (impossible to answer) or not as a boolean
- `question`: question as a string
- `answers`: list of answers
- `text`: answer as a string
- `answer_start`: answer start index as an integer
### Data Splits
- `train`: 9,332 (5,042 answerable, 4,290 unanswerable)
- `valid`: 764 (382 answerable, 382 unanswerable)
- `test`: 844 (422 answerable, 422 unanswerable)
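As a quick illustration (a minimal sketch — the helper name is made up and not part of any official tooling, and the `context` is truncated), the `is_impossible` flag described above can be used to separate answerable from unanswerable questions in an instance shaped like the example in Data Instances:

```python
# Hypothetical helper: split one IDK-MRC-style instance's questions by the
# "is_impossible" flag described in the Data Fields section above.
def split_by_answerability(instance):
    answerable, unanswerable = [], []
    for qa in instance["qas"]:
        target = unanswerable if qa["is_impossible"] else answerable
        target.append(qa["question"])
    return answerable, unanswerable

# Instance shaped like the "Data Instances" example (context truncated).
sample = {
    "context": "Para ilmuwan menduga bahwa megalodon ...",
    "qas": [
        {"id": "q1", "is_impossible": False,
         "question": "Apakah jenis hiu terbesar di dunia ?",
         "answers": [{"text": "megalodon", "answer_start": 27}]},
        {"id": "q2", "is_impossible": True,
         "question": "Apakah jenis hiu terkecil di dunia?",
         "answers": []},
    ],
}

answerable, unanswerable = split_by_answerability(sample)
print(answerable)    # ['Apakah jenis hiu terbesar di dunia ?']
print(unanswerable)  # ['Apakah jenis hiu terkecil di dunia?']
```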
## Dataset Creation
### Curation Rationale
IDK-MRC dataset is built based on the existing paragraph and answerable questions (ans) in TyDiQA-GoldP (Clark et al., 2020). The new unanswerable questions are automatically generated using the combination of mT5 (Xue et al., 2021) and XLM-R (Conneau et al., 2020) models, which are then manually verified by human annotators (filtered ans and filtered unans). We also asked the annotators to manually write additional unanswerable questions as described in §3.3 (additional unans). Each paragraph in the final dataset will have a set of filtered ans, filtered unans, and additional unans questions.
### Annotations
#### Annotation process
In our dataset collection pipeline, the annotators are asked to validate the model-generated unanswerable questions and write new additional unanswerable questions.
#### Who are the annotators?
We recruit four annotators with 2+ years of experience in Indonesian NLP annotation using direct recruitment. All of them are Indonesian native speakers who reside in Indonesia (Java Island) and fall under the 18–34 age category. We set the payment to around $7.5 per hour. Given the annotators’ demographic, we ensure that the payment is above the minimum wage rate (as of December 2021). All annotators also have signed the consent form and agreed to participate in this project.
## Considerations for Using the Data
The paragraphs and answerable questions that we utilized to build IDK-MRC dataset are taken from Indonesian subset of TyDiQA-GoldP dataset (Clark et al., 2020), which originates from Wikipedia articles. Since those articles are written from a neutral point of view, the risk of harmful content is minimal. Also, all model-generated questions in our dataset have been validated by human annotators to eliminate the risk of harmful questions. During the manual question generation process, the annotators are also encouraged to avoid producing possibly offensive questions.
Even so, we argue that further assessment is needed before using our dataset and models in real-world applications. This measurement is especially required for the pre-trained language models used in our experiments, namely mT5 (Xue et al., 2021), IndoBERT (Wilie et al., 2020), mBERT (Devlin et al., 2019), and XLM-R (Conneau et al., 2020). These language models are mostly pre-trained on the common-crawl dataset, which may contain harmful biases or stereotypes.
## Additional Information
### Licensing Information
CC BY-SA 4.0
### Citation Information
```bibtex
@inproceedings{putri-oh-2022-idk,
title = "{IDK}-{MRC}: Unanswerable Questions for {I}ndonesian Machine Reading Comprehension",
author = "Putri, Rifki Afina and
Oh, Alice",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.emnlp-main.465",
pages = "6918--6933",
}
```
| rifkiaputri/idk-mrc | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:machine-generated",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended|tydiqa",
"language:id",
"license:cc-by-4.0",
"region:us"
] | 2022-11-11T05:56:43+00:00 | {"annotations_creators": ["machine-generated", "expert-generated"], "language_creators": ["machine-generated", "expert-generated"], "language": ["id"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["extended|tydiqa"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa"], "pretty_name": "IDK-MRC", "tags": []} | 2023-05-23T06:43:23+00:00 | [] | [
"id"
] | TAGS
#task_categories-question-answering #task_ids-extractive-qa #annotations_creators-machine-generated #annotations_creators-expert-generated #language_creators-machine-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-extended|tydiqa #language-Indonesian #license-cc-by-4.0 #region-us
|
# Dataset Card for IDK-MRC
## Dataset Description
- Repository: rifkiaputri/IDK-MRC
- Paper: PDF
- Point of Contact: rifkiaputri
### Dataset Summary
I(n)dontKnow-MRC (IDK-MRC) is an Indonesian Machine Reading Comprehension dataset that covers answerable and unanswerable questions. Based on the combination of the existing answerable questions in TyDiQA, the new unanswerable questions in IDK-MRC are generated using a question generation model and human-written questions. Each paragraph in the dataset has a set of answerable and unanswerable questions with the corresponding answer.
### Supported Tasks
IDK-MRC is mainly intended to train Machine Reading Comprehension or extractive QA models.
### Languages
Indonesian
## Dataset Structure
### Data Instances
### Data Fields
Each instance has several fields:
- 'context': context passage/paragraph as a string
- 'qas': list of questions related to the 'context'
- 'id': question ID as a string
- 'is_impossible': whether the question is unanswerable (impossible to answer) or not as a boolean
- 'question': question as a string
- 'answers': list of answers
- 'text': answer as a string
- 'answer_start': answer start index as an integer
### Data Splits
- 'train': 9,332 (5,042 answerable, 4,290 unanswerable)
- 'valid': 764 (382 answerable, 382 unanswerable)
- 'test': 844 (422 answerable, 422 unanswerable)
## Dataset Creation
### Curation Rationale
IDK-MRC dataset is built based on the existing paragraph and answerable questions (ans) in TyDiQA-GoldP (Clark et al., 2020). The new unanswerable questions are automatically generated using the combination of mT5 (Xue et al., 2021) and XLM-R (Conneau et al., 2020) models, which are then manually verified by human annotators (filtered ans and filtered unans). We also asked the annotators to manually write additional unanswerable questions as described in §3.3 (additional unans). Each paragraph in the final dataset will have a set of filtered ans, filtered unans, and additional unans questions.
### Annotations
#### Annotation process
In our dataset collection pipeline, the annotators are asked to validate the model-generated unanswerable questions and write new additional unanswerable questions.
#### Who are the annotators?
We recruit four annotators with 2+ years of experience in Indonesian NLP annotation using direct recruitment. All of them are Indonesian native speakers who reside in Indonesia (Java Island) and fall under the 18–34 age category. We set the payment to around $7.5 per hour. Given the annotators’ demographic, we ensure that the payment is above the minimum wage rate (as of December 2021). All annotators also have signed the consent form and agreed to participate in this project.
## Considerations for Using the Data
The paragraphs and answerable questions that we utilized to build IDK-MRC dataset are taken from Indonesian subset of TyDiQA-GoldP dataset (Clark et al., 2020), which originates from Wikipedia articles. Since those articles are written from a neutral point of view, the risk of harmful content is minimal. Also, all model-generated questions in our dataset have been validated by human annotators to eliminate the risk of harmful questions. During the manual question generation process, the annotators are also encouraged to avoid producing possibly offensive questions.
Even so, we argue that further assessment is needed before using our dataset and models in real-world applications. This measurement is especially required for the pre-trained language models used in our experiments, namely mT5 (Xue et al., 2021), IndoBERT (Wilie et al., 2020), mBERT (Devlin et al., 2019), and XLM-R (Conneau et al., 2020). These language models are mostly pre-trained on the common-crawl dataset, which may contain harmful biases or stereotypes.
## Additional Information
### Licensing Information
CC BY-SA 4.0
| [
"# Dataset Card for IDK-MRC",
"## Dataset Description\n\n- Repository: rifkiaputri/IDK-MRC\n- Paper: PDF\n- Point of Contact: rifkiaputri",
"### Dataset Summary\n\nI(n)dontKnow-MRC (IDK-MRC) is an Indonesian Machine Reading Comprehension dataset that covers answerable and unanswerable questions. Based on the combination of the existing answerable questions in TyDiQA, the new unanswerable questions in IDK-MRC are generated using a question generation model and human-written questions. Each paragraph in the dataset has a set of answerable and unanswerable questions with the corresponding answer.",
"### Supported Tasks\n\nIDK-MRC is mainly intended to train Machine Reading Comprehension or extractive QA models.",
"### Languages\n\nIndonesian",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\nEach instance has several fields:\n\n- 'context': context passage/paragraph as a string\n- 'qas': list of questions related to the 'context'\n - 'id': question ID as a string\n - 'is_impossible': whether the question is unanswerable (impossible to answer) or not as a boolean\n - 'question': question as a string\n - 'answers': list of answers\n - 'text': answer as a string\n - 'answer_start': answer start index as an integer",
"### Data Splits\n\n- 'train': 9,332 (5,042 answerable, 4,290 unanswerable)\n- 'valid': 764 (382 answerable, 382 unanswerable)\n- 'test': 844 (422 answerable, 422 unanswerable)",
"## Dataset Creation",
"### Curation Rationale\n\nIDK-MRC dataset is built based on the existing paragraph and answerable questions (ans) in TyDiQA-GoldP (Clark et al., 2020). The new unanswerable questions are automatically generated using the combination of mT5 (Xue et al., 2021) and XLM-R (Conneau et al., 2020) models, which are then manually verified by human annotators (filtered ans and filtered unans). We also asked the annotators to manually write additional unanswerable questions as described in §3.3 (additional unans). Each paragraph in the final dataset will have a set of filtered ans, filtered unans, and additional unans questions.",
"### Annotations",
"#### Annotation process\n\nIn our dataset collection pipeline, the annotators are asked to validate the model-generated unanswerable questions and write new additional unanswerable questions.",
"#### Who are the annotators?\n\nWe recruit four annotators with 2+ years of experience in Indonesian NLP annotation using direct recruitment. All of them are Indonesian native speakers who reside in Indonesia (Java Island) and fall under the 18–34 age category. We set the payment to around $7.5 per hour. Given the annotators’ demographic, we ensure that the payment is above the minimum wage rate (as of December 2021). All annotators also have signed the consent form and agreed to participate in this project.",
"## Considerations for Using the Data\n\nThe paragraphs and answerable questions that we utilized to build IDK-MRC dataset are taken from Indonesian subset of TyDiQA-GoldP dataset (Clark et al., 2020), which originates from Wikipedia articles. Since those articles are written from a neutral point of view, the risk of harmful content is minimal. Also, all model-generated questions in our dataset have been validated by human annotators to eliminate the risk of harmful questions. During the manual question generation process, the annotators are also encouraged to avoid producing possibly offensive questions.\n\nEven so, we argue that further assessment is needed before using our dataset and models in real-world applications. This measurement is especially required for the pre-trained language models used in our experiments, namely mT5 (Xue et al., 2021), IndoBERT (Wilie et al., 2020), mBERT (Devlin et al., 2019), and XLM-R (Conneau et al., 2020). These language models are mostly pre-trained on the common-crawl dataset, which may contain harmful biases or stereotypes.",
"## Additional Information",
"### Licensing Information\n\nCC BY-SA 4.0"
] | [
"TAGS\n#task_categories-question-answering #task_ids-extractive-qa #annotations_creators-machine-generated #annotations_creators-expert-generated #language_creators-machine-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-extended|tydiqa #language-Indonesian #license-cc-by-4.0 #region-us \n",
"# Dataset Card for IDK-MRC",
"## Dataset Description\n\n- Repository: rifkiaputri/IDK-MRC\n- Paper: PDF\n- Point of Contact: rifkiaputri",
"### Dataset Summary\n\nI(n)dontKnow-MRC (IDK-MRC) is an Indonesian Machine Reading Comprehension dataset that covers answerable and unanswerable questions. Based on the combination of the existing answerable questions in TyDiQA, the new unanswerable questions in IDK-MRC are generated using a question generation model and human-written questions. Each paragraph in the dataset has a set of answerable and unanswerable questions with the corresponding answer.",
"### Supported Tasks\n\nIDK-MRC is mainly intended to train Machine Reading Comprehension or extractive QA models.",
"### Languages\n\nIndonesian",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\nEach instance has several fields:\n\n- 'context': context passage/paragraph as a string\n- 'qas': list of questions related to the 'context'\n - 'id': question ID as a string\n - 'is_impossible': whether the question is unanswerable (impossible to answer) or not as a boolean\n - 'question': question as a string\n - 'answers': list of answers\n - 'text': answer as a string\n - 'answer_start': answer start index as an integer",
"### Data Splits\n\n- 'train': 9,332 (5,042 answerable, 4,290 unanswerable)\n- 'valid': 764 (382 answerable, 382 unanswerable)\n- 'test': 844 (422 answerable, 422 unanswerable)",
"## Dataset Creation",
"### Curation Rationale\n\nIDK-MRC dataset is built based on the existing paragraph and answerable questions (ans) in TyDiQA-GoldP (Clark et al., 2020). The new unanswerable questions are automatically generated using the combination of mT5 (Xue et al., 2021) and XLM-R (Conneau et al., 2020) models, which are then manually verified by human annotators (filtered ans and filtered unans). We also asked the annotators to manually write additional unanswerable questions as described in §3.3 (additional unans). Each paragraph in the final dataset will have a set of filtered ans, filtered unans, and additional unans questions.",
"### Annotations",
"#### Annotation process\n\nIn our dataset collection pipeline, the annotators are asked to validate the model-generated unanswerable questions and write new additional unanswerable questions.",
"#### Who are the annotators?\n\nWe recruit four annotators with 2+ years of experience in Indonesian NLP annotation using direct recruitment. All of them are Indonesian native speakers who reside in Indonesia (Java Island) and fall under the 18–34 age category. We set the payment to around $7.5 per hour. Given the annotators’ demographic, we ensure that the payment is above the minimum wage rate (as of December 2021). All annotators also have signed the consent form and agreed to participate in this project.",
"## Considerations for Using the Data\n\nThe paragraphs and answerable questions that we utilized to build IDK-MRC dataset are taken from Indonesian subset of TyDiQA-GoldP dataset (Clark et al., 2020), which originates from Wikipedia articles. Since those articles are written from a neutral point of view, the risk of harmful content is minimal. Also, all model-generated questions in our dataset have been validated by human annotators to eliminate the risk of harmful questions. During the manual question generation process, the annotators are also encouraged to avoid producing possibly offensive questions.\n\nEven so, we argue that further assessment is needed before using our dataset and models in real-world applications. This measurement is especially required for the pre-trained language models used in our experiments, namely mT5 (Xue et al., 2021), IndoBERT (Wilie et al., 2020), mBERT (Devlin et al., 2019), and XLM-R (Conneau et al., 2020). These language models are mostly pre-trained on the common-crawl dataset, which may contain harmful biases or stereotypes.",
"## Additional Information",
"### Licensing Information\n\nCC BY-SA 4.0"
] |
22d6519d033e3b433daadb49b7fd258dc8c9d3e3 | # Dataset Card for "bayc"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | sdotmac/bayc | [
"region:us"
] | 2022-11-11T06:08:14+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 381887922.0, "num_examples": 10000}], "download_size": 378097332, "dataset_size": 381887922.0}} | 2022-11-12T05:19:59+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "bayc"
More Information needed | [
"# Dataset Card for \"bayc\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"bayc\"\n\nMore Information needed"
] |
94c29b56186e07b267d8ae2610e94e7c8642048d |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
| bgstud/libri-whisper-raw | [
"task_categories:token-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:mit",
"region:us"
] | 2022-11-11T10:03:50+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["token-classification-other-acronym-identification"], "paperswithcode_id": "acronym-identification", "pretty_name": "Acronym Identification Dataset", "train-eval-index": [{"col_mapping": {"labels": "tags", "tokens": "tokens"}, "config": "default", "splits": {"eval_split": "test"}, "task": "token-classification", "task_id": "entity_extraction"}]} | 2022-11-11T10:12:24+00:00 | [] | [
"en"
] | TAGS
#task_categories-token-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-mit #region-us
|
# Dataset Card for [Dataset Name]
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @github-username for adding this dataset.
| [
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] | [
"TAGS\n#task_categories-token-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-mit #region-us \n",
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] |
f807452b98a80d2daf6b7e84f4a8a55bec9b0d16 |
# Dataset Card for "lmqg/qag_tweetqa"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is the question & answer generation dataset based on the [tweet_qa](https://huggingface.co/datasets/tweet_qa). The test set of the original data is not publicly released, so we randomly sampled test questions from the training set.
### Supported Tasks and Leaderboards
* `question-answer-generation`: The dataset is assumed to be used to train a model for question & answer generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more detail).
### Languages
English (en)
## Dataset Structure
An example of 'train' looks as follows.
```
{
"paragraph": "I would hope that Phylicia Rashad would apologize now that @missjillscott has! You cannot discount 30 victims who come with similar stories.— JDWhitner (@JDWhitner) July 7, 2015",
"questions": [ "what should phylicia rashad do now?", "how many victims have come forward?" ],
"answers": [ "apologize", "30" ],
"questions_answers": "Q: what should phylicia rashad do now?, A: apologize Q: how many victims have come forward?, A: 30"
}
```
The data fields are the same among all splits.
- `questions`: a `list` of `string` features.
- `answers`: a `list` of `string` features.
- `paragraph`: a `string` feature.
- `questions_answers`: a `string` feature.
## Data Splits
|train|validation|test |
|----:|---------:|----:|
|4536 | 583| 583|
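As a sketch (assuming the flattened format shown in the example above; the helper name is made up), the `questions_answers` string can be reconstructed from the parallel `questions` and `answers` lists:

```python
# Hypothetical helper: rebuild the flattened "questions_answers" string from
# the parallel "questions" and "answers" lists, matching the example above.
def flatten_qa(questions, answers):
    return " ".join(f"Q: {q}, A: {a}" for q, a in zip(questions, answers))

questions = ["what should phylicia rashad do now?", "how many victims have come forward?"]
answers = ["apologize", "30"]
print(flatten_qa(questions, answers))
# Q: what should phylicia rashad do now?, A: apologize Q: how many victims have come forward?, A: 30
```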
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` | lmqg/qag_tweetqa | [
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:monolingual",
"size_categories:1k<n<10K",
"source_datasets:tweet_qa",
"language:en",
"license:cc-by-sa-4.0",
"question-generation",
"arxiv:2210.03992",
"region:us"
] | 2022-11-11T11:11:25+00:00 | {"language": "en", "license": "cc-by-sa-4.0", "multilinguality": "monolingual", "size_categories": "1k<n<10K", "source_datasets": "tweet_qa", "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "TweetQA for question generation", "tags": ["question-generation"]} | 2022-12-02T19:16:46+00:00 | [
"2210.03992"
] | [
"en"
] | TAGS
#task_categories-text-generation #task_ids-language-modeling #multilinguality-monolingual #size_categories-1k<n<10K #source_datasets-tweet_qa #language-English #license-cc-by-sa-4.0 #question-generation #arxiv-2210.03992 #region-us
| Dataset Card for "lmqg/qag\_tweetqa"
====================================
Dataset Description
-------------------
* Repository: URL
* Paper: URL
* Point of Contact: Asahi Ushio
### Dataset Summary
This is the question & answer generation dataset based on the tweet\_qa. The test set of the original data is not publicly released, so we randomly sampled test questions from the training set.
### Supported Tasks and Leaderboards
* 'question-answer-generation': The dataset is assumed to be used to train a model for question & answer generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more detail).
### Languages
English (en)
Dataset Structure
-----------------
An example of 'train' looks as follows.
The data fields are the same among all splits.
* 'questions': a 'list' of 'string' features.
* 'answers': a 'list' of 'string' features.
* 'paragraph': a 'string' feature.
* 'questions\_answers': a 'string' feature.
Data Splits
-----------
| [
"### Dataset Summary\n\n\nThis is the question & answer generation dataset based on the tweet\\_qa. The test set of the original data is not publicly released, so we randomly sampled test questions from the training set.",
"### Supported Tasks and Leaderboards\n\n\n* 'question-answer-generation': The dataset is assumed to be used to train a model for question & answer generation.\nSuccess on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more detail).",
"### Languages\n\n\nEnglish (en)\n\n\nDataset Structure\n-----------------\n\n\nAn example of 'train' looks as follows.\n\n\nThe data fields are the same among all splits.\n\n\n* 'questions': a 'list' of 'string' features.\n* 'answers': a 'list' of 'string' features.\n* 'paragraph': a 'string' feature.\n* 'questions\\_answers': a 'string' feature.\n\n\nData Splits\n-----------"
] | [
"TAGS\n#task_categories-text-generation #task_ids-language-modeling #multilinguality-monolingual #size_categories-1k<n<10K #source_datasets-tweet_qa #language-English #license-cc-by-sa-4.0 #question-generation #arxiv-2210.03992 #region-us \n",
"### Dataset Summary\n\n\nThis is the question & answer generation dataset based on the tweet\\_qa. The test set of the original data is not publicly released, so we randomly sampled test questions from the training set.",
"### Supported Tasks and Leaderboards\n\n\n* 'question-answer-generation': The dataset is assumed to be used to train a model for question & answer generation.\nSuccess on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more detail).",
"### Languages\n\n\nEnglish (en)\n\n\nDataset Structure\n-----------------\n\n\nAn example of 'train' looks as follows.\n\n\nThe data fields are the same among all splits.\n\n\n* 'questions': a 'list' of 'string' features.\n* 'answers': a 'list' of 'string' features.\n* 'paragraph': a 'string' feature.\n* 'questions\\_answers': a 'string' feature.\n\n\nData Splits\n-----------"
] |
d1f2a4184134247fa0fbd5db8d7324ef8792c6f8 |
# Dataset Card for "lmqg/qag_squad"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is the question & answer generation dataset based on the SQuAD.
### Supported Tasks and Leaderboards
* `question-answer-generation`: The dataset is assumed to be used to train a model for question & answer generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more detail).
### Languages
English (en)
## Dataset Structure
An example of 'train' looks as follows.
```
{
"paragraph": "\"4 Minutes\" was released as the album's lead single and peaked at number three on the Billboard Hot 100. It was Madonna's 37th top-ten hit on the chart—it pushed Madonna past Elvis Presley as the artist with the most top-ten hits. In the UK she retained her record for the most number-one singles for a female artist; \"4 Minutes\" becoming her thirteenth. At the 23rd Japan Gold Disc Awards, Madonna received her fifth Artist of the Year trophy from Recording Industry Association of Japan, the most for any artist. To further promote the album, Madonna embarked on the Sticky & Sweet Tour; her first major venture with Live Nation. With a gross of $280 million, it became the highest-grossing tour by a solo artist then, surpassing the previous record Madonna set with the Confessions Tour; it was later surpassed by Roger Waters' The Wall Live. It was extended to the next year, adding new European dates, and after it ended, the total gross was $408 million.",
"questions": [
"Which single was released as the album's lead single?",
"Madonna surpassed which artist with the most top-ten hits?",
"4 minutes became Madonna's which number one single in the UK?",
"What is the name of the first tour with Live Nation?",
"How much did Stick and Sweet Tour grossed?"
],
"answers": [
"4 Minutes",
"Elvis Presley",
"thirteenth",
"Sticky & Sweet Tour",
"$280 million,"
],
"questions_answers": "question: Which single was released as the album's lead single?, answer: 4 Minutes | question: Madonna surpassed which artist with the most top-ten hits?, answer: Elvis Presley | question: 4 minutes became Madonna's which number one single in the UK?, answer: thirteenth | question: What is the name of the first tour with Live Nation?, answer: Sticky & Sweet Tour | question: How much did Stick and Sweet Tour grossed?, answer: $280 million,"
}
```
The data fields are the same among all splits.
- `questions`: a `list` of `string` features.
- `answers`: a `list` of `string` features.
- `paragraph`: a `string` feature.
- `questions_answers`: a `string` feature.
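The example instance above shows how `questions_answers` flattens every pair into a single string, with `question: …, answer: …` segments joined by `|`. As an illustration only — the helper name and the exact delimiter handling are assumptions read off that one example, not part of any official tooling for this dataset — such a string could be split back into pairs like this:

```python
# Sketch: parse a flattened "questions_answers" string back into
# (question, answer) pairs. Assumes the " | " pair separator and the
# "question: ..., answer: ..." layout seen in the example instance above.
def parse_qa_pairs(serialized: str) -> list[tuple[str, str]]:
    pairs = []
    for chunk in serialized.split(" | "):
        # Split only on the first ", answer: " so commas inside the
        # answer text (e.g. "$280 million,") are preserved.
        question_part, answer = chunk.split(", answer: ", 1)
        question = question_part.removeprefix("question: ")
        pairs.append((question, answer))
    return pairs
```

Round-tripping the example above yields five pairs, with the trailing comma of `$280 million,` kept intact.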
## Data Splits
|train|validation|test |
|----:|---------:|----:|
|16462| 2067 | 2429|
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` | lmqg/qag_squad | [
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:monolingual",
"size_categories:1k<n<10K",
"source_datasets:lmqg/qg_squad",
"language:en",
"license:cc-by-sa-4.0",
"question-generation",
"arxiv:2210.03992",
"region:us"
] | 2022-11-11T14:12:30+00:00 | {"language": "en", "license": "cc-by-sa-4.0", "multilinguality": "monolingual", "size_categories": "1k<n<10K", "source_datasets": "lmqg/qg_squad", "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "SQuAD for question generation", "tags": ["question-generation"]} | 2022-12-18T07:39:03+00:00 | [
"2210.03992"
] | [
"en"
] | TAGS
#task_categories-text-generation #task_ids-language-modeling #multilinguality-monolingual #size_categories-1k<n<10K #source_datasets-lmqg/qg_squad #language-English #license-cc-by-sa-4.0 #question-generation #arxiv-2210.03992 #region-us
| Dataset Card for "lmqg/qag\_squad"
==================================
Dataset Description
-------------------
* Repository: URL
* Paper: URL
* Point of Contact: Asahi Ushio
### Dataset Summary
This is the question & answer generation dataset based on the SQuAD.
### Supported Tasks and Leaderboards
* 'question-answer-generation': The dataset is assumed to be used to train a model for question & answer generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more detail).
### Languages
English (en)
Dataset Structure
-----------------
An example of 'train' looks as follows.
The data fields are the same among all splits.
* 'questions': a 'list' of 'string' features.
* 'answers': a 'list' of 'string' features.
* 'paragraph': a 'string' feature.
* 'questions\_answers': a 'string' feature.
Data Splits
-----------
| [
"### Dataset Summary\n\n\nThis is the question & answer generation dataset based on the SQuAD.",
"### Supported Tasks and Leaderboards\n\n\n* 'question-answer-generation': The dataset is assumed to be used to train a model for question & answer generation.\nSuccess on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more detail).",
"### Languages\n\n\nEnglish (en)\n\n\nDataset Structure\n-----------------\n\n\nAn example of 'train' looks as follows.\n\n\nThe data fields are the same among all splits.\n\n\n* 'questions': a 'list' of 'string' features.\n* 'answers': a 'list' of 'string' features.\n* 'paragraph': a 'string' feature.\n* 'questions\\_answers': a 'string' feature.\n\n\nData Splits\n-----------"
] | [
"TAGS\n#task_categories-text-generation #task_ids-language-modeling #multilinguality-monolingual #size_categories-1k<n<10K #source_datasets-lmqg/qg_squad #language-English #license-cc-by-sa-4.0 #question-generation #arxiv-2210.03992 #region-us \n",
"### Dataset Summary\n\n\nThis is the question & answer generation dataset based on the SQuAD.",
"### Supported Tasks and Leaderboards\n\n\n* 'question-answer-generation': The dataset is assumed to be used to train a model for question & answer generation.\nSuccess on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more detail).",
"### Languages\n\n\nEnglish (en)\n\n\nDataset Structure\n-----------------\n\n\nAn example of 'train' looks as follows.\n\n\nThe data fields are the same among all splits.\n\n\n* 'questions': a 'list' of 'string' features.\n* 'answers': a 'list' of 'string' features.\n* 'paragraph': a 'string' feature.\n* 'questions\\_answers': a 'string' feature.\n\n\nData Splits\n-----------"
] |
a50653db5bd9fcf01aa163087e2974ba1388f8da | # Dataset Card for "lolita-dress-CHIN"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | zhangxinran/lolita-dress-CHIN | [
"region:us"
] | 2022-11-11T17:36:11+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 704987635.0, "num_examples": 993}], "download_size": 701091143, "dataset_size": 704987635.0}} | 2022-11-11T22:34:20+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lolita-dress-CHIN"
More Information needed | [
"# Dataset Card for \"lolita-dress-CHIN\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lolita-dress-CHIN\"\n\nMore Information needed"
] |
b4c532908d2439912f9d6d9e0d9d14f8cad898f9 | # Dataset Card for "FewShotSGD"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | vidhikatkoria/FewShotSGD | [
"region:us"
] | 2022-11-11T19:11:43+00:00 | {"dataset_info": {"features": [{"name": "domain", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "act", "dtype": "int64"}, {"name": "speaker", "dtype": "int64"}], "splits": [{"name": "test", "num_bytes": 7583282, "num_examples": 15537}, {"name": "train", "num_bytes": 46458280, "num_examples": 83391}, {"name": "validation", "num_bytes": 6337305, "num_examples": 11960}], "download_size": 6517762, "dataset_size": 60378867}} | 2022-11-11T19:12:01+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "FewShotSGD"
More Information needed | [
"# Dataset Card for \"FewShotSGD\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"FewShotSGD\"\n\nMore Information needed"
] |
ac8a3794d6fb430352b38c459b1d77f49f154b60 | # Dataset Card for "olm-wikipedia-20221101-kl-language"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Tristan/olm-wikipedia-20221101-kl-language | [
"region:us"
] | 2022-11-11T19:32:29+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 311164, "num_examples": 297}], "download_size": 191198, "dataset_size": 311164}} | 2022-11-11T19:32:33+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "olm-wikipedia-20221101-kl-language"
More Information needed | [
"# Dataset Card for \"olm-wikipedia-20221101-kl-language\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"olm-wikipedia-20221101-kl-language\"\n\nMore Information needed"
] |
c755ed348bdb9c918a0ea6a316d9e0f92ec60de6 | # AutoTrain Dataset for project: tweet-es-sent
## Dataset Description
This dataset has been automatically processed by AutoTrain for project tweet-es-sent.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"target": 1,
"text": "1sola vuelta! arauz presidente! 1sola vuelta! todo 1 1sola la 1 es ecdor! por ti!1 por 1 los tuyos!1 por nosotros juntos1 mas de 45 d apoyo popular el 7 se vota 1por la vida por el futuro,por la esperanza guayaquil ec dor es 1"
},
{
"target": 1,
"text": "excelente decisi\u00f3n , las mujeres son importantes y por esa raz\u00f3n, a productos de primera necesidad hay que quitarles el iva "
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"target": "ClassLabel(num_classes=3, names=['0', '1', '2'], id=None)",
"text": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 12400 |
| valid | 3685 |
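As a small sanity check on the table above — plain arithmetic on the reported counts, nothing dataset-specific — the split sizes imply a validation share of roughly 23%:

```python
# Sanity-check the reported split sizes for tweet-es-sent:
# the validation split holds roughly 23% of the labelled tweets.
splits = {"train": 12400, "valid": 3685}
total = sum(splits.values())          # 16085 examples overall
valid_fraction = splits["valid"] / total
print(f"total={total}, validation share={valid_fraction:.1%}")
# -> total=16085, validation share=22.9%
```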
| erickdp/autotrain-data-tweet-es-sent | [
"task_categories:text-classification",
"region:us"
] | 2022-11-11T21:02:59+00:00 | {"task_categories": ["text-classification"]} | 2022-11-14T09:01:25+00:00 | [] | [] | TAGS
#task_categories-text-classification #region-us
| AutoTrain Dataset for project: tweet-es-sent
============================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project tweet-es-sent.
### Languages
The BCP-47 code for the dataset's language is unk.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follows:"
] | [
"TAGS\n#task_categories-text-classification #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follows:"
] |
427f6aed010aaa987762d262ce4204444343da4e | # Dataset Card for "SGD_Restaurants"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | vidhikatkoria/SGD_Restaurants | [
"region:us"
] | 2022-11-11T21:15:54+00:00 | {"dataset_info": {"features": [{"name": "domain", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "act", "dtype": "int64"}, {"name": "speaker", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2860131.272837388, "num_examples": 9906}, {"name": "test", "num_bytes": 163, "num_examples": 1}], "download_size": 1155851, "dataset_size": 2860294.272837388}} | 2023-03-21T20:51:19+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "SGD_Restaurants"
More Information needed | [
"# Dataset Card for \"SGD_Restaurants\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"SGD_Restaurants\"\n\nMore Information needed"
] |
32825c9d2ebb9e0e1955f2692361c2544f09f407 | # Dataset Card for "SGD_Media"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | vidhikatkoria/SGD_Media | [
"region:us"
] | 2022-11-11T21:16:07+00:00 | {"dataset_info": {"features": [{"name": "domain", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "act", "dtype": "int64"}, {"name": "speaker", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1404611.2159709618, "num_examples": 6060}, {"name": "test", "num_bytes": 330, "num_examples": 1}], "download_size": 529801, "dataset_size": 1404941.2159709618}} | 2023-03-21T20:51:44+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "SGD_Media"
More Information needed | [
"# Dataset Card for \"SGD_Media\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"SGD_Media\"\n\nMore Information needed"
] |