Column schema of this dump (for string columns the range is character length; for sequence columns it is element count):

| Column          | Type     | Min | Max   |
|-----------------|----------|-----|-------|
| sha             | string   | 40  | 40    |
| text            | string   | 1   | 13.4M |
| id              | string   | 2   | 117   |
| tags            | sequence | 1   | 7.91k |
| created_at      | string   | 25  | 25    |
| metadata        | string   | 2   | 875k  |
| last_modified   | string   | 25  | 25    |
| arxiv           | sequence | 0   | 25    |
| languages       | sequence | 0   | 7.91k |
| tags_str        | string   | 17  | 159k  |
| text_str        | string   | 1   | 447k  |
| text_lists      | sequence | 0   | 352   |
| processed_texts | sequence | 1   | 353   |
| tokens_length   | sequence | 1   | 353   |
| input_texts     | sequence | 1   | 40    |
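The schema above lists the dump's columns and their length ranges, but nothing in the dump shows how to read them. Below is a minimal sketch, assuming the dump is published as a Hugging Face dataset; the repo id `user/dataset-cards-dump` is a hypothetical placeholder, not the dump's real location.

```python
from datasets import load_dataset

# Hypothetical repo id -- substitute wherever this dump actually lives.
ds = load_dataset("user/dataset-cards-dump", split="train")

# Column names and types should line up with the schema table above.
print(ds.features)

# Each record pairs a 40-character commit sha with the full dataset-card
# markdown, plus derived views of that same card (text_str, text_lists,
# processed_texts, tokens_length, input_texts).
row = ds[0]
print(row["sha"])         # e.g. "5a870df7041aa538bee06dd4ece2dddd926c44a1"
print(row["id"])          # e.g. "Nexdata/Multi-race_7_Expressions_Recognition_Data"
print(row["tags"])        # e.g. ["region:us"]
print(row["text"][:200])  # start of the card body
```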
5a870df7041aa538bee06dd4ece2dddd926c44a1
# Dataset Card for Nexdata/Multi-race_7_Expressions_Recognition_Data

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://www.nexdata.ai/datasets/973?source=Huggingface
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

25,998-person multi-race 7-expressions recognition data. The data includes both males and females. The age distribution ranges from children to the elderly, with young and middle-aged people forming the majority. Seven images were collected for each person. The data diversity covers different facial postures, expressions, lighting conditions, and scenes. The data can be used for tasks such as facial expression recognition.

For more details, please refer to the link: https://www.nexdata.ai/datasets/973?source=Huggingface

### Supported Tasks and Leaderboards

face-detection, computer-vision: The dataset can be used to train a model for face detection.

### Languages

English

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing

### Citation Information

[More Information Needed]

### Contributions
Nexdata/Multi-race_7_Expressions_Recognition_Data
[ "region:us" ]
2022-06-27T07:43:50+00:00
{"YAML tags": [{"copy-paste the tags obtained with the tagging app": "https://github.com/huggingface/datasets-tagging"}]}
2024-02-04T10:06:01+00:00
[]
[]
921c5035748e32774042b7b4a9e4676af2c94295
# Dataset Card for Nexdata/50_Types_of_Dynamic_Gesture_Recognition_Data

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://www.nexdata.ai/datasets/972?source=Huggingface
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

558,870 videos of 50 types of dynamic gestures. The collection scenes of this dataset include indoor scenes and outdoor scenes (natural scenery, street views, squares, etc.). The data covers Chinese males and females, with an age distribution ranging from teenagers to seniors. The data diversity includes multiple scenes, 50 types of dynamic gestures, 5 photographic angles, multiple lighting conditions, and different photographic distances. This data can be used for dynamic gesture recognition in smart homes, audio equipment, and on-board systems.

For more details, please refer to the link: https://www.nexdata.ai/datasets/972?source=Huggingface

### Supported Tasks and Leaderboards

object-detection, computer-vision: The dataset can be used to train a model for object detection.

### Languages

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing

### Citation Information

[More Information Needed]

### Contributions
Nexdata/50_Types_of_Dynamic_Gesture_Recognition_Data
[ "region:us" ]
2022-06-27T07:45:20+00:00
{"YAML tags": [{"copy-paste the tags obtained with the tagging app": "https://github.com/huggingface/datasets-tagging"}]}
2023-08-31T01:41:14+00:00
[]
[]
2b05c42473e7f983ebbae7efbc1c446f2c754749
# Dataset Card for Nexdata/Multi-race_and_Multi-pose_Face_Images_Data

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://www.nexdata.ai/datasets/1016?source=Huggingface
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

23,110-person multi-race and multi-pose face image data. The data covers Asian, Caucasian, Black, brown, and Indian subjects. For each subject, 29 images were collected under different scenes and lighting conditions: 28 photos (multiple lighting conditions, poses, and scenes) plus 1 ID photo. This data can be used for face recognition related tasks.

For more details, please refer to the link: https://www.nexdata.ai/datasets/1016?source=Huggingface

### Supported Tasks and Leaderboards

face-detection, computer-vision: The dataset can be used to train a model for face detection.

### Languages

English

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing

### Citation Information

[More Information Needed]

### Contributions
Nexdata/Multi-race_and_Multi-pose_Face_Images_Data
[ "region:us" ]
2022-06-27T07:49:18+00:00
{"YAML tags": [{"copy-paste the tags obtained with the tagging app": "https://github.com/huggingface/datasets-tagging"}]}
2024-02-04T10:05:43+00:00
[]
[]
615fba6713eb0e6abd1cdc14d0fb4a714a5725a7
# Dataset Card for Nexdata/3D_Instance_Segmentation_and_22_Landmarks_Annotation_Data_of_Human_Body

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://www.nexdata.ai/datasets/1040?source=Huggingface
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

18,880 images of 466 people: 3D instance segmentation and 22-landmark annotation data of the human body. The dataset diversity includes multiple scenes, lighting conditions, ages, shooting angles, and poses. For annotation, we adopted instance segmentation annotations of the human body; 22 landmarks were also annotated on each human body. The dataset can be used for tasks such as human body instance segmentation and human behavior recognition.

For more details, please refer to the link: https://www.nexdata.ai/datasets/1040?source=Huggingface

### Supported Tasks and Leaderboards

instance-segmentation, computer-vision, image-segmentation: The dataset can be used to train a model for computer vision.

### Languages

English

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing

### Citation Information

[More Information Needed]

### Contributions
Nexdata/3D_Instance_Segmentation_and_22_Landmarks_Annotation_Data_of_Human_Body
[ "region:us" ]
2022-06-27T07:52:04+00:00
{"YAML tags": [{"copy-paste the tags obtained with the tagging app": "https://github.com/huggingface/datasets-tagging"}]}
2023-08-31T01:47:41+00:00
[]
[]
39c2168d68066762075fc8e1b89cda7ddb424294
# Dataset Card for Nexdata/Human_Facial_Skin_Defects_Data

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://www.nexdata.ai/datasets/1052?source=Huggingface
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

5,105 images of 4,788 Chinese people: human facial skin defect data. The data includes the following five types of facial skin defects: acne, acne marks, stains, wrinkles, and dark circles. This data can be used for tasks such as skin defect detection.

For more details, please refer to the link: https://www.nexdata.ai/datasets/1052?source=Huggingface

### Supported Tasks and Leaderboards

face-detection, computer-vision: The dataset can be used to train a model for face detection.

### Languages

English

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing

### Citation Information

[More Information Needed]

### Contributions
Nexdata/Human_Facial_Skin_Defects_Data
[ "region:us" ]
2022-06-27T07:53:34+00:00
{"YAML tags": [{"copy-paste the tags obtained with the tagging app": "https://github.com/huggingface/datasets-tagging"}]}
2023-08-31T01:40:21+00:00
[]
[]
1c2e8ad09ad62fe92e9ce9dd9e0acd3b0617748b
# Dataset Card for Nexdata/Multi-class_Fashion_Item_Detection_Data ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.nexdata.ai/datasets/1057?source=Huggingface - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary 144,810 Images Multi-class Fashion Item Detection Data. This dataset includes 19,968 images of males and 124,842 images of females. The fashion items were divided into four groups by season (spring, summer, autumn and winter). In terms of annotation, rectangular bounding boxes were used to annotate the fashion items. The data can be used for tasks such as fashion item detection and fashion recommendation. For more details, please refer to the link: https://www.nexdata.ai/datasets/1057?source=Huggingface ### Supported Tasks and Leaderboards object-detection, computer-vision: The dataset can be used to train a model for object detection. ### Languages English ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing ### Citation Information [More Information Needed] ### Contributions
Nexdata/Multi-class_Fashion_Item_Detection_Data
[ "region:us" ]
2022-06-27T07:54:36+00:00
{"YAML tags": [{"copy-paste the tags obtained with the tagging app": "https://github.com/huggingface/datasets-tagging"}]}
2024-02-04T10:08:44+00:00
[]
[]
TAGS #region-us
# Dataset Card for Nexdata/Multi-class_Fashion_Item_Detection_Data ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary 144,810 Images Multi-class Fashion Item Detection Data. This dataset includes 19,968 images of males and 124,842 images of females. The fashion items were divided into four groups by season (spring, summer, autumn and winter). In terms of annotation, rectangular bounding boxes were used to annotate the fashion items. The data can be used for tasks such as fashion item detection and fashion recommendation. For more details, please refer to the link: URL ### Supported Tasks and Leaderboards object-detection, computer-vision: The dataset can be used to train a model for object detection. ### Languages English ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information Commercial License: URL ### Contributions
[ "# Dataset Card for Nexdata/Multi-class_Fashion_Item_Detection_Data", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\n144,810 Images Multi-class Fashion Item Detection Data. In this dataset, 19,968 images of male and 124,842 images of female were included. The Fashion Items were divided into 4 parts based on the season (spring, autumn, summer and winter). In terms of annotation, rectangular bounding boxes were adopted to annotate fashion items. The data can be used for tasks such as fashion items detection, fashion recommendation and other tasks.\n \nFor more details, please refer to the link: URL", "### Supported Tasks and Leaderboards\n\nobject-detection, computer-vision: The dataset can be used to train a model for object detection.", "### Languages\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nCommerical License: URL", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Nexdata/Multi-class_Fashion_Item_Detection_Data", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\n144,810 Images Multi-class Fashion Item Detection Data. In this dataset, 19,968 images of male and 124,842 images of female were included. The Fashion Items were divided into 4 parts based on the season (spring, autumn, summer and winter). In terms of annotation, rectangular bounding boxes were adopted to annotate fashion items. The data can be used for tasks such as fashion items detection, fashion recommendation and other tasks.\n \nFor more details, please refer to the link: URL", "### Supported Tasks and Leaderboards\n\nobject-detection, computer-vision: The dataset can be used to train a model for object detection.", "### Languages\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nCommerical License: URL", "### Contributions" ]
[ 6, 25, 125, 25, 121, 34, 5, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 12, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Nexdata/Multi-class_Fashion_Item_Detection_Data## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:### Dataset Summary\n\n144,810 Images Multi-class Fashion Item Detection Data. In this dataset, 19,968 images of male and 124,842 images of female were included. The Fashion Items were divided into 4 parts based on the season (spring, autumn, summer and winter). In terms of annotation, rectangular bounding boxes were adopted to annotate fashion items. The data can be used for tasks such as fashion items detection, fashion recommendation and other tasks.\n \nFor more details, please refer to the link: URL### Supported Tasks and Leaderboards\n\nobject-detection, computer-vision: The dataset can be used to train a model for object detection.### Languages\nEnglish## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information\n\nCommerical License: URL### Contributions" ]
7c9dfd67fd3763e0f98a5b8e10bd2ff239c55f50
# Dataset Card for Nexdata/3D_Face_Recognition_Images_Data ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.nexdata.ai/datasets/1093?source=Huggingface - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary 5,199 People – 3D Face Recognition Images Data. The data was collected in indoor scenes. The dataset includes males and females. The age distribution ranges from juveniles to the elderly, with young and middle-aged people making up the majority. The collection devices include iPhone X and iPhone XR. The data diversity includes multiple facial postures, multiple lighting conditions and multiple indoor scenes. This data can be used for tasks such as 3D face recognition. For more details, please refer to the link: https://www.nexdata.ai/datasets/1093?source=Huggingface ### Supported Tasks and Leaderboards face-detection, computer-vision: The dataset can be used to train a model for face detection. ### Languages English ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing ### Citation Information [More Information Needed] ### Contributions
Nexdata/3D_Face_Recognition_Images_Data
[ "region:us" ]
2022-06-27T07:55:51+00:00
{"YAML tags": [{"copy-paste the tags obtained with the tagging app": "https://github.com/huggingface/datasets-tagging"}]}
2023-08-31T01:45:01+00:00
[]
[]
TAGS #region-us
# Dataset Card for Nexdata/3D_Face_Recognition_Images_Data ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary 5,199 People – 3D Face Recognition Images Data. The data was collected in indoor scenes. The dataset includes males and females. The age distribution ranges from juveniles to the elderly, with young and middle-aged people making up the majority. The collection devices include iPhone X and iPhone XR. The data diversity includes multiple facial postures, multiple lighting conditions and multiple indoor scenes. This data can be used for tasks such as 3D face recognition. For more details, please refer to the link: URL ### Supported Tasks and Leaderboards face-detection, computer-vision: The dataset can be used to train a model for face detection. ### Languages English ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information Commercial License: URL ### Contributions
[ "# Dataset Card for Nexdata/3D_Face_Recognition_Images_Data", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\n5,199 People – 3D Face Recognition Images Data. The collection scene is indoor scene. The dataset includes males and females. The age distribution ranges from juvenile to the elderly, the young people and the middle aged are the majorities. The device includes iPhone X, iPhone XR. The data diversity includes multiple facial postures, multiple light conditions, multiple indoor scenes. This data can be used for tasks such as 3D face recognition.\n \nFor more details, please refer to the link: URL", "### Supported Tasks and Leaderboards\n\nface-detection, computer-vision: The dataset can be used to train a model for face detection.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nCommerical License: URL", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Nexdata/3D_Face_Recognition_Images_Data", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\n5,199 People – 3D Face Recognition Images Data. The collection scene is indoor scene. The dataset includes males and females. The age distribution ranges from juvenile to the elderly, the young people and the middle aged are the majorities. The device includes iPhone X, iPhone XR. The data diversity includes multiple facial postures, multiple light conditions, multiple indoor scenes. This data can be used for tasks such as 3D face recognition.\n \nFor more details, please refer to the link: URL", "### Supported Tasks and Leaderboards\n\nface-detection, computer-vision: The dataset can be used to train a model for face detection.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nCommerical License: URL", "### Contributions" ]
[ 6, 22, 125, 25, 121, 34, 5, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 12, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Nexdata/3D_Face_Recognition_Images_Data## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:### Dataset Summary\n\n5,199 People – 3D Face Recognition Images Data. The collection scene is indoor scene. The dataset includes males and females. The age distribution ranges from juvenile to the elderly, the young people and the middle aged are the majorities. The device includes iPhone X, iPhone XR. The data diversity includes multiple facial postures, multiple light conditions, multiple indoor scenes. This data can be used for tasks such as 3D face recognition.\n \nFor more details, please refer to the link: URL### Supported Tasks and Leaderboards\n\nface-detection, computer-vision: The dataset can be used to train a model for face detection.### Languages\n\nEnglish## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information\n\nCommerical License: URL### Contributions" ]
6ffac0292a9d7286a02527b86033c902aa503524
# Dataset Card for Nexdata/3D_Facial_Expressions_Recognition_Data ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.nexdata.ai/datasets/1097?source=Huggingface - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary 4,458 People - 3D Facial Expressions Recognition Data. The collection scenes include indoor and outdoor scenes. The dataset includes males and females. The age distribution ranges from juveniles to the elderly, with young and middle-aged people making up the majority. The collection devices include iPhone X and iPhone XR. The data diversity includes different expressions, different ages, different races and different collection scenes. This data can be used for tasks such as 3D facial expression recognition. For more details, please refer to the link: https://www.nexdata.ai/datasets/1097?source=Huggingface ### Supported Tasks and Leaderboards face-detection, computer-vision: The dataset can be used to train a model for face detection. ### Languages English ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing ### Citation Information [More Information Needed] ### Contributions
Nexdata/3D_Facial_Expressions_Recognition_Data
[ "region:us" ]
2022-06-27T07:57:14+00:00
{"YAML tags": [{"copy-paste the tags obtained with the tagging app": "https://github.com/huggingface/datasets-tagging"}]}
2024-02-04T10:03:55+00:00
[]
[]
TAGS #region-us
# Dataset Card for Nexdata/3D_Facial_Expressions_Recognition_Data ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary 4,458 People - 3D Facial Expressions Recognition Data. The collection scenes include indoor and outdoor scenes. The dataset includes males and females. The age distribution ranges from juveniles to the elderly, with young and middle-aged people making up the majority. The collection devices include iPhone X and iPhone XR. The data diversity includes different expressions, different ages, different races and different collection scenes. This data can be used for tasks such as 3D facial expression recognition. For more details, please refer to the link: URL ### Supported Tasks and Leaderboards face-detection, computer-vision: The dataset can be used to train a model for face detection. ### Languages English ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information Commercial License: URL ### Contributions
[ "# Dataset Card for Nexdata/3D_Facial_Expressions_Recognition_Data", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\n4,458 People - 3D Facial Expressions Recognition Data. The collection scenes include indoor scenes and outdoor scenes. The dataset includes males and females. The age distribution ranges from juvenile to the elderly, the young people and the middle aged are the majorities. The device includes iPhone X, iPhone XR. The data diversity includes different expressions, different ages, different races, different collecting scenes. This data can be used for tasks such as 3D facial expression recognition.\n \nFor more details, please refer to the link: URL", "### Supported Tasks and Leaderboards\n\nface-detection, computer-vision: The dataset can be used to train a model for face detection.", "### Languages\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nCommerical License: URL", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Nexdata/3D_Facial_Expressions_Recognition_Data", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\n4,458 People - 3D Facial Expressions Recognition Data. The collection scenes include indoor scenes and outdoor scenes. The dataset includes males and females. The age distribution ranges from juvenile to the elderly, the young people and the middle aged are the majorities. The device includes iPhone X, iPhone XR. The data diversity includes different expressions, different ages, different races, different collecting scenes. This data can be used for tasks such as 3D facial expression recognition.\n \nFor more details, please refer to the link: URL", "### Supported Tasks and Leaderboards\n\nface-detection, computer-vision: The dataset can be used to train a model for face detection.", "### Languages\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nCommerical License: URL", "### Contributions" ]
[ 6, 22, 125, 25, 132, 34, 5, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 12, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Nexdata/3D_Facial_Expressions_Recognition_Data## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:### Dataset Summary\n\n4,458 People - 3D Facial Expressions Recognition Data. The collection scenes include indoor scenes and outdoor scenes. The dataset includes males and females. The age distribution ranges from juvenile to the elderly, the young people and the middle aged are the majorities. The device includes iPhone X, iPhone XR. The data diversity includes different expressions, different ages, different races, different collecting scenes. This data can be used for tasks such as 3D facial expression recognition.\n \nFor more details, please refer to the link: URL### Supported Tasks and Leaderboards\n\nface-detection, computer-vision: The dataset can be used to train a model for face detection.### Languages\nEnglish## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information\n\nCommerical License: URL### Contributions" ]
ba12d0e180e418eecd77bcbc0912303f93e29650
# Dataset Card for Nexdata/3D_Face_Anti_Spoofing_Data ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.nexdata.ai/datasets/1172?source=Huggingface - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary 40 People - 3D Living_Face & Anti_Spoofing Data. The collection scenes include indoor and outdoor scenes. The dataset includes males and females. The age distribution ranges from juveniles to the elderly, with young and middle-aged people making up the majority. The collection devices include iPhone X and iPhone XR. The data diversity includes various expressions, facial postures, anti-spoofing samples, multiple lighting conditions and multiple scenes. This data can be used for tasks such as 3D face recognition and 3D Living_Face & Anti_Spoofing. For more details, please refer to the link: https://www.nexdata.ai/datasets/1172?source=Huggingface ### Supported Tasks and Leaderboards face-detection, computer-vision: The dataset can be used to train a model for face detection. ### Languages English ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing ### Citation Information [More Information Needed] ### Contributions
Nexdata/3D_Face_Anti_Spoofing_Data
[ "region:us" ]
2022-06-27T07:58:47+00:00
{"YAML tags": [{"copy-paste the tags obtained with the tagging app": "https://github.com/huggingface/datasets-tagging"}]}
2023-08-31T01:19:54+00:00
[]
[]
TAGS #region-us
# Dataset Card for Nexdata/3D_Face_Anti_Spoofing_Data ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary 40 People - 3D Living_Face & Anti_Spoofing Data. The collection scenes include indoor and outdoor scenes. The dataset includes males and females. The age distribution ranges from juveniles to the elderly, with young and middle-aged people making up the majority. The collection devices include iPhone X and iPhone XR. The data diversity includes various expressions, facial postures, anti-spoofing samples, multiple lighting conditions and multiple scenes. This data can be used for tasks such as 3D face recognition and 3D Living_Face & Anti_Spoofing. For more details, please refer to the link: URL ### Supported Tasks and Leaderboards face-detection, computer-vision: The dataset can be used to train a model for face detection. ### Languages English ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information Commercial License: URL ### Contributions
[ "# Dataset Card for Nexdata/3D_Face_Anti_Spoofing_Data", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\n40 People - 3D Living_Face & Anti_Spoofing Data. The collection scenes include indoor and outdoor scenes. The dataset includes males and females. The age distribution ranges from juvenile to the elderly, the young people and the middle aged are the majorities. The device includes iPhone X, iPhone XR. The data diversity includes various expressions, facial postures, anti-spoofing samples, multiple light conditions, multiple scenes. This data can be used for tasks such as 3D face recognition, 3D Living_Face & Anti_Spoofing.\n \nFor more details, please refer to the link: URL", "### Supported Tasks and Leaderboards\n\nface-detection, computer-vision: The dataset can be used to train a model for face detection.", "### Languages\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nCommerical License: URL", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Nexdata/3D_Face_Anti_Spoofing_Data", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\n40 People - 3D Living_Face & Anti_Spoofing Data. The collection scenes include indoor and outdoor scenes. The dataset includes males and females. The age distribution ranges from juvenile to the elderly, the young people and the middle aged are the majorities. The device includes iPhone X, iPhone XR. The data diversity includes various expressions, facial postures, anti-spoofing samples, multiple light conditions, multiple scenes. This data can be used for tasks such as 3D face recognition, 3D Living_Face & Anti_Spoofing.\n \nFor more details, please refer to the link: URL", "### Supported Tasks and Leaderboards\n\nface-detection, computer-vision: The dataset can be used to train a model for face detection.", "### Languages\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nCommerical License: URL", "### Contributions" ]
[ 6, 22, 125, 25, 152, 34, 5, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 12, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Nexdata/3D_Face_Anti_Spoofing_Data## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:### Dataset Summary\n\n40 People - 3D Living_Face & Anti_Spoofing Data. The collection scenes include indoor and outdoor scenes. The dataset includes males and females. The age distribution ranges from juvenile to the elderly, the young people and the middle aged are the majorities. The device includes iPhone X, iPhone XR. The data diversity includes various expressions, facial postures, anti-spoofing samples, multiple light conditions, multiple scenes. This data can be used for tasks such as 3D face recognition, 3D Living_Face & Anti_Spoofing.\n \nFor more details, please refer to the link: URL### Supported Tasks and Leaderboards\n\nface-detection, computer-vision: The dataset can be used to train a model for face detection.### Languages\nEnglish## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information\n\nCommerical License: URL" ]
ee51f8ddea4cf23da01f6c4fb1d3e27bc655439b
# Dataset Card for Nexdata/Human_Pose_Recognition_Data ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.nexdata.ai/datasets/1132?source=Huggingface - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary 10,000 People - Human Pose Recognition Data. This dataset includes indoor and outdoor scenes and covers males and females. The age distribution ranges from teenagers to the elderly, with middle-aged and young people making up the majority. The data diversity includes different shooting heights, different ages, different lighting conditions, different collection environments, clothing for different seasons and multiple human poses. For each subject, the labels of gender, race, age, collection environment and clothing were annotated. The data can be used for human pose recognition and other tasks. For more details, please refer to the link: https://www.nexdata.ai/datasets/1132?source=Huggingface ### Supported Tasks and Leaderboards object-detection, computer-vision: The dataset can be used to train a model for object detection. ### Languages English ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing ### Citation Information [More Information Needed] ### Contributions
Nexdata/Human_Pose_Recognition_Data
[ "region:us" ]
2022-06-27T08:00:05+00:00
{"YAML tags": [{"copy-paste the tags obtained with the tagging app": "https://github.com/huggingface/datasets-tagging"}]}
2023-08-31T01:20:18+00:00
[]
[]
TAGS #region-us
# Dataset Card for Nexdata/Human_Pose_Recognition_Data ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary 10,000 People - Human Pose Recognition Data. This dataset includes indoor and outdoor scenes and covers males and females. The age distribution ranges from teenagers to the elderly, with middle-aged and young people making up the majority. The data diversity includes different shooting heights, different ages, different lighting conditions, different collection environments, clothing for different seasons and multiple human poses. For each subject, the labels of gender, race, age, collection environment and clothing were annotated. The data can be used for human pose recognition and other tasks. For more details, please refer to the link: URL ### Supported Tasks and Leaderboards object-detection, computer-vision: The dataset can be used to train a model for object detection. ### Languages English ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information Commercial License: URL ### Contributions
[ "# Dataset Card for Nexdata/Human_Pose_Recognition_Data", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\n10,000 People - Human Pose Recognition Data. This dataset includes indoor and outdoor scenes.This dataset covers males and females. Age distribution ranges from teenager to the elderly, the middle-aged and young people are the majorities. The data diversity includes different shooting heights, different ages, different light conditions, different collecting environment, clothes in different seasons, multiple human poses. For each subject, the labels of gender, race, age, collecting environment and clothes were annotated. The data can be used for human pose recognition and other tasks.\n \nFor more details, please refer to the link: URL", "### Supported Tasks and Leaderboards\n\nobject-detection, computer-vision: The dataset can be used to train a model for object detection.", "### Languages\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nCommerical License: URL", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Nexdata/Human_Pose_Recognition_Data", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\n10,000 People - Human Pose Recognition Data. This dataset includes indoor and outdoor scenes.This dataset covers males and females. Age distribution ranges from teenager to the elderly, the middle-aged and young people are the majorities. The data diversity includes different shooting heights, different ages, different light conditions, different collecting environment, clothes in different seasons, multiple human poses. For each subject, the labels of gender, race, age, collecting environment and clothes were annotated. The data can be used for human pose recognition and other tasks.\n \nFor more details, please refer to the link: URL", "### Supported Tasks and Leaderboards\n\nobject-detection, computer-vision: The dataset can be used to train a model for object detection.", "### Languages\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nCommerical License: URL", "### Contributions" ]
[ 6, 19, 125, 25, 146, 34, 5, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 12, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Nexdata/Human_Pose_Recognition_Data## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:### Dataset Summary\n\n10,000 People - Human Pose Recognition Data. This dataset includes indoor and outdoor scenes.This dataset covers males and females. Age distribution ranges from teenager to the elderly, the middle-aged and young people are the majorities. The data diversity includes different shooting heights, different ages, different light conditions, different collecting environment, clothes in different seasons, multiple human poses. For each subject, the labels of gender, race, age, collecting environment and clothes were annotated. The data can be used for human pose recognition and other tasks.\n \nFor more details, please refer to the link: URL### Supported Tasks and Leaderboards\n\nobject-detection, computer-vision: The dataset can be used to train a model for object detection.### Languages\nEnglish## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information\n\nCommerical License: URL### Contributions" ]
1500d43a2644de5019ea2b760be94f6fc742186f
# Dataset Card for Nexdata/Re-ID_Data_in_Surveillance_Scenes ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.nexdata.ai/datasets/1129?source=Huggingface - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary 10,000 People - Re-ID Data in Surveillance Scenes. The data includes indoor scenes and outdoor scenes. The data includes males and females, and the age distribution is from children to the elderly. The data diversity includes different age groups, different time periods, different shooting angles, different human body orientations and postures, clothing for different seasons. For annotation, rectangular bounding boxes and 15 attributes of the human body were annotated. The data can be used for re-id and other tasks. For more details, please refer to the link: https://www.nexdata.ai/datasets/1129?source=Huggingface ### Supported Tasks and Leaderboards person re-identification, computer-vision: The dataset can be used to train a model for person re-identification. ### Languages ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing ### Citation Information [More Information Needed] ### Contributions
Nexdata/Re-ID_Data_in_Surveillance_Scenes
[ "region:us" ]
2022-06-27T08:01:22+00:00
{"YAML tags": [{"copy-paste the tags obtained with the tagging app": "https://github.com/huggingface/datasets-tagging"}]}
2023-08-31T01:20:46+00:00
[]
[]
TAGS #region-us
# Dataset Card for Nexdata/Re-ID_Data_in_Surveillance_Scenes ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary 10,000 People - Re-ID Data in Surveillance Scenes. The data includes indoor scenes and outdoor scenes. The data includes males and females, and the age distribution is from children to the elderly. The data diversity includes different age groups, different time periods, different shooting angles, different human body orientations and postures, clothing for different seasons. For annotation, the rectangular bounding boxes and 15 attributes of human body were annotated. The data can be used for re-id and other tasks. For more details, please refer to the link: URL ### Supported Tasks and Leaderboards face-detection, computer-vision: The dataset can be used to train a model for face detection. ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information Commerical License: URL ### Contributions
[ "# Dataset Card for Nexdata/Re-ID_Data_in_Surveillance_Scenes", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\n10,000 People - Re-ID Data in Surveillance Scenes. The data includes indoor scenes and outdoor scenes. The data includes males and females, and the age distribution is from children to the elderly. The data diversity includes different age groups, different time periods, different shooting angles, different human body orientations and postures, clothing for different seasons. For annotation, the rectangular bounding boxes and 15 attributes of human body were annotated. The data can be used for re-id and other tasks.\n \nFor more details, please refer to the link: URL", "### Supported Tasks and Leaderboards\nface-detection, computer-vision: The dataset can be used to train a model for face detection.", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nCommerical License: URL", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Nexdata/Re-ID_Data_in_Surveillance_Scenes", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\n10,000 People - Re-ID Data in Surveillance Scenes. The data includes indoor scenes and outdoor scenes. The data includes males and females, and the age distribution is from children to the elderly. The data diversity includes different age groups, different time periods, different shooting angles, different human body orientations and postures, clothing for different seasons. For annotation, the rectangular bounding boxes and 15 attributes of human body were annotated. The data can be used for re-id and other tasks.\n \nFor more details, please refer to the link: URL", "### Supported Tasks and Leaderboards\nface-detection, computer-vision: The dataset can be used to train a model for face detection.", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nCommerical License: URL", "### Contributions" ]
[ 6, 24, 125, 25, 138, 34, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 12, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Nexdata/Re-ID_Data_in_Surveillance_Scenes## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:### Dataset Summary\n\n10,000 People - Re-ID Data in Surveillance Scenes. The data includes indoor scenes and outdoor scenes. The data includes males and females, and the age distribution is from children to the elderly. The data diversity includes different age groups, different time periods, different shooting angles, different human body orientations and postures, clothing for different seasons. For annotation, the rectangular bounding boxes and 15 attributes of human body were annotated. The data can be used for re-id and other tasks.\n \nFor more details, please refer to the link: URL### Supported Tasks and Leaderboards\nface-detection, computer-vision: The dataset can be used to train a model for face detection.### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information\n\nCommerical License: URL### Contributions" ]
d64358c584127a2e0478e7b5d2ac8c88a5257337
# Dataset Card for Nexdata/Re-ID_Data_in_Real_Surveillance_Scenes ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.nexdata.ai/datasets/1160?source=Huggingface - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary 10,000 People - Re-ID Data in Real Surveillance Scenes. The data includes indoor scenes and outdoor scenes. The data includes males and females, and the age distribution is from children to the elderly. The data diversity includes different age groups, different time periods, different shooting angles, different human body orientations and postures, clothing for different seasons. For annotation, rectangular bounding boxes and 15 attributes of the human body were annotated. This data can be used for re-id and other tasks. For more details, please refer to the link: https://www.nexdata.ai/datasets/1160?source=Huggingface ### Supported Tasks and Leaderboards person re-identification, computer-vision: The dataset can be used to train a model for person re-identification. ### Languages English ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Commercial License: https://drive.google.com/file/d/1saDCPm74D4UWfBL17VbkTsZLGfpOQj1J/view?usp=sharing ### Citation Information [More Information Needed] ### Contributions
Nexdata/Re-ID_Data_in_Real_Surveillance_Scenes
[ "region:us" ]
2022-06-27T08:02:41+00:00
{"YAML tags": [{"copy-paste the tags obtained with the tagging app": "https://github.com/huggingface/datasets-tagging"}]}
2023-08-31T01:19:21+00:00
[]
[]
TAGS #region-us
# Dataset Card for Nexdata/Re-ID_Data_in_Real_Surveillance_Scenes ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary 10,000 People - Re-ID Data in Real Surveillance Scenes. The data includes indoor scenes and outdoor scenes. The data includes males and females, and the age distribution is from children to the elderly. The data diversity includes different age groups, different time periods, different shooting angles, different human body orientations and postures, clothing for different seasons. For annotation, the rectangular bounding boxes and 15 attributes of human body were annotated. This data can be used for re-id and other tasks. For more details, please refer to the link: URL ### Supported Tasks and Leaderboards face-detection, computer-vision: The dataset can be used to train a model for face detection. ### Languages English ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information Commerical License: URL ### Contributions
[ "# Dataset Card for Nexdata/Re-ID_Data_in_Real_Surveillance_Scenes", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\n10,000 People - Re-ID Data in Real Surveillance Scenes. The data includes indoor scenes and outdoor scenes. The data includes males and females, and the age distribution is from children to the elderly. The data diversity includes different age groups, different time periods, different shooting angles, different human body orientations and postures, clothing for different seasons. For annotation, the rectangular bounding boxes and 15 attributes of human body were annotated. This data can be used for re-id and other tasks.\n \nFor more details, please refer to the link: URL", "### Supported Tasks and Leaderboards\nface-detection, computer-vision: The dataset can be used to train a model for face detection.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nCommerical License: URL", "### Contributions" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Nexdata/Re-ID_Data_in_Real_Surveillance_Scenes", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\n10,000 People - Re-ID Data in Real Surveillance Scenes. The data includes indoor scenes and outdoor scenes. The data includes males and females, and the age distribution is from children to the elderly. The data diversity includes different age groups, different time periods, different shooting angles, different human body orientations and postures, clothing for different seasons. For annotation, the rectangular bounding boxes and 15 attributes of human body were annotated. This data can be used for re-id and other tasks.\n \nFor more details, please refer to the link: URL", "### Supported Tasks and Leaderboards\nface-detection, computer-vision: The dataset can be used to train a model for face detection.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nCommerical License: URL", "### Contributions" ]
[ 6, 26, 125, 25, 139, 34, 5, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 12, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Nexdata/Re-ID_Data_in_Real_Surveillance_Scenes## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:### Dataset Summary\n\n10,000 People - Re-ID Data in Real Surveillance Scenes. The data includes indoor scenes and outdoor scenes. The data includes males and females, and the age distribution is from children to the elderly. The data diversity includes different age groups, different time periods, different shooting angles, different human body orientations and postures, clothing for different seasons. For annotation, the rectangular bounding boxes and 15 attributes of human body were annotated. This data can be used for re-id and other tasks.\n \nFor more details, please refer to the link: URL### Supported Tasks and Leaderboards\nface-detection, computer-vision: The dataset can be used to train a model for face detection.### Languages\n\nEnglish## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information\n\nCommerical License: URL### Contributions" ]
2c95cdfe58d678f87ab94dd93c5a2b61335d37fe
# Dataset Card for CA-ZH Wikipedia datasets ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [Needs More Information] - **Repository:** [Needs More Information] - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** [[email protected]](mailto:[email protected]) ### Dataset Summary The CA-ZH Parallel Corpus is a Catalan-Chinese dataset of mutual translations automatically crawled from Wikipedia. Two separate corpora are included, namely CA-ZH 1.05 Wikipedia and CA-ZH 1.10 Wikipedia; the latter has better general quality than the former. The dataset was created to support Catalan NLP tasks, e.g., Machine Translation. ### Supported Tasks and Leaderboards The dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score. The dataset can be used to finetune a large-scale multilingual MT system such as m2m-100. ### Languages The texts in the dataset are in Catalan and Chinese. ## Dataset Structure ### Data Instances A typical data point comprises a pair of translations in Catalan and Chinese. An example from the CA-ZH Parallel Corpus looks as follows: ``` { "ca": "1591è Batalló Separat d'Artilleria autorpopulsada", "zh": "第1591自走砲营" } ``` ### Data Fields - "ca": Text in Catalan. - "zh": Text in Chinese. ### Data Splits The dataset contains a single split: `train`. ## Dataset Creation ### Curation Rationale The CA-ZH Parallel Corpus was built to provide more language data for MT tasks dedicated to low-resource languages. The dataset was built by gathering texts on the same topic in Catalan and Chinese from Wikipedia. ### Source Data #### Initial Data Collection and Normalization The data was obtained by automatic crawling; a quality filter was applied to improve the data quality. The original Chinese data contained a mix of Traditional Chinese and Simplified Chinese; a simplification process was conducted to guarantee unification. #### Who are the source language producers? All the texts in this dataset come from Wikipedia. ### Annotations The dataset is unannotated. #### Annotation process [N/A] #### Who are the annotators? [N/A] ### Personal and Sensitive Information No anonymisation process was performed. ## Considerations for Using the Data ### Social Impact of Dataset The purpose of this dataset is to help develop Machine Translation tasks for low-resource languages such as Catalan. ### Discussion of Biases We are aware that since the data comes from unreliable web pages and non-curated texts, some biases may be present in the dataset. Nonetheless, we have not applied any steps to reduce their impact. ### Other Known Limitations Wikipedia provides data of a more general domain. Application of this dataset in more specific domains such as biomedical, legal, etc. would be of limited use. ## Additional Information ### Dataset Curators Carlos Escolano, Chenuye Zhou and Zixuan Liu, Barcelona Supercomputing Center (cescolano3 at gmail dot com) This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina). ### Licensing Information [Creative Commons Attribution Share Alike 4.0 International](https://creativecommons.org/licenses/by-sa/4.0/). ### Citation Information ``` @mastersthesis{MasterThesisChenuyeZhou, author = "Chenuye Zhou", title = "Building a Catalan-Chinese parallel corpus for use in MT", school = "Universitat Pompeu Fabra", year = 2022, address = "Barcelona", url = "https://repositori.upf.edu/handle/10230/54140" } @mastersthesis{MasterThesisZixuanLiu, author = "Zixuan Liu", title = "Improving Chinese-Catalan Machine Translation with Wikipedia Parallel", school = "Universitat Pompeu Fabra", year = 2022, address = "Barcelona", url = "https://repositori.upf.edu/handle/10230/54142" } ```
projecte-aina/ca_zh_wikipedia
[ "task_categories:translation", "annotations_creators:no-annotation", "language_creators:machine-generated", "multilinguality:translation", "size_categories:10K<n<100K", "source_datasets:original", "language:ca", "language:zh", "language:multilingual", "license:cc-by-4.0", "region:us" ]
2022-06-27T08:03:00+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["machine-generated"], "language": ["ca", "zh", "multilingual"], "license": ["cc-by-4.0"], "multilinguality": ["translation"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "pretty_name": "CA-ZH Wikipedia Parallel Corpus"}
2023-01-09T07:56:07+00:00
[]
[ "ca", "zh", "multilingual" ]
TAGS #task_categories-translation #annotations_creators-no-annotation #language_creators-machine-generated #multilinguality-translation #size_categories-10K<n<100K #source_datasets-original #language-Catalan #language-Chinese #language-multilingual #license-cc-by-4.0 #region-us
# Dataset Card for CA-ZH Wikipedia datasets ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: - Repository: - Paper: - Leaderboard: - Point of Contact: cescolano3@URL ### Dataset Summary The CA-ZH Parallel Corpus is a Catalan-Chinese dataset of mutual translations automatically crawled from Wikipedia. Two separate corpora are included, namely CA-ZH 1.05 Wikipedia and CA-ZH 1.10 Wikipedia, the latter has better general quality than the former. The dataset was created to support Catalan NLP tasks, e.g., Machine Translation. ### Supported Tasks and Leaderboards The dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score. The dataset can be used to finetune a large-scale multilingual MT system such as m2m-100. ### Languages The texts in the dataset are in Catalan and Chinese. ## Dataset Structure ### Data Instances A typical data point comprises a pair of translations in Catalan and Chinese. An example from the Ca-Zh Parallel Corpus looks as follows: ### Data Fields - "ca": Text in Catalan. - "zh": Text in Chinese. ### Data Splits The dataset contains a single split: 'train'. ## Dataset Creation ### Curation Rationale The Ca-Zh Parallel Corpus was built to provide more language data for MT tasks dedicated to low-resource languages. The dataset was built by gathering texts on the same topic in Catalan and Chinese from Wikipedia. ### Source Data #### Initial Data Collection and Normalization The data was obtained by automatic crawling, a quality filter was applied to improve the data quality. The original Chinese data was mixed into Traditional Chinese and Simplified Chinese, a simplification process was conducted in order to guarantee the unification. #### Who are the source language producers? All the texts in this dataset come from the Wikipedia. ### Annotations The dataset is unannotated. #### Annotation process [N/A] #### Who are the annotators? [N/A] ### Personal and Sensitive Information No anonymisation process was performed. ## Considerations for Using the Data ### Social Impact of Dataset The purpose of this dataset is to help develop Machines Translation tasks for low-resource languages such as Catalan. ### Discussion of Biases We are aware that since the data comes from unreliable web pages and non-curated texts, some biases may be present in the dataset. Nonetheless, we have not applied any steps to reduce their impact. ### Other Known Limitations Wikipedia provides data of a more general domain. Application of this dataset in more specific domains such as biomedical, legal etc. would be of limited use. ## Additional Information ### Dataset Curators Carlos Escolano, Chenuye Zhou and Zixuan Liu, Barcelona Supercomputing Center (cescolano3 at gmail dot com) This work was funded by the Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya within the framework of Projecte AINA. ### Licensing Information Creative Commons Attribution Share Alike 4.0 International.
[ "# Dataset Card for CA-ZH Wikipedia datasets", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact: cescolano3@URL", "### Dataset Summary\n\nThe CA-ZH Parallel Corpus is a Catalan-Chinese dataset of mutual translations automatically crawled from Wikipedia. Two separate corpora are included, namely CA-ZH 1.05 Wikipedia and CA-ZH 1.10 Wikipedia, the latter has better general quality than the former. The dataset was created to support Catalan NLP tasks, e.g., Machine Translation.", "### Supported Tasks and Leaderboards\n\nThe dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score. The dataset can be used to finetune a large-scale multilingual MT system such as m2m-100.", "### Languages\n\nThe texts in the dataset are in Catalan and Chinese.", "## Dataset Structure", "### Data Instances\n\nA typical data point comprises a pair of translations in Catalan and Chinese. An example from the Ca-Zh Parallel Corpus looks as follows:", "### Data Fields\n\n- \"ca\": Text in Catalan.\n- \"zh\": Text in Chinese.", "### Data Splits\n\nThe dataset contains a single split: 'train'.", "## Dataset Creation", "### Curation Rationale\n\nThe Ca-Zh Parallel Corpus was built to provide more language data for MT tasks dedicated to low-resource languages. The dataset was built by gathering texts on the same topic in Catalan and Chinese from Wikipedia.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe data was obtained by automatic crawling, a quality filter was applied to improve the data quality. The original Chinese data was mixed into Traditional Chinese and Simplified Chinese, a simplification process was conducted in order to guarantee the unification.", "#### Who are the source language producers?\n\nAll the texts in this dataset come from the Wikipedia.", "### Annotations\n\nThe dataset is unannotated.", "#### Annotation process\n\n[N/A]", "#### Who are the annotators?\n\n[N/A]", "### Personal and Sensitive Information\n\nNo anonymisation process was performed.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThe purpose of this dataset is to help develop Machines Translation tasks for low-resource languages such as Catalan.", "### Discussion of Biases\n\nWe are aware that since the data comes from unreliable web pages and non-curated texts, some biases may be present in the dataset. Nonetheless, we have not applied any steps to reduce their impact.", "### Other Known Limitations\n\nWikipedia provides data of a more general domain. Application of this dataset in more specific domains such as biomedical, legal etc. 
would be of limited use.", "## Additional Information", "### Dataset Curators\n\nCarlos Escolano, Chenuye Zhou and Zixuan Liu, Barcelona Supercomputing Center (cescolano3 at gmail dot com)\n\nThis work was funded by the Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya within the framework of Projecte AINA.", "### Licensing Information\n\nCreative Commons Attribution Share Alike 4.0 International." ]
[ "TAGS\n#task_categories-translation #annotations_creators-no-annotation #language_creators-machine-generated #multilinguality-translation #size_categories-10K<n<100K #source_datasets-original #language-Catalan #language-Chinese #language-multilingual #license-cc-by-4.0 #region-us \n", "# Dataset Card for CA-ZH Wikipedia datasets", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact: cescolano3@URL", "### Dataset Summary\n\nThe CA-ZH Parallel Corpus is a Catalan-Chinese dataset of mutual translations automatically crawled from Wikipedia. Two separate corpora are included, namely CA-ZH 1.05 Wikipedia and CA-ZH 1.10 Wikipedia, the latter has better general quality than the former. The dataset was created to support Catalan NLP tasks, e.g., Machine Translation.", "### Supported Tasks and Leaderboards\n\nThe dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score. The dataset can be used to finetune a large-scale multilingual MT system such as m2m-100.", "### Languages\n\nThe texts in the dataset are in Catalan and Chinese.", "## Dataset Structure", "### Data Instances\n\nA typical data point comprises a pair of translations in Catalan and Chinese. An example from the Ca-Zh Parallel Corpus looks as follows:", "### Data Fields\n\n- \"ca\": Text in Catalan.\n- \"zh\": Text in Chinese.", "### Data Splits\n\nThe dataset contains a single split: 'train'.", "## Dataset Creation", "### Curation Rationale\n\nThe Ca-Zh Parallel Corpus was built to provide more language data for MT tasks dedicated to low-resource languages. The dataset was built by gathering texts on the same topic in Catalan and Chinese from Wikipedia.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe data was obtained by automatic crawling, a quality filter was applied to improve the data quality. The original Chinese data was mixed into Traditional Chinese and Simplified Chinese, a simplification process was conducted in order to guarantee the unification.", "#### Who are the source language producers?\n\nAll the texts in this dataset come from the Wikipedia.", "### Annotations\n\nThe dataset is unannotated.", "#### Annotation process\n\n[N/A]", "#### Who are the annotators?\n\n[N/A]", "### Personal and Sensitive Information\n\nNo anonymisation process was performed.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThe purpose of this dataset is to help develop Machines Translation tasks for low-resource languages such as Catalan.", "### Discussion of Biases\n\nWe are aware that since the data comes from unreliable web pages and non-curated texts, some biases may be present in the dataset. Nonetheless, we have not applied any steps to reduce their impact.", "### Other Known Limitations\n\nWikipedia provides data of a more general domain. Application of this dataset in more specific domains such as biomedical, legal etc. 
would be of limited use.", "## Additional Information", "### Dataset Curators\n\nCarlos Escolano, Chenuye Zhou and Zixuan Liu, Barcelona Supercomputing Center (cescolano3 at gmail dot com)\n\nThis work was funded by the Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya within the framework of Projecte AINA.", "### Licensing Information\n\nCreative Commons Attribution Share Alike 4.0 International." ]
[ 91, 12, 112, 30, 84, 71, 17, 6, 37, 23, 19, 5, 56, 4, 63, 23, 14, 10, 14, 16, 8, 33, 59, 41, 5, 76, 15 ]
[ "passage: TAGS\n#task_categories-translation #annotations_creators-no-annotation #language_creators-machine-generated #multilinguality-translation #size_categories-10K<n<100K #source_datasets-original #language-Catalan #language-Chinese #language-multilingual #license-cc-by-4.0 #region-us \n# Dataset Card for CA-ZH Wikipedia datasets## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact: cescolano3@URL### Dataset Summary\n\nThe CA-ZH Parallel Corpus is a Catalan-Chinese dataset of mutual translations automatically crawled from Wikipedia. Two separate corpora are included, namely CA-ZH 1.05 Wikipedia and CA-ZH 1.10 Wikipedia, the latter has better general quality than the former. The dataset was created to support Catalan NLP tasks, e.g., Machine Translation.### Supported Tasks and Leaderboards\n\nThe dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score. The dataset can be used to finetune a large-scale multilingual MT system such as m2m-100.### Languages\n\nThe texts in the dataset are in Catalan and Chinese.## Dataset Structure### Data Instances\n\nA typical data point comprises a pair of translations in Catalan and Chinese. An example from the Ca-Zh Parallel Corpus looks as follows:### Data Fields\n\n- \"ca\": Text in Catalan.\n- \"zh\": Text in Chinese.### Data Splits\n\nThe dataset contains a single split: 'train'.## Dataset Creation" ]
2ccf6f2cb7b6d504ef59891456deb65f3431c8a3
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Binary Text Classification * Model: distilbert-base-uncased-finetuned-sst-2-english * Dataset: glue To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-6489fc46-7764973
[ "autotrain", "evaluation", "region:us" ]
2022-06-27T08:22:36+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "distilbert-base-uncased-finetuned-sst-2-english", "metrics": [], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}}
2022-06-27T08:23:03+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Binary Text Classification * Model: distilbert-base-uncased-finetuned-sst-2-english * Dataset: glue To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: distilbert-base-uncased-finetuned-sst-2-english\n* Dataset: glue\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: distilbert-base-uncased-finetuned-sst-2-english\n* Dataset: glue\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 88, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: distilbert-base-uncased-finetuned-sst-2-english\n* Dataset: glue\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
36852a8d4551d9fe26d0971a915480590d2f2a21
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Binary Text Classification * Model: winegarj/distilbert-base-uncased-finetuned-sst2 * Dataset: glue To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-6489fc46-7764981
[ "autotrain", "evaluation", "region:us" ]
2022-06-27T08:23:19+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "winegarj/distilbert-base-uncased-finetuned-sst2", "metrics": [], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}}
2022-06-27T08:23:45+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Binary Text Classification * Model: winegarj/distilbert-base-uncased-finetuned-sst2 * Dataset: glue To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: winegarj/distilbert-base-uncased-finetuned-sst2\n* Dataset: glue\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: winegarj/distilbert-base-uncased-finetuned-sst2\n* Dataset: glue\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 89, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: winegarj/distilbert-base-uncased-finetuned-sst2\n* Dataset: glue\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
878c8cb9d1166558036c1b3da41266e9fa599fe2
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Image Classification * Model: abhishek/convnext-tiny-finetuned-dogfood * Dataset: lewtun/dog_food To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-f9a2c1a2-7774983
[ "autotrain", "evaluation", "region:us" ]
2022-06-27T09:55:12+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lewtun/dog_food"], "eval_info": {"task": "image_multi_class_classification", "model": "abhishek/convnext-tiny-finetuned-dogfood", "metrics": ["matthews_correlation"], "dataset_name": "lewtun/dog_food", "dataset_config": "lewtun--dog_food", "dataset_split": "test", "col_mapping": {"image": "image", "target": "label"}}}
2022-06-27T09:55:43+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Image Classification * Model: abhishek/convnext-tiny-finetuned-dogfood * Dataset: lewtun/dog_food To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: abhishek/convnext-tiny-finetuned-dogfood\n* Dataset: lewtun/dog_food\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: abhishek/convnext-tiny-finetuned-dogfood\n* Dataset: lewtun/dog_food\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 91, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: abhishek/convnext-tiny-finetuned-dogfood\n* Dataset: lewtun/dog_food\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
63e660d3fa5ab454cc0ab1a253df2e0fccfac868
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Image Classification * Model: douwekiela/resnet-18-finetuned-dogfood * Dataset: lewtun/dog_food To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-f9a2c1a2-7774984
[ "autotrain", "evaluation", "region:us" ]
2022-06-27T09:55:19+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lewtun/dog_food"], "eval_info": {"task": "image_multi_class_classification", "model": "douwekiela/resnet-18-finetuned-dogfood", "metrics": ["matthews_correlation"], "dataset_name": "lewtun/dog_food", "dataset_config": "lewtun--dog_food", "dataset_split": "test", "col_mapping": {"image": "image", "target": "label"}}}
2022-06-27T09:56:00+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Image Classification * Model: douwekiela/resnet-18-finetuned-dogfood * Dataset: lewtun/dog_food To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: douwekiela/resnet-18-finetuned-dogfood\n* Dataset: lewtun/dog_food\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: douwekiela/resnet-18-finetuned-dogfood\n* Dataset: lewtun/dog_food\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 89, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: douwekiela/resnet-18-finetuned-dogfood\n* Dataset: lewtun/dog_food\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
8052418bce17b26fdb4b05523c8367c98ec4f330
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Image Classification * Model: sasha/swin-tiny-finetuned-dogfood * Dataset: lewtun/dog_food To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-f9a2c1a2-7774985
[ "autotrain", "evaluation", "region:us" ]
2022-06-27T09:55:23+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lewtun/dog_food"], "eval_info": {"task": "image_multi_class_classification", "model": "sasha/swin-tiny-finetuned-dogfood", "metrics": ["matthews_correlation"], "dataset_name": "lewtun/dog_food", "dataset_config": "lewtun--dog_food", "dataset_split": "test", "col_mapping": {"image": "image", "target": "label"}}}
2022-06-27T09:56:06+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Image Classification * Model: sasha/swin-tiny-finetuned-dogfood * Dataset: lewtun/dog_food To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: sasha/swin-tiny-finetuned-dogfood\n* Dataset: lewtun/dog_food\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: sasha/swin-tiny-finetuned-dogfood\n* Dataset: lewtun/dog_food\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 88, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: sasha/swin-tiny-finetuned-dogfood\n* Dataset: lewtun/dog_food\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
08e29bba8fed7e588839acf4022b2dcea84382d4
This dataset contains Twitter data collected from the account AK92501.
CShorten/Tweets-from-AK
[ "region:us" ]
2022-06-27T11:01:57+00:00
{}
2022-07-12T20:53:20+00:00
[]
[]
TAGS #region-us
This dataset contains Twitter information from AK92501
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
688e7d96e99cd5730a17a5c55b0964d27a486904
The Dataset contains images derived from the [Newspaper Navigator](https://news-navigator.labs.loc.gov/), a dataset of images drawn from the Library of Congress Chronicling America collection (chroniclingamerica.loc.gov/).

> [The Newspaper Navigator dataset](https://news-navigator.labs.loc.gov/) consists of extracted visual content for 16,358,041 historic newspaper pages in Chronicling America. The visual content was identified using an object detection model trained on annotations of World War 1-era Chronicling America pages, including annotations made by volunteers as part of the Beyond Words crowdsourcing project. source: https://news-navigator.labs.loc.gov/

One of these categories is 'advertisements'. This dataset contains a sample of these images with additional labels indicating if the advert is 'illustrated' or 'not illustrated'.

This dataset was created for use in a [Programming Historian tutorial](http://programminghistorian.github.io/ph-submissions/lessons/computer-vision-deep-learning-pt1). The primary aim of the data was to provide a realistic example dataset for teaching computer vision for working with digitised heritage material.

# Dataset Card for 19th Century United States Newspaper Advert images with 'illustrated' or 'non illustrated' labels

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:** [https://doi.org/10.5281/zenodo.5838410](https://doi.org/10.5281/zenodo.5838410)
- **Paper:** [https://doi.org/10.46430/phen0101](https://doi.org/10.46430/phen0101)
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

The Dataset contains images derived from the [Newspaper Navigator](https://news-navigator.labs.loc.gov/), a dataset of images drawn from the Library of Congress Chronicling America collection (chroniclingamerica.loc.gov/).

> [The Newspaper Navigator dataset](https://news-navigator.labs.loc.gov/) consists of extracted visual content for 16,358,041 historic newspaper pages in Chronicling America. The visual content was identified using an object detection model trained on annotations of World War 1-era Chronicling America pages, including annotations made by volunteers as part of the Beyond Words crowdsourcing project. source: https://news-navigator.labs.loc.gov/

One of these categories is 'advertisements'. This dataset contains a sample of these images with additional labels indicating if the advert is 'illustrated' or 'not illustrated'.

This dataset was created for use in a [Programming Historian tutorial](http://programminghistorian.github.io/ph-submissions/lessons/computer-vision-deep-learning-pt1). The primary aim of the data was to provide a realistic example dataset for teaching computer vision for working with digitised heritage material.

### Supported Tasks and Leaderboards

- `image-classification`: the primary purpose of this dataset is for classifying historic newspaper images identified as being 'advertisements' into 'illustrated' and 'not-illustrated' categories.

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

An example instance from this dataset:

```python
{'file': 'pst_fenske_ver02_data_sn84026497_00280776129_1880042101_0834_002_6_96.jpg',
 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=L size=388x395 at 0x7F9A72038950>,
 'label': 0,
 'pub_date': Timestamp('1880-04-21 00:00:00'),
 'page_seq_num': 834,
 'edition_seq_num': 1,
 'batch': 'pst_fenske_ver02',
 'lccn': 'sn84026497',
 'box': [0.649412214756012, 0.6045778393745422, 0.8002520799636841, 0.7152365446090698],
 'score': 0.9609346985816956,
 'ocr': "H. II. IIASLKT & SOXN, Dealers in General Merchandise In New Store Room nt HASLET'S COS ITERS, 'JTionoMtii, ln. .Tau'y 1st, 1?0.",
 'place_of_publication': 'Tionesta, Pa.',
 'geographic_coverage': "['Pennsylvania--Forest--Tionesta']",
 'name': 'The Forest Republican. [volume]',
 'publisher': 'Ed. W. Smiley',
 'url': 'https://news-navigator.labs.loc.gov/data/pst_fenske_ver02/data/sn84026497/00280776129/1880042101/0834/002_6_96.jpg',
 'page_url': 'https://chroniclingamerica.loc.gov/data/batches/pst_fenske_ver02/data/sn84026497/00280776129/1880042101/0834.jp2'}
```

### Data Fields

[More Information Needed]

### Data Splits

The dataset contains a single split.

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

A description of the annotation process is outlined in this [GitHub repository](https://github.com/Living-with-machines/nnanno).

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

```bibtex
@dataset{van_strien_daniel_2021_5838410,
  author    = {van Strien, Daniel},
  title     = {{19th Century United States Newspaper Advert images with 'illustrated' or 'non illustrated' labels}},
  month     = oct,
  year      = 2021,
  publisher = {Zenodo},
  version   = {0.0.1},
  doi       = {10.5281/zenodo.5838410},
  url       = {https://doi.org/10.5281/zenodo.5838410}
}
```

### Contributions

Thanks to [@davanstrien](https://github.com/davanstrien) for adding this dataset.
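A minimal sketch of loading the dataset and inspecting an instance like the one above, assuming the `datasets` library; the split name `train` is an assumption, since the card only states that there is a single split:

```python
# Load the advert images and look at the first example; the "image" and
# "label" columns follow the instance shown in the card above.
from datasets import load_dataset

ds = load_dataset("biglam/illustrated_ads", split="train")  # split name assumed
example = ds[0]
print(ds.features["label"])   # expected: the illustrated / not-illustrated classes
print(example["label"], example["image"].size)  # class id and PIL image dimensions
```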
biglam/illustrated_ads
[ "task_categories:image-classification", "task_ids:multi-class-image-classification", "annotations_creators:expert-generated", "size_categories:n<1K", "license:cc0-1.0", "lam", "historic newspapers", "region:us" ]
2022-06-27T13:14:29+00:00
{"annotations_creators": ["expert-generated"], "language_creators": [], "language": [], "license": ["cc0-1.0"], "multilinguality": [], "size_categories": ["n<1K"], "source_datasets": [], "task_categories": ["image-classification"], "task_ids": ["multi-class-image-classification"], "pretty_name": "19th Century United States Newspaper Advert images with 'illustrated' or 'non illustrated' labels", "tags": ["lam", "historic newspapers"]}
2023-01-18T20:38:15+00:00
[]
[]
TAGS #task_categories-image-classification #task_ids-multi-class-image-classification #annotations_creators-expert-generated #size_categories-n<1K #license-cc0-1.0 #lam #historic newspapers #region-us
The Dataset contains images derived from the Newspaper Navigator, a dataset of images drawn from the Library of Congress Chronicling America collection (URL > The Newspaper Navigator dataset consists of extracted visual content for 16,358,041 historic newspaper pages in Chronicling America. The visual content was identified using an object detection model trained on annotations of World War 1-era Chronicling America pages, including annotations made by volunteers as part of the Beyond Words crowdsourcing project. source: URL One of these categories is 'advertisements'. This dataset contains a sample of these images with additional labels indicating if the advert is 'illustrated' or 'not illustrated'. This dataset was created for use in a Programming Historian tutorial. The primary aim of the data was to provide a realistic example dataset for teaching computer vision for working with digitised heritage material. # Dataset Card for 19th Century United States Newspaper Advert images with 'illustrated' or 'non illustrated' labels ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: - Repository:URL - Paper:URL - Leaderboard: - Point of Contact: ### Dataset Summary The Dataset contains images derived from the Newspaper Navigator, a dataset of images drawn from the Library of Congress Chronicling America collection (URL > The Newspaper Navigator dataset consists of extracted visual content for 16,358,041 historic newspaper pages in Chronicling America. The visual content was identified using an object detection model trained on annotations of World War 1-era Chronicling America pages, including annotations made by volunteers as part of the Beyond Words crowdsourcing project. source: URL One of these categories is 'advertisements. This dataset contains a sample of these images with additional labels indicating if the advert is 'illustrated' or 'not illustrated'. This dataset was created for use in a Programming Historian tutorial. The primary aim of the data was to provide a realistic example dataset for teaching computer vision for working with digitised heritage material. ### Supported Tasks and Leaderboards - 'image-classification': the primary purpose of this dataset is for classifying historic newspaper images identified as being 'advertisements' into 'illustrated' and 'not-illustrated' categories. ### Languages ## Dataset Structure ### Data Instances An example instance from this dataset ### Data Fields ### Data Splits The dataset contains a single split. ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process A description of the annotation process is outlined in this GitHub repository #### Who are the annotators? 
### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @davanstrien for adding this dataset.
[ "# Dataset Card for 19th Century United States Newspaper Advert images with 'illustrated' or 'non illustrated' labels", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository:URL\n- Paper:URL\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nThe Dataset contains images derived from the Newspaper Navigator, a dataset of images drawn from the Library of Congress Chronicling America collection (URL \n\n> The Newspaper Navigator dataset consists of extracted visual content for 16,358,041 historic newspaper pages in Chronicling America. The visual content was identified using an object detection model trained on annotations of World War 1-era Chronicling America pages, including annotations made by volunteers as part of the Beyond Words crowdsourcing project. source: URL\n\nOne of these categories is 'advertisements. This dataset contains a sample of these images with additional labels indicating if the advert is 'illustrated' or 'not illustrated'.\n\nThis dataset was created for use in a Programming Historian tutorial. The primary aim of the data was to provide a realistic example dataset for teaching computer vision for working with digitised heritage material.", "### Supported Tasks and Leaderboards\n\n- 'image-classification': the primary purpose of this dataset is for classifying historic newspaper images identified as being 'advertisements' into 'illustrated' and 'not-illustrated' categories.", "### Languages", "## Dataset Structure", "### Data Instances\n\nAn example instance from this dataset", "### Data Fields", "### Data Splits\n\nThe dataset contains a single split.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process\n\nA description of the annotation process is outlined in this GitHub repository", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @davanstrien for adding this dataset." ]
[ "TAGS\n#task_categories-image-classification #task_ids-multi-class-image-classification #annotations_creators-expert-generated #size_categories-n<1K #license-cc0-1.0 #lam #historic newspapers #region-us \n", "# Dataset Card for 19th Century United States Newspaper Advert images with 'illustrated' or 'non illustrated' labels", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository:URL\n- Paper:URL\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nThe Dataset contains images derived from the Newspaper Navigator, a dataset of images drawn from the Library of Congress Chronicling America collection (URL \n\n> The Newspaper Navigator dataset consists of extracted visual content for 16,358,041 historic newspaper pages in Chronicling America. The visual content was identified using an object detection model trained on annotations of World War 1-era Chronicling America pages, including annotations made by volunteers as part of the Beyond Words crowdsourcing project. source: URL\n\nOne of these categories is 'advertisements. This dataset contains a sample of these images with additional labels indicating if the advert is 'illustrated' or 'not illustrated'.\n\nThis dataset was created for use in a Programming Historian tutorial. The primary aim of the data was to provide a realistic example dataset for teaching computer vision for working with digitised heritage material.", "### Supported Tasks and Leaderboards\n\n- 'image-classification': the primary purpose of this dataset is for classifying historic newspaper images identified as being 'advertisements' into 'illustrated' and 'not-illustrated' categories.", "### Languages", "## Dataset Structure", "### Data Instances\n\nAn example instance from this dataset", "### Data Fields", "### Data Splits\n\nThe dataset contains a single split.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process\n\nA description of the annotation process is outlined in this GitHub repository", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @davanstrien for adding this dataset." ]
[ 69, 28, 125, 26, 207, 58, 4, 6, 13, 5, 14, 5, 7, 4, 10, 10, 5, 24, 9, 8, 8, 7, 8, 7, 5, 6, 6, 18 ]
[ "passage: TAGS\n#task_categories-image-classification #task_ids-multi-class-image-classification #annotations_creators-expert-generated #size_categories-n<1K #license-cc0-1.0 #lam #historic newspapers #region-us \n# Dataset Card for 19th Century United States Newspaper Advert images with 'illustrated' or 'non illustrated' labels## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage:\n- Repository:URL\n- Paper:URL\n- Leaderboard:\n- Point of Contact:### Dataset Summary\n\nThe Dataset contains images derived from the Newspaper Navigator, a dataset of images drawn from the Library of Congress Chronicling America collection (URL \n\n> The Newspaper Navigator dataset consists of extracted visual content for 16,358,041 historic newspaper pages in Chronicling America. The visual content was identified using an object detection model trained on annotations of World War 1-era Chronicling America pages, including annotations made by volunteers as part of the Beyond Words crowdsourcing project. source: URL\n\nOne of these categories is 'advertisements. This dataset contains a sample of these images with additional labels indicating if the advert is 'illustrated' or 'not illustrated'.\n\nThis dataset was created for use in a Programming Historian tutorial. The primary aim of the data was to provide a realistic example dataset for teaching computer vision for working with digitised heritage material." ]
7c595bcd1b0f21cba1280c14a549dc28a64e8114
# Dataset Card for askD

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Repository:** https://github.com/ju-resplande/askD
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

The [ELI5 dataset](https://huggingface.co/datasets/eli5) adapted to the [Medical Questions (AskDocs)](https://www.reddit.com/r/AskDocs/) subreddit. We additionally translated it to Portuguese and used <a href="https://github.com/LasseRegin/medical-question-answer-data">external data from here</a>.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

The language data in AskD is English (BCP-47 en) and Brazilian Portuguese (BCP-47 pt-BR).

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

|    | Train | Valid | Test | External |
| -- | ----- | ----- | ---- | -------- |
| en | 24256 | 5198  | 5198 | 166804   |
| pt | 24256 | 5198  | 5198 | 166804   |

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

The dataset questions and answers span a period from January 2013 to December 2019.

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

```bibtex
@misc{Gomes2020,
  author       = {GOMES, J. R. S.},
  title        = {PLUE: Portuguese Language Understanding Evaluation},
  year         = {2020},
  publisher    = {GitHub},
  journal      = {GitHub repository},
  howpublished = {\url{https://github.com/ju-resplande/askD}},
  commit       = {42060c4402c460e174cbb75a868b429c554ba2b7}
}
```

### Contributions

Thanks to [@ju-resplande](https://github.com/ju-resplande) for adding this dataset.
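A minimal sketch of checking the split sizes against the table above, assuming the `datasets` library; the `en`/`pt` config names are an assumption based on the languages listed in the card and may differ in the actual repository:

```python
# Load one language configuration and print the number of rows per split,
# to compare against the Data Splits table in the card above.
from datasets import load_dataset

askd = load_dataset("ju-resplande/askD", "en")  # config name assumed
print({split: askd[split].num_rows for split in askd})
```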
ju-resplande/askD
[ "task_categories:text2text-generation", "task_ids:abstractive-qa", "task_ids:closed-domain-qa", "annotations_creators:no-annotation", "language_creators:found", "language_creators:machine-generated", "multilinguality:multilingual", "multilinguality:translation", "size_categories:100K<n<1M", "source_datasets:extended|eli5", "language:en", "language:pt", "license:lgpl-3.0", "region:us" ]
2022-06-27T14:26:30+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found", "machine-generated"], "language": ["en", "pt"], "license": ["lgpl-3.0"], "multilinguality": ["multilingual", "translation"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|eli5"], "task_categories": ["text2text-generation"], "task_ids": ["abstractive-qa", "closed-domain-qa"], "pretty_name": "AskDocs"}
2022-10-29T11:19:35+00:00
[]
[ "en", "pt" ]
TAGS #task_categories-text2text-generation #task_ids-abstractive-qa #task_ids-closed-domain-qa #annotations_creators-no-annotation #language_creators-found #language_creators-machine-generated #multilinguality-multilingual #multilinguality-translation #size_categories-100K<n<1M #source_datasets-extended|eli5 #language-English #language-Portuguese #license-lgpl-3.0 #region-us
Dataset Card for askD ===================== Table of Contents ----------------- * Table of Contents * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Repository: URL * Paper: * Leaderboard: * Point of Contact: ### Dataset Summary ELI5 dataset adapted on Medical Questions (AskDocs) subreddit. We additionally translated to Portuguese and used <a href="URL external data from here. ### Supported Tasks and Leaderboards ### Languages The language data in AskD is English (BCP-47 en) and Brazilian Portuguese (BCP-47 pt-BR) Dataset Structure ----------------- ### Data Instances ### Data Fields ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data The dataset questions and answers span a period from January 2013 to December 2019. #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @ju-resplande for adding this dataset.
[ "### Dataset Summary\n\n\nELI5 dataset adapted on Medical Questions (AskDocs) subreddit.\nWe additionally translated to Portuguese and used <a href=\"URL external data from here.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nThe language data in AskD is English (BCP-47 en) and Brazilian Portuguese (BCP-47 pt-BR)\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data\n\n\nThe dataset questions and answers span a period from January 2013 to December 2019.", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @ju-resplande for adding this dataset." ]
[ "TAGS\n#task_categories-text2text-generation #task_ids-abstractive-qa #task_ids-closed-domain-qa #annotations_creators-no-annotation #language_creators-found #language_creators-machine-generated #multilinguality-multilingual #multilinguality-translation #size_categories-100K<n<1M #source_datasets-extended|eli5 #language-English #language-Portuguese #license-lgpl-3.0 #region-us \n", "### Dataset Summary\n\n\nELI5 dataset adapted on Medical Questions (AskDocs) subreddit.\nWe additionally translated to Portuguese and used <a href=\"URL external data from here.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nThe language data in AskD is English (BCP-47 en) and Brazilian Portuguese (BCP-47 pt-BR)\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data\n\n\nThe dataset questions and answers span a period from January 2013 to December 2019.", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @ju-resplande for adding this dataset." ]
[ 133, 50, 10, 39, 6, 5, 11, 7, 20, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 19 ]
[ "passage: TAGS\n#task_categories-text2text-generation #task_ids-abstractive-qa #task_ids-closed-domain-qa #annotations_creators-no-annotation #language_creators-found #language_creators-machine-generated #multilinguality-multilingual #multilinguality-translation #size_categories-100K<n<1M #source_datasets-extended|eli5 #language-English #language-Portuguese #license-lgpl-3.0 #region-us \n### Dataset Summary\n\n\nELI5 dataset adapted on Medical Questions (AskDocs) subreddit.\nWe additionally translated to Portuguese and used <a href=\"URL external data from here.### Supported Tasks and Leaderboards### Languages\n\n\nThe language data in AskD is English (BCP-47 en) and Brazilian Portuguese (BCP-47 pt-BR)\n\n\nDataset Structure\n-----------------### Data Instances### Data Fields### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data\n\n\nThe dataset questions and answers span a period from January 2013 to December 2019.#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators### Licensing Information### Contributions\n\n\nThanks to @ju-resplande for adding this dataset." ]
1deb256b25446474684c28662d748709b552aa14
Side-by-side images of Dragon Ball scenes: on the left, a grayscale outline of the scene; on the right, a colored version of the same scene. The data was taken from downloaded Dragon Ball episodes and preprocessed using OpenCV to remove color and extract the outlines of the drawings. The processed outline and the original color frame were then concatenated side by side.
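A minimal sketch of the preprocessing described above. The exact steps are an assumption: the card only says OpenCV was used to remove color and extract outlines before concatenation, so the edge detector and its thresholds here (Canny, 100/200) are illustrative choices:

```python
# Turn a saved color frame into an outline and pair it with the original,
# outline on the left and color on the right, as described in the card.
import cv2

frame = cv2.imread("frame.png")                      # a colored Dragon Ball frame (path assumed)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)       # remove color
edges = cv2.Canny(gray, 100, 200)                    # extract the outlines (thresholds assumed)
edges_bgr = cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR)  # back to 3 channels so shapes match
pair = cv2.hconcat([edges_bgr, frame])               # outline left, color right
cv2.imwrite("pair.png", pair)
```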
ShohamWeiss/Dragon_Ball_Colorization
[ "license:apache-2.0", "region:us" ]
2022-06-27T19:21:21+00:00
{"license": "apache-2.0"}
2022-06-27T19:25:56+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
Side-by-side images of Dragon Ball scenes. On the left: A grayscale outline of the scene. On the right: A colored version of the same scene. The data was taken from downloaded Dragon Ball episodes and preprocessed using OpenCV to remove color and take the outlines of the drawings. Then the pre-processed and post-processed images were concatenated side by side.
[]
[ "TAGS\n#license-apache-2.0 #region-us \n" ]
[ 14 ]
[ "passage: TAGS\n#license-apache-2.0 #region-us \n" ]
1d40de094ece5650f7ce90d55b1711742f8c5c0b
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Token Classification
* Model: zhiguoxu/xlm-roberta-base-finetuned-token-clasify
* Dataset: xtreme

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-ba18bf28-7804997
[ "autotrain", "evaluation", "region:us" ]
2022-06-27T19:31:01+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xtreme"], "eval_info": {"task": "entity_extraction", "model": "zhiguoxu/xlm-roberta-base-finetuned-token-clasify", "metrics": [], "dataset_name": "xtreme", "dataset_config": "PAN-X.en", "dataset_split": "validation", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-06-27T19:33:58+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Token Classification * Model: zhiguoxu/xlm-roberta-base-finetuned-token-clasify * Dataset: xtreme To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: zhiguoxu/xlm-roberta-base-finetuned-token-clasify\n* Dataset: xtreme\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: zhiguoxu/xlm-roberta-base-finetuned-token-clasify\n* Dataset: xtreme\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 92, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: zhiguoxu/xlm-roberta-base-finetuned-token-clasify\n* Dataset: xtreme\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
f639f0697aab5aa14a4179902b2ee22b971a1b7b
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Token Classification
* Model: transformersbook/xlm-roberta-base-finetuned-panx-en
* Dataset: xtreme

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-ba18bf28-7804998
[ "autotrain", "evaluation", "region:us" ]
2022-06-27T19:31:04+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xtreme"], "eval_info": {"task": "entity_extraction", "model": "transformersbook/xlm-roberta-base-finetuned-panx-en", "metrics": [], "dataset_name": "xtreme", "dataset_config": "PAN-X.en", "dataset_split": "validation", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-06-27T19:33:57+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Token Classification * Model: transformersbook/xlm-roberta-base-finetuned-panx-en * Dataset: xtreme To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: transformersbook/xlm-roberta-base-finetuned-panx-en\n* Dataset: xtreme\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: transformersbook/xlm-roberta-base-finetuned-panx-en\n* Dataset: xtreme\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 88, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: transformersbook/xlm-roberta-base-finetuned-panx-en\n* Dataset: xtreme\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
466512ee121e61cc619c7fa5db35465b8433c181
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Token Classification
* Model: moghis/xlm-roberta-base-finetuned-panx-en
* Dataset: xtreme

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-ba18bf28-7805002
[ "autotrain", "evaluation", "region:us" ]
2022-06-27T19:31:27+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xtreme"], "eval_info": {"task": "entity_extraction", "model": "moghis/xlm-roberta-base-finetuned-panx-en", "metrics": [], "dataset_name": "xtreme", "dataset_config": "PAN-X.en", "dataset_split": "validation", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-06-27T19:34:59+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Token Classification * Model: moghis/xlm-roberta-base-finetuned-panx-en * Dataset: xtreme To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: moghis/xlm-roberta-base-finetuned-panx-en\n* Dataset: xtreme\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: moghis/xlm-roberta-base-finetuned-panx-en\n* Dataset: xtreme\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 88, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: moghis/xlm-roberta-base-finetuned-panx-en\n* Dataset: xtreme\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
17a9449b6ce9f1e492d169d2497c7c265d2aa3db
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Token Classification
* Model: jg/xlm-roberta-base-finetuned-panx-de
* Dataset: xtreme

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-d42d3c12-7815006
[ "autotrain", "evaluation", "region:us" ]
2022-06-27T19:33:13+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xtreme"], "eval_info": {"task": "entity_extraction", "model": "jg/xlm-roberta-base-finetuned-panx-de", "metrics": [], "dataset_name": "xtreme", "dataset_config": "PAN-X.de", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-06-27T19:36:00+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Token Classification * Model: jg/xlm-roberta-base-finetuned-panx-de * Dataset: xtreme To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: jg/xlm-roberta-base-finetuned-panx-de\n* Dataset: xtreme\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: jg/xlm-roberta-base-finetuned-panx-de\n* Dataset: xtreme\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 87, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: jg/xlm-roberta-base-finetuned-panx-de\n* Dataset: xtreme\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
9f42265eb1a9f20772ffe03ff4a7e7a55b8c0204
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Token Classification
* Model: evs/xlm-roberta-base-finetuned-panx-de
* Dataset: xtreme

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-d42d3c12-7815007
[ "autotrain", "evaluation", "region:us" ]
2022-06-27T19:33:20+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xtreme"], "eval_info": {"task": "entity_extraction", "model": "evs/xlm-roberta-base-finetuned-panx-de", "metrics": [], "dataset_name": "xtreme", "dataset_config": "PAN-X.de", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-06-27T19:36:08+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Token Classification * Model: evs/xlm-roberta-base-finetuned-panx-de * Dataset: xtreme To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: evs/xlm-roberta-base-finetuned-panx-de\n* Dataset: xtreme\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: evs/xlm-roberta-base-finetuned-panx-de\n* Dataset: xtreme\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 87, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: evs/xlm-roberta-base-finetuned-panx-de\n* Dataset: xtreme\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
efeb597d3146c15f1fc6281eef33d7a605122d50
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Token Classification
* Model: PdF/xlm-roberta-base-finetuned-panx-de
* Dataset: xtreme

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-d42d3c12-7815008
[ "autotrain", "evaluation", "region:us" ]
2022-06-27T19:33:25+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xtreme"], "eval_info": {"task": "entity_extraction", "model": "PdF/xlm-roberta-base-finetuned-panx-de", "metrics": [], "dataset_name": "xtreme", "dataset_config": "PAN-X.de", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-06-27T19:36:10+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Token Classification * Model: PdF/xlm-roberta-base-finetuned-panx-de * Dataset: xtreme To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: PdF/xlm-roberta-base-finetuned-panx-de\n* Dataset: xtreme\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: PdF/xlm-roberta-base-finetuned-panx-de\n* Dataset: xtreme\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 88, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: PdF/xlm-roberta-base-finetuned-panx-de\n* Dataset: xtreme\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
dd4f14b735b072bd0a0b82aff2ab0b99e0bb17ab
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Token Classification
* Model: olpa/xlm-roberta-base-finetuned-panx-de
* Dataset: xtreme

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-d42d3c12-7815009
[ "autotrain", "evaluation", "region:us" ]
2022-06-27T19:33:32+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xtreme"], "eval_info": {"task": "entity_extraction", "model": "olpa/xlm-roberta-base-finetuned-panx-de", "metrics": [], "dataset_name": "xtreme", "dataset_config": "PAN-X.de", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-06-27T19:36:24+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Token Classification * Model: olpa/xlm-roberta-base-finetuned-panx-de * Dataset: xtreme To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: olpa/xlm-roberta-base-finetuned-panx-de\n* Dataset: xtreme\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: olpa/xlm-roberta-base-finetuned-panx-de\n* Dataset: xtreme\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 87, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: olpa/xlm-roberta-base-finetuned-panx-de\n* Dataset: xtreme\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
3b23a7d9e53e518709f2eb96ff2caf1bb72bb6c9
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Token Classification
* Model: naam/xlm-roberta-base-finetuned-panx-de
* Dataset: xtreme

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-d42d3c12-7815010
[ "autotrain", "evaluation", "region:us" ]
2022-06-27T19:33:37+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xtreme"], "eval_info": {"task": "entity_extraction", "model": "naam/xlm-roberta-base-finetuned-panx-de", "metrics": [], "dataset_name": "xtreme", "dataset_config": "PAN-X.de", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-06-27T19:36:22+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Token Classification * Model: naam/xlm-roberta-base-finetuned-panx-de * Dataset: xtreme To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: naam/xlm-roberta-base-finetuned-panx-de\n* Dataset: xtreme\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: naam/xlm-roberta-base-finetuned-panx-de\n* Dataset: xtreme\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 86, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: naam/xlm-roberta-base-finetuned-panx-de\n* Dataset: xtreme\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
b03beed2084b9467b559617e96d41f9fd6e20837
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Token Classification
* Model: dfsj/xlm-roberta-base-finetuned-panx-de
* Dataset: xtreme

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-d42d3c12-7815011
[ "autotrain", "evaluation", "region:us" ]
2022-06-27T19:33:43+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xtreme"], "eval_info": {"task": "entity_extraction", "model": "dfsj/xlm-roberta-base-finetuned-panx-de", "metrics": [], "dataset_name": "xtreme", "dataset_config": "PAN-X.de", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-06-27T19:37:54+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Token Classification * Model: dfsj/xlm-roberta-base-finetuned-panx-de * Dataset: xtreme To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: dfsj/xlm-roberta-base-finetuned-panx-de\n* Dataset: xtreme\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: dfsj/xlm-roberta-base-finetuned-panx-de\n* Dataset: xtreme\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 88, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: dfsj/xlm-roberta-base-finetuned-panx-de\n* Dataset: xtreme\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
02715875baf1d4ef6f3143713f99c9c2ebf93351
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Token Classification
* Model: edwardjross/xlm-roberta-base-finetuned-panx-de
* Dataset: xtreme

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-d42d3c12-7815012
[ "autotrain", "evaluation", "region:us" ]
2022-06-27T19:33:48+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xtreme"], "eval_info": {"task": "entity_extraction", "model": "edwardjross/xlm-roberta-base-finetuned-panx-de", "metrics": [], "dataset_name": "xtreme", "dataset_config": "PAN-X.de", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-06-27T19:39:31+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Token Classification * Model: edwardjross/xlm-roberta-base-finetuned-panx-de * Dataset: xtreme To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: edwardjross/xlm-roberta-base-finetuned-panx-de\n* Dataset: xtreme\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: edwardjross/xlm-roberta-base-finetuned-panx-de\n* Dataset: xtreme\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 90, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: edwardjross/xlm-roberta-base-finetuned-panx-de\n* Dataset: xtreme\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
4db862ceaeb292c34d3bde74b46eeddbef45f02e
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Token Classification * Model: Ninh/xlm-roberta-base-finetuned-panx-de * Dataset: xtreme To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-d42d3c12-7815013
[ "autotrain", "evaluation", "region:us" ]
2022-06-27T19:33:54+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xtreme"], "eval_info": {"task": "entity_extraction", "model": "Ninh/xlm-roberta-base-finetuned-panx-de", "metrics": [], "dataset_name": "xtreme", "dataset_config": "PAN-X.de", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-06-27T19:38:33+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Token Classification * Model: Ninh/xlm-roberta-base-finetuned-panx-de * Dataset: xtreme To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Ninh/xlm-roberta-base-finetuned-panx-de\n* Dataset: xtreme\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Ninh/xlm-roberta-base-finetuned-panx-de\n* Dataset: xtreme\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 86, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: Ninh/xlm-roberta-base-finetuned-panx-de\n* Dataset: xtreme\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
5ce8e6b412ae525c1c05ab8e23674023034480d6
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: d0r1h/LEDBill * Dataset: billsum To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-e1d72cd6-7845032
[ "autotrain", "evaluation", "region:us" ]
2022-06-27T19:38:00+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["billsum"], "eval_info": {"task": "summarization", "model": "d0r1h/LEDBill", "metrics": [], "dataset_name": "billsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "summary"}}}
2022-06-28T14:48:11+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: d0r1h/LEDBill * Dataset: billsum To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: d0r1h/LEDBill\n* Dataset: billsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: d0r1h/LEDBill\n* Dataset: billsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 74, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: d0r1h/LEDBill\n* Dataset: billsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
80f3122d1f09cb1b052a07d5fdfddbc498b806ae
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: stevhliu/t5-small-finetuned-billsum-ca_test * Dataset: billsum To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-e1d72cd6-7845033
[ "autotrain", "evaluation", "region:us" ]
2022-06-27T19:38:07+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["billsum"], "eval_info": {"task": "summarization", "model": "stevhliu/t5-small-finetuned-billsum-ca_test", "metrics": [], "dataset_name": "billsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "summary"}}}
2022-06-27T19:39:35+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: stevhliu/t5-small-finetuned-billsum-ca_test * Dataset: billsum To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: stevhliu/t5-small-finetuned-billsum-ca_test\n* Dataset: billsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: stevhliu/t5-small-finetuned-billsum-ca_test\n* Dataset: billsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 87, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: stevhliu/t5-small-finetuned-billsum-ca_test\n* Dataset: billsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
4c4af35020183a5bba67d6830073722ade33ec73
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: henryu-lin/t5-3b-samsum-deepspeed * Dataset: samsum To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-6fbfec76-7855034
[ "autotrain", "evaluation", "region:us" ]
2022-06-27T19:43:36+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "henryu-lin/t5-3b-samsum-deepspeed", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-06-27T19:57:19+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: henryu-lin/t5-3b-samsum-deepspeed * Dataset: samsum To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: henryu-lin/t5-3b-samsum-deepspeed\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: henryu-lin/t5-3b-samsum-deepspeed\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 82, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: henryu-lin/t5-3b-samsum-deepspeed\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
46d64fdb5afbd979e3d802606112e8826c097d10
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: henryu-lin/t5-large-samsum-deepspeed * Dataset: samsum To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-6fbfec76-7855035
[ "autotrain", "evaluation", "region:us" ]
2022-06-27T19:43:41+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "henryu-lin/t5-large-samsum-deepspeed", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-06-27T19:50:39+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: henryu-lin/t5-large-samsum-deepspeed * Dataset: samsum To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: henryu-lin/t5-large-samsum-deepspeed\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: henryu-lin/t5-large-samsum-deepspeed\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 82, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: henryu-lin/t5-large-samsum-deepspeed\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
b109aa5c1a272e1451a6c77f14ef71bd311eba99
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: jpcorb20/pegasus-large-reddit_tifu-samsum-256 * Dataset: samsum To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-6fbfec76-7855036
[ "autotrain", "evaluation", "region:us" ]
2022-06-27T19:43:47+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "jpcorb20/pegasus-large-reddit_tifu-samsum-256", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-06-27T19:49:07+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: jpcorb20/pegasus-large-reddit_tifu-samsum-256 * Dataset: samsum To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: jpcorb20/pegasus-large-reddit_tifu-samsum-256\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: jpcorb20/pegasus-large-reddit_tifu-samsum-256\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 89, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: jpcorb20/pegasus-large-reddit_tifu-samsum-256\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
ffc3f251c54bf090c63d3d021cb1878022a1545b
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: jpcorb20/pegasus-large-reddit_tifu-samsum-512 * Dataset: samsum To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-6fbfec76-7855037
[ "autotrain", "evaluation", "region:us" ]
2022-06-27T19:43:51+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "jpcorb20/pegasus-large-reddit_tifu-samsum-512", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-06-27T19:49:12+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: jpcorb20/pegasus-large-reddit_tifu-samsum-512 * Dataset: samsum To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: jpcorb20/pegasus-large-reddit_tifu-samsum-512\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: jpcorb20/pegasus-large-reddit_tifu-samsum-512\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 89, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: jpcorb20/pegasus-large-reddit_tifu-samsum-512\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
4e1f56d62f31be7265784afbb0eb45c31ff2f527
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: santiviquez/t5-small-finetuned-samsum-en * Dataset: samsum To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-6fbfec76-7855038
[ "autotrain", "evaluation", "region:us" ]
2022-06-27T19:43:58+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "santiviquez/t5-small-finetuned-samsum-en", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-06-27T19:44:31+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: santiviquez/t5-small-finetuned-samsum-en * Dataset: samsum To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: santiviquez/t5-small-finetuned-samsum-en\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: santiviquez/t5-small-finetuned-samsum-en\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 83, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: santiviquez/t5-small-finetuned-samsum-en\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
a311f4573edeb5e54e8f7e6d6e15542ec6b62694
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: santiviquez/bart-base-finetuned-samsum-en * Dataset: samsum To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-6fbfec76-7855039
[ "autotrain", "evaluation", "region:us" ]
2022-06-27T19:44:03+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "santiviquez/bart-base-finetuned-samsum-en", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-06-27T19:44:54+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: santiviquez/bart-base-finetuned-samsum-en * Dataset: samsum To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: santiviquez/bart-base-finetuned-samsum-en\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: santiviquez/bart-base-finetuned-samsum-en\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 82, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: santiviquez/bart-base-finetuned-samsum-en\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
9224499116760e2d4eaf0b4eb0933b14f7934bb2
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: jackieliu930/bart-large-cnn-samsum * Dataset: samsum To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-6fbfec76-7855040
[ "autotrain", "evaluation", "region:us" ]
2022-06-27T19:44:10+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "jackieliu930/bart-large-cnn-samsum", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-06-27T19:47:12+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: jackieliu930/bart-large-cnn-samsum * Dataset: samsum To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: jackieliu930/bart-large-cnn-samsum\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: jackieliu930/bart-large-cnn-samsum\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 82, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: jackieliu930/bart-large-cnn-samsum\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
703e18a9c8981908d342e8f00e24bead5cbd7bde
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: knkarthick/bart-large-xsum-samsum * Dataset: samsum To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-6fbfec76-7855041
[ "autotrain", "evaluation", "region:us" ]
2022-06-27T19:44:14+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "knkarthick/bart-large-xsum-samsum", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-06-27T19:46:43+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: knkarthick/bart-large-xsum-samsum * Dataset: samsum To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: knkarthick/bart-large-xsum-samsum\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: knkarthick/bart-large-xsum-samsum\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 81, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: knkarthick/bart-large-xsum-samsum\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
302ce85e5245d37f731bb017660ec1a90cc9e578
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: lidiya/bart-base-samsum * Dataset: samsum To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-6fbfec76-7855042
[ "autotrain", "evaluation", "region:us" ]
2022-06-27T19:44:21+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "lidiya/bart-base-samsum", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-06-27T19:45:12+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: lidiya/bart-base-samsum * Dataset: samsum To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: lidiya/bart-base-samsum\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: lidiya/bart-base-samsum\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 75, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: lidiya/bart-base-samsum\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
6efd9d5965f2b22f3db3345fb3a1630d8452d7f8
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: santiviquez/ssr-base-finetuned-samsum-en * Dataset: samsum To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-6fbfec76-7855043
[ "autotrain", "evaluation", "region:us" ]
2022-06-27T19:44:27+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "santiviquez/ssr-base-finetuned-samsum-en", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-06-27T19:46:11+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: santiviquez/ssr-base-finetuned-samsum-en * Dataset: samsum To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: santiviquez/ssr-base-finetuned-samsum-en\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: santiviquez/ssr-base-finetuned-samsum-en\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 84, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: santiviquez/ssr-base-finetuned-samsum-en\n* Dataset: samsum\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
aacbd1bc47af69a4bb8e17d90b0b0b0d185f495d
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Translation * Model: Tanhim/translation-En2De * Dataset: wmt19 To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-de1c01d5-7885055
[ "autotrain", "evaluation", "region:us" ]
2022-06-27T20:02:51+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["wmt19"], "eval_info": {"task": "translation", "model": "Tanhim/translation-En2De", "metrics": [], "dataset_name": "wmt19", "dataset_config": "de-en", "dataset_split": "validation", "col_mapping": {"source": "translation.en", "target": "translation.de"}}}
2022-06-27T20:04:51+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Translation * Model: Tanhim/translation-En2De * Dataset: wmt19 To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Translation\n* Model: Tanhim/translation-En2De\n* Dataset: wmt19\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Translation\n* Model: Tanhim/translation-En2De\n* Dataset: wmt19\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 75, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Translation\n* Model: Tanhim/translation-En2De\n* Dataset: wmt19\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
4c2734905dc51f98a0f5ed33ab67e0b610e240b0
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Image Classification * Model: matteopilotto/vit-base-patch16-224-in21k-snacks * Dataset: Matthijs/snacks To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-208688aa-7955063
[ "autotrain", "evaluation", "region:us" ]
2022-06-27T20:39:41+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Matthijs/snacks"], "eval_info": {"task": "image_multi_class_classification", "model": "matteopilotto/vit-base-patch16-224-in21k-snacks", "metrics": [], "dataset_name": "Matthijs/snacks", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"image": "image", "target": "label"}}}
2022-06-27T20:40:11+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Image Classification * Model: matteopilotto/vit-base-patch16-224-in21k-snacks * Dataset: Matthijs/snacks To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: matteopilotto/vit-base-patch16-224-in21k-snacks\n* Dataset: Matthijs/snacks\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: matteopilotto/vit-base-patch16-224-in21k-snacks\n* Dataset: Matthijs/snacks\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 95, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: matteopilotto/vit-base-patch16-224-in21k-snacks\n* Dataset: Matthijs/snacks\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
9482331999932bff7136d6a16f6d723646c140f3
<img src="data/paranames_banner.png"> # ParaNames: A multilingual resource for parallel names This repository contains releases for the ParaNames corpus, consisting of parallel names of over 12 million named entities in over 400 languages. ParaNames was introduced in [Sälevä, J. and Lignos, C., 2022. ParaNames: A Massively Multilingual Entity Name Corpus. arXiv preprint arXiv:2202.14035](https://arxiv.org/abs/2202.14035). Please cite as: ``` @article{saleva2022paranames, title={ParaNames: A Massively Multilingual Entity Name Corpus}, author={S{\"a}lev{\"a}, Jonne and Lignos, Constantine}, journal={arXiv preprint arXiv:2202.14035}, year={2022} } ``` See the [Releases page](https://github.com/bltlab/paranames/releases) for the downloadable release. # Using the data release ## Release format The corpus is released as a gzipped TSV file which is produced by the pipeline included in this repository. ## Release notes ### Repeated entities In current releases, any entity that is associated with multiple named entity types (PER, LOC, ORG) in the Wikidata type hierarchy will appear multiple times in the output, once with each type. This affects less than 3% of the entities in the data. If you want a unique set of entities, you should deduplicate the data using the `wikidata_id` field. If you only want to use entities that are associated with a single named entity type, you should remove any `wikidata_id` that appears in multiple rows. # Using the code First, install the following non-Python dependencies: - MongoDB - [xsv](https://github.com/BurntSushi/xsv) - ICU support for your computer (e.g. `libicu-dev`) Next, install ParaNames and its Python dependencies by running `pip install -e .`. It is recommended that you use a Conda environment for package management. ## Creating the ParaNames corpus To create a corpus following our approach, follow the steps below: 1. Download the latest Wikidata dump from the [Wikimedia page](https://dumps.wikimedia.org/wikidatawiki/entities/) and extract it. Note that this may take up several TB of disk space. 2. Use `recipes/paranames_pipeline.sh`, which ingests the Wikidata JSON into MongoDB and then dumps and postprocesses it into our final TSV resource. The call to `recipes/paranames_pipeline.sh` works as follows: ``` recipes/paranames_pipeline.sh <path_to_extracted_json_dump> <output_folder> <n_workers> ``` Set the number of workers based on the number of CPUs your machine has. By default, only 1 CPU is used. The output folder will contain one subfolder per language, inside of which `paranames_<language_code>.tsv` can be found. The entire resource is located in `<output_folder>/combined/paranames.tsv`. ### Notes ParaNames offers several options for customization: - If your MongoDB instance uses a non-standard port, you should change the value of [`mongodb_port`](https://github.com/bltlab/paranames/blob/main/recipes/paranames_pipeline.sh#L13) accordingly inside `paranames_pipeline.sh`. - Setting [`should_collapse_languages=yes`](https://github.com/bltlab/paranames/blob/main/recipes/dump.sh#L17) will cause Wikimedia language codes to be "collapsed" to the top-level Wikimedia language code, e.g. `kk-cyrl` will be converted to `kk`, `en-ca` to `en`, etc. - Setting [`should_keep_intermediate_files=yes`](https://github.com/bltlab/paranames/blob/main/recipes/dump.sh#L18) will prevent intermediate files from being deleted. This includes the raw per-type TSV dumps (`{PER,LOC,ORG}.tsv`) from MongoDB, as well as outputs of `postprocess.py`. - Within [`recipes/dump.sh`](https://github.com/bltlab/paranames/blob/main/recipes/dump.sh), it is also possible to define languages to be excluded and whether entity types should be disambiguated. By default, no languages are excluded and no disambiguation is done. - After the pipeline completes, `<output_folder>` will contain one folder per language, inside of which is a TSV file containing the subset of names in that language. Combined TSVs with names in all languages are available in the `combined` folder.
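As a concrete illustration of the two clean-up strategies described under "Repeated entities", here is a minimal Python sketch. It assumes only what the release notes state: the file is a gzipped TSV with a `wikidata_id` column. The file name `paranames.tsv.gz` is a placeholder for whatever the downloaded release is actually called, and pandas is an arbitrary choice of tool, not something the release requires.

```python
# Minimal sketch: reading the gzipped TSV release and applying the two
# deduplication strategies from the release notes. The file name is a
# placeholder; only the wikidata_id column is assumed to exist.
import pandas as pd

df = pd.read_csv("paranames.tsv.gz", sep="\t", compression="gzip", dtype=str)

# Strategy 1: a unique set of entities -- keep one row per wikidata_id
# (the retained row's entity type is whichever happened to come first).
unique_entities = df.drop_duplicates(subset="wikidata_id")

# Strategy 2: keep only entities associated with a single named entity
# type, i.e. drop every wikidata_id that appears in more than one row.
counts = df["wikidata_id"].value_counts()
single_type = df[df["wikidata_id"].map(counts) == 1]

print(len(df), len(unique_entities), len(single_type))
```

Strategy 1 preserves every entity at the cost of an arbitrary type choice; Strategy 2 trades away the (under 3%) ambiguous entities for clean type labels.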
imvladikon/paranames
[ "arxiv:2202.14035", "region:us" ]
2022-06-27T21:33:50+00:00
{}
2023-01-13T07:16:15+00:00
[ "2202.14035" ]
[]
TAGS #arxiv-2202.14035 #region-us
<img src="data/paranames_banner.png"> # ParaNames: A multilingual resource for parallel names This repository contains releases for the ParaNames corpus, consisting of parallel names of over 12 million named entities in over 400 languages. ParaNames was introduced in Sälevä, J. and Lignos, C., 2022. ParaNames: A Massively Multilingual Entity Name Corpus. arXiv preprint arXiv:2202.14035. Please cite as: See the Releases page for the downloadable release. # Using the data release ## Release format The corpus is released as a gzipped TSV file which is produced by the pipeline included in this repository. ## Release notes ### Repeated entities In current releases, any entity that is associated with multiple named entity types (PER, LOC, ORG) in the Wikidata type hierarchy will appear multiple times in the output, once with each type. This affects less than 3% of the entities in the data. If you want a unique set of entities, you should deduplicate the data using the 'wikidata_id' field. If you only want to use entities that are associated with a single named entity type, you should remove any 'wikidata_id' that appears in multiple rows. # Using the code First, install the following non-Python dependencies: - MongoDB - xsv - ICU support for your computer (e.g. 'libicu-dev') Next, install ParaNames and its Python dependencies by running 'pip install -e .'. It is recommended that you use a Conda environment for package management. ## Creating the ParaNames corpus To create a corpus following our approach, follow the steps below: 1. Download the latest Wikidata dump from the Wikimedia page and extract it. Note that this may take up several TB of disk space. 2. Use 'recipes/paranames_pipeline.sh', which ingests the Wikidata JSON into MongoDB and then dumps and postprocesses it into our final TSV resource. The call to 'recipes/paranames_pipeline.sh' works as follows: Set the number of workers based on the number of CPUs your machine has. By default, only 1 CPU is used. The output folder will contain one subfolder per language, inside of which 'paranames_<language_code>.tsv' can be found. The entire resource is located in '<output_folder>/combined/URL'. ### Notes ParaNames offers several options for customization: - If your MongoDB instance uses a non-standard port, you should change the value of 'mongodb_port' accordingly inside 'paranames_pipeline.sh'. - Setting 'should_collapse_languages=yes' will cause Wikimedia language codes to be "collapsed" to the top-level Wikimedia language code, e.g. 'kk-cyrl' will be converted to 'kk', 'en-ca' to 'en' etc. - Setting 'should_keep_intermediate_files=yes' will prevent intermediate files from being deleted. This includes the raw per-type TSV dumps ('{PER,LOC,ORG}.tsv') from MongoDB, as well as outputs of 'URL'. - Within 'recipes/URL', it is also possible to define languages to be excluded and whether entity types should be disambiguated. By default, no languages are excluded and no disambiguation is done. - After the pipeline completes, '<output_folder>' will contain one folder per language, inside of which is a TSV file containing the subset of names in that language. Combined TSVs with names in all languages are available in the 'combined' folder.
[ "# ParaNames: A multilingual resource for parallel names\n\nThis repository contains releases for the ParaNames corpus, consisting of parallel names of over 12 million named entities in over 400 languages.\n\nParaNames was introduced in Sälevä, J. and Lignos, C., 2022. ParaNames: A Massively Multilingual Entity Name Corpus. arXiv preprint arXiv:2202.14035.\n\nPlease cite as:\n\n\nSee the Releases page for the downloadable release.", "# Using the data release", "## Release format\n\nThe corpus is released as a gzipped TSV file which is produced by the pipeline included in this repository.", "## Release notes", "### Repeated entities\n\nIn current releases, any entity that is associated with multiple named entity types (PER, LOC, ORG) in the Wikidata type hierarchy will appear multiple times in the output, once with each type. This affects less than 3% of the entities in the data.\n\nIf you want a unique set of entities, you should deduplicate the data using the 'wikidata_id' field.\n\nIf you only want to use entities that are associated with a single named entity type, you should remove any 'wikidata_id' that appears in multiple rows.", "# Using the code\n\nFirst, install the following non-Python dependencies:\n\n- MongoDB\n- xsv\n- ICU support for your computer (e.g. 'libicu-dev')\n\nNext, install ParaNames and its Python dependencies by running 'pip install -e .'.\n\nIt is recommended that you use a Conda environment for package management.", "## Creating the ParaNames corpus\n\nTo create a corpus following our approach, follow the steps below:\n\n1. Download the latest Wikidata dump from the Wikimedia page and extract it. Note that this may take up several TB of disk space.\n2. Use 'recipes/paranames_pipeline.sh' which ingests the Wikidata JSON to MongoDB and then dumps and postprocesses it to our final TSV resource.\n\nThe call to 'recipes/paranames_pipeline.sh' works as follows:\n\n\n\nSet the number of workers based on the number of CPUs your machine has.\nBy default, only 1 CPU is used.\n\nThe output folder will contain one subfolder per language, inside of which 'paranames_<language_code>.tsv' can be found.\nThe entire resource is located in '<output_folder>/combined/URL'.", "### Notes\n\n\nParaNames offers several options for customization:\n\n- If your MongoDB instance uses a non-standard port, you should change the value of 'mongodb_port' accordingly inside 'paranames_pipeline.sh'.\n\n- Setting 'should_collapse_languages=yes' will cause Wikimedia language codes to be \"collapsed\" to the top-level Wikimedia language code, i.e. 'kk-cyrl' will be converted to 'kk', 'en-ca' to 'en' etc.\n\n- Setting 'should_keep_intermediate_files=yes' will cause intermediate files to be deleted. This includes the raw per-type TSV dumps ('{PER,LOC,ORG}.tsv') from MongoDB, as well as outputs of 'URL'.\n\n- Within 'recipes/URL', it is also possible to define languages to be excluded and whether entity types should be disambiguated. By default, no languages are excluded and no disambiguation is done.\n\n- After the pipeline completes, '<output_folder>' will contain one folder per language, inside of which is a TSV file containing the subset of names in that language. Combined TSVs with names in all languages are available in the 'combined' folder." ]
[ "TAGS\n#arxiv-2202.14035 #region-us \n", "# ParaNames: A multilingual resource for parallel names\n\nThis repository contains releases for the ParaNames corpus, consisting of parallel names of over 12 million named entities in over 400 languages.\n\nParaNames was introduced in Sälevä, J. and Lignos, C., 2022. ParaNames: A Massively Multilingual Entity Name Corpus. arXiv preprint arXiv:2202.14035.\n\nPlease cite as:\n\n\nSee the Releases page for the downloadable release.", "# Using the data release", "## Release format\n\nThe corpus is released as a gzipped TSV file which is produced by the pipeline included in this repository.", "## Release notes", "### Repeated entities\n\nIn current releases, any entity that is associated with multiple named entity types (PER, LOC, ORG) in the Wikidata type hierarchy will appear multiple times in the output, once with each type. This affects less than 3% of the entities in the data.\n\nIf you want a unique set of entities, you should deduplicate the data using the 'wikidata_id' field.\n\nIf you only want to use entities that are associated with a single named entity type, you should remove any 'wikidata_id' that appears in multiple rows.", "# Using the code\n\nFirst, install the following non-Python dependencies:\n\n- MongoDB\n- xsv\n- ICU support for your computer (e.g. 'libicu-dev')\n\nNext, install ParaNames and its Python dependencies by running 'pip install -e .'.\n\nIt is recommended that you use a Conda environment for package management.", "## Creating the ParaNames corpus\n\nTo create a corpus following our approach, follow the steps below:\n\n1. Download the latest Wikidata dump from the Wikimedia page and extract it. Note that this may take up several TB of disk space.\n2. Use 'recipes/paranames_pipeline.sh' which ingests the Wikidata JSON to MongoDB and then dumps and postprocesses it to our final TSV resource.\n\nThe call to 'recipes/paranames_pipeline.sh' works as follows:\n\n\n\nSet the number of workers based on the number of CPUs your machine has.\nBy default, only 1 CPU is used.\n\nThe output folder will contain one subfolder per language, inside of which 'paranames_<language_code>.tsv' can be found.\nThe entire resource is located in '<output_folder>/combined/URL'.", "### Notes\n\n\nParaNames offers several options for customization:\n\n- If your MongoDB instance uses a non-standard port, you should change the value of 'mongodb_port' accordingly inside 'paranames_pipeline.sh'.\n\n- Setting 'should_collapse_languages=yes' will cause Wikimedia language codes to be \"collapsed\" to the top-level Wikimedia language code, i.e. 'kk-cyrl' will be converted to 'kk', 'en-ca' to 'en' etc.\n\n- Setting 'should_keep_intermediate_files=yes' will cause intermediate files to be deleted. This includes the raw per-type TSV dumps ('{PER,LOC,ORG}.tsv') from MongoDB, as well as outputs of 'URL'.\n\n- Within 'recipes/URL', it is also possible to define languages to be excluded and whether entity types should be disambiguated. By default, no languages are excluded and no disambiguation is done.\n\n- After the pipeline completes, '<output_folder>' will contain one folder per language, inside of which is a TSV file containing the subset of names in that language. Combined TSVs with names in all languages are available in the 'combined' folder." ]
[ 14, 113, 6, 29, 3, 135, 81, 198, 311 ]
[ "passage: TAGS\n#arxiv-2202.14035 #region-us \n# ParaNames: A multilingual resource for parallel names\n\nThis repository contains releases for the ParaNames corpus, consisting of parallel names of over 12 million named entities in over 400 languages.\n\nParaNames was introduced in Sälevä, J. and Lignos, C., 2022. ParaNames: A Massively Multilingual Entity Name Corpus. arXiv preprint arXiv:2202.14035.\n\nPlease cite as:\n\n\nSee the Releases page for the downloadable release.# Using the data release## Release format\n\nThe corpus is released as a gzipped TSV file which is produced by the pipeline included in this repository.## Release notes### Repeated entities\n\nIn current releases, any entity that is associated with multiple named entity types (PER, LOC, ORG) in the Wikidata type hierarchy will appear multiple times in the output, once with each type. This affects less than 3% of the entities in the data.\n\nIf you want a unique set of entities, you should deduplicate the data using the 'wikidata_id' field.\n\nIf you only want to use entities that are associated with a single named entity type, you should remove any 'wikidata_id' that appears in multiple rows.# Using the code\n\nFirst, install the following non-Python dependencies:\n\n- MongoDB\n- xsv\n- ICU support for your computer (e.g. 'libicu-dev')\n\nNext, install ParaNames and its Python dependencies by running 'pip install -e .'.\n\nIt is recommended that you use a Conda environment for package management." ]
48a5b799dc8383799683c5c2b7ae466a103ac896
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Image Classification * Model: aspis/swin-finetuned-food101 * Dataset: food101 To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-57377e87-7975067
[ "autotrain", "evaluation", "region:us" ]
2022-06-28T00:10:18+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["food101"], "eval_info": {"task": "image_multi_class_classification", "model": "aspis/swin-finetuned-food101", "metrics": [], "dataset_name": "food101", "dataset_config": "default", "dataset_split": "validation", "col_mapping": {"image": "image", "target": "label"}}}
2022-06-28T00:17:36+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Image Classification * Model: aspis/swin-finetuned-food101 * Dataset: food101 To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: aspis/swin-finetuned-food101\n* Dataset: food101\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: aspis/swin-finetuned-food101\n* Dataset: food101\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 81, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: aspis/swin-finetuned-food101\n* Dataset: food101\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
5f49b13677db6758bfebaa528e8840904682b79b
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Image Classification * Model: eslamxm/vit-base-food101 * Dataset: food101 To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-57377e87-7975068
[ "autotrain", "evaluation", "region:us" ]
2022-06-28T00:10:19+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["food101"], "eval_info": {"task": "image_multi_class_classification", "model": "eslamxm/vit-base-food101", "metrics": [], "dataset_name": "food101", "dataset_config": "default", "dataset_split": "validation", "col_mapping": {"image": "image", "target": "label"}}}
2022-06-28T00:17:37+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Image Classification * Model: eslamxm/vit-base-food101 * Dataset: food101 To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: eslamxm/vit-base-food101\n* Dataset: food101\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: eslamxm/vit-base-food101\n* Dataset: food101\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 80, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: eslamxm/vit-base-food101\n* Dataset: food101\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
96d5cc4fbeae4c051a01e0735e644be386327f60
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Image Classification * Model: nateraw/food * Dataset: food101 To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-57377e87-7975069
[ "autotrain", "evaluation", "region:us" ]
2022-06-28T00:10:25+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["food101"], "eval_info": {"task": "image_multi_class_classification", "model": "nateraw/food", "metrics": [], "dataset_name": "food101", "dataset_config": "default", "dataset_split": "validation", "col_mapping": {"image": "image", "target": "label"}}}
2022-06-28T00:17:06+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Image Classification * Model: nateraw/food * Dataset: food101 To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: nateraw/food\n* Dataset: food101\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: nateraw/food\n* Dataset: food101\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 74, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: nateraw/food\n* Dataset: food101\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
f48b247feff57496a65d5158f4ed6996d1588300
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Image Classification * Model: skylord/swin-finetuned-food101 * Dataset: food101 To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-57377e87-7975070
[ "autotrain", "evaluation", "region:us" ]
2022-06-28T00:10:31+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["food101"], "eval_info": {"task": "image_multi_class_classification", "model": "skylord/swin-finetuned-food101", "metrics": [], "dataset_name": "food101", "dataset_config": "default", "dataset_split": "validation", "col_mapping": {"image": "image", "target": "label"}}}
2022-06-28T00:17:38+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Image Classification * Model: skylord/swin-finetuned-food101 * Dataset: food101 To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: skylord/swin-finetuned-food101\n* Dataset: food101\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: skylord/swin-finetuned-food101\n* Dataset: food101\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 82, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: skylord/swin-finetuned-food101\n* Dataset: food101\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
fc64cba2a6607951c08b67a7b744e552e5c654c8
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Image Classification * Model: eugenecamus/resnet-50-base-beans-demo * Dataset: beans To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-ac4402f5-7985071
[ "autotrain", "evaluation", "region:us" ]
2022-06-28T00:11:21+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["beans"], "eval_info": {"task": "image_multi_class_classification", "model": "eugenecamus/resnet-50-base-beans-demo", "metrics": [], "dataset_name": "beans", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"image": "image", "target": "labels"}}}
2022-06-28T00:11:48+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Image Classification * Model: eugenecamus/resnet-50-base-beans-demo * Dataset: beans To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: eugenecamus/resnet-50-base-beans-demo\n* Dataset: beans\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: eugenecamus/resnet-50-base-beans-demo\n* Dataset: beans\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 84, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: eugenecamus/resnet-50-base-beans-demo\n* Dataset: beans\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
12c2c692f03c2fb5c5be9034663ad1363cc7f37f
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Image Classification * Model: johnnydevriese/vit_beans * Dataset: beans To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-ac4402f5-7985072
[ "autotrain", "evaluation", "region:us" ]
2022-06-28T00:11:25+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["beans"], "eval_info": {"task": "image_multi_class_classification", "model": "johnnydevriese/vit_beans", "metrics": [], "dataset_name": "beans", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"image": "image", "target": "labels"}}}
2022-06-28T00:11:59+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Image Classification * Model: johnnydevriese/vit_beans * Dataset: beans To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: johnnydevriese/vit_beans\n* Dataset: beans\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: johnnydevriese/vit_beans\n* Dataset: beans\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 80, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: johnnydevriese/vit_beans\n* Dataset: beans\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
2eeab49dc8b7db70b1ee4b0b9294e1e2652a703e
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Image Classification * Model: karthiksv/vit-base-beans * Dataset: beans To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-ac4402f5-7985073
[ "autotrain", "evaluation", "region:us" ]
2022-06-28T00:11:32+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["beans"], "eval_info": {"task": "image_multi_class_classification", "model": "karthiksv/vit-base-beans", "metrics": [], "dataset_name": "beans", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"image": "image", "target": "labels"}}}
2022-06-28T00:12:06+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Image Classification * Model: karthiksv/vit-base-beans * Dataset: beans To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: karthiksv/vit-base-beans\n* Dataset: beans\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: karthiksv/vit-base-beans\n* Dataset: beans\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 79, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: karthiksv/vit-base-beans\n* Dataset: beans\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
8e31af387a31ba9cdcf2b804b8d5ca2e550887b7
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Image Classification * Model: mrm8488/convnext-tiny-finetuned-beans * Dataset: beans To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-ac4402f5-7985074
[ "autotrain", "evaluation", "region:us" ]
2022-06-28T00:11:38+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["beans"], "eval_info": {"task": "image_multi_class_classification", "model": "mrm8488/convnext-tiny-finetuned-beans", "metrics": [], "dataset_name": "beans", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"image": "image", "target": "labels"}}}
2022-06-28T00:12:04+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Image Classification * Model: mrm8488/convnext-tiny-finetuned-beans * Dataset: beans To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: mrm8488/convnext-tiny-finetuned-beans\n* Dataset: beans\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: mrm8488/convnext-tiny-finetuned-beans\n* Dataset: beans\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 85, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: mrm8488/convnext-tiny-finetuned-beans\n* Dataset: beans\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
a9ed2ae2efa9eb7dddbdaeff5f2a5db735d64eee
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Image Classification * Model: nateraw/vit-base-beans * Dataset: beans To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-ac4402f5-7985075
[ "autotrain", "evaluation", "region:us" ]
2022-06-28T00:11:43+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["beans"], "eval_info": {"task": "image_multi_class_classification", "model": "nateraw/vit-base-beans", "metrics": [], "dataset_name": "beans", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"image": "image", "target": "labels"}}}
2022-06-28T00:12:11+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Image Classification * Model: nateraw/vit-base-beans * Dataset: beans To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: nateraw/vit-base-beans\n* Dataset: beans\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: nateraw/vit-base-beans\n* Dataset: beans\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 79, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: nateraw/vit-base-beans\n* Dataset: beans\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
cf9d2e4eb271e58f2e02a34c04385072e52842dd
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Image Classification * Model: nateraw/vit-base-beans-demo * Dataset: beans To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-ac4402f5-7985076
[ "autotrain", "evaluation", "region:us" ]
2022-06-28T00:11:50+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["beans"], "eval_info": {"task": "image_multi_class_classification", "model": "nateraw/vit-base-beans-demo", "metrics": [], "dataset_name": "beans", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"image": "image", "target": "labels"}}}
2022-06-28T00:12:17+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Image Classification * Model: nateraw/vit-base-beans-demo * Dataset: beans To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: nateraw/vit-base-beans-demo\n* Dataset: beans\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: nateraw/vit-base-beans-demo\n* Dataset: beans\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 81, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: nateraw/vit-base-beans-demo\n* Dataset: beans\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
db742cfe5ee4f817a0bdbcb52f5fcfc6370bd9b5
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Image Classification * Model: nateraw/vit-base-beans-demo-v2 * Dataset: beans To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-ac4402f5-7985077
[ "autotrain", "evaluation", "region:us" ]
2022-06-28T00:11:56+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["beans"], "eval_info": {"task": "image_multi_class_classification", "model": "nateraw/vit-base-beans-demo-v2", "metrics": [], "dataset_name": "beans", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"image": "image", "target": "labels"}}}
2022-06-28T00:12:20+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Image Classification * Model: nateraw/vit-base-beans-demo-v2 * Dataset: beans To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: nateraw/vit-base-beans-demo-v2\n* Dataset: beans\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: nateraw/vit-base-beans-demo-v2\n* Dataset: beans\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 84, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: nateraw/vit-base-beans-demo-v2\n* Dataset: beans\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
da9bcb8e227d0a2855c127640c55a03f20d6e114
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Image Classification * Model: nateraw/vit-base-beans-demo-v3 * Dataset: beans To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-ac4402f5-7985078
[ "autotrain", "evaluation", "region:us" ]
2022-06-28T00:12:01+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["beans"], "eval_info": {"task": "image_multi_class_classification", "model": "nateraw/vit-base-beans-demo-v3", "metrics": [], "dataset_name": "beans", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"image": "image", "target": "labels"}}}
2022-06-28T00:12:28+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Image Classification * Model: nateraw/vit-base-beans-demo-v3 * Dataset: beans To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: nateraw/vit-base-beans-demo-v3\n* Dataset: beans\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: nateraw/vit-base-beans-demo-v3\n* Dataset: beans\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 84, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: nateraw/vit-base-beans-demo-v3\n* Dataset: beans\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
f6998feed23737040025f847ea8e2644da8e09ce
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Image Classification * Model: saiharsha/vit-base-beans * Dataset: beans To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-ac4402f5-7985080
[ "autotrain", "evaluation", "region:us" ]
2022-06-28T00:12:18+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["beans"], "eval_info": {"task": "image_multi_class_classification", "model": "saiharsha/vit-base-beans", "metrics": [], "dataset_name": "beans", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"image": "image", "target": "labels"}}}
2022-06-28T00:12:52+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Image Classification * Model: saiharsha/vit-base-beans * Dataset: beans To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: saiharsha/vit-base-beans\n* Dataset: beans\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: saiharsha/vit-base-beans\n* Dataset: beans\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 79, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: saiharsha/vit-base-beans\n* Dataset: beans\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
27054856d50749cc3b9090ca90b28a58bd383ac2
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Image Classification * Model: nickmuchi/vit-base-beans * Dataset: beans To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-ac4402f5-7985079
[ "autotrain", "evaluation", "region:us" ]
2022-06-28T00:12:37+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["beans"], "eval_info": {"task": "image_multi_class_classification", "model": "nickmuchi/vit-base-beans", "metrics": [], "dataset_name": "beans", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"image": "image", "target": "labels"}}}
2022-06-28T00:13:10+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Image Classification * Model: nickmuchi/vit-base-beans * Dataset: beans To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: nickmuchi/vit-base-beans\n* Dataset: beans\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: nickmuchi/vit-base-beans\n* Dataset: beans\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 79, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: nickmuchi/vit-base-beans\n* Dataset: beans\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
c45f20dd3fd0a6828525afffe050f6cee9739286
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Image Classification * Model: aaraki/vit-base-patch16-224-in21k-finetuned-cifar10 * Dataset: cifar10 To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-5480d71b-7995081
[ "autotrain", "evaluation", "region:us" ]
2022-06-28T00:16:55+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cifar10"], "eval_info": {"task": "image_multi_class_classification", "model": "aaraki/vit-base-patch16-224-in21k-finetuned-cifar10", "metrics": [], "dataset_name": "cifar10", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"image": "img", "target": "label"}}}
2022-06-28T00:17:58+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Image Classification * Model: aaraki/vit-base-patch16-224-in21k-finetuned-cifar10 * Dataset: cifar10 To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: aaraki/vit-base-patch16-224-in21k-finetuned-cifar10\n* Dataset: cifar10\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: aaraki/vit-base-patch16-224-in21k-finetuned-cifar10\n* Dataset: cifar10\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 94, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: aaraki/vit-base-patch16-224-in21k-finetuned-cifar10\n* Dataset: cifar10\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
dfd6372de9860d27f25f4f57c685787d5688364e
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Image Classification * Model: abhishek/autotrain_cifar10_vit_base * Dataset: cifar10 To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-5480d71b-7995082
[ "autotrain", "evaluation", "region:us" ]
2022-06-28T00:16:59+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cifar10"], "eval_info": {"task": "image_multi_class_classification", "model": "abhishek/autotrain_cifar10_vit_base", "metrics": [], "dataset_name": "cifar10", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"image": "img", "target": "label"}}}
2022-06-28T00:17:59+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Image Classification * Model: abhishek/autotrain_cifar10_vit_base * Dataset: cifar10 To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: abhishek/autotrain_cifar10_vit_base\n* Dataset: cifar10\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: abhishek/autotrain_cifar10_vit_base\n* Dataset: cifar10\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 86, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: abhishek/autotrain_cifar10_vit_base\n* Dataset: cifar10\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
44aa08f5bf0ef2af1fd0faf041c815bfed67248e
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Image Classification * Model: jadohu/BEiT-finetuned * Dataset: cifar10 To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-5480d71b-7995084
[ "autotrain", "evaluation", "region:us" ]
2022-06-28T00:17:11+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cifar10"], "eval_info": {"task": "image_multi_class_classification", "model": "jadohu/BEiT-finetuned", "metrics": [], "dataset_name": "cifar10", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"image": "img", "target": "label"}}}
2022-06-28T00:18:12+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Image Classification * Model: jadohu/BEiT-finetuned * Dataset: cifar10 To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: jadohu/BEiT-finetuned\n* Dataset: cifar10\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: jadohu/BEiT-finetuned\n* Dataset: cifar10\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 81, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: jadohu/BEiT-finetuned\n* Dataset: cifar10\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
64a6340f2f15d378dc42a50b312714a713cb2f6a
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Image Classification * Model: karthiksv/vit-base-patch16-224-cifar10 * Dataset: cifar10 To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-5480d71b-7995085
[ "autotrain", "evaluation", "region:us" ]
2022-06-28T00:17:16+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cifar10"], "eval_info": {"task": "image_multi_class_classification", "model": "karthiksv/vit-base-patch16-224-cifar10", "metrics": [], "dataset_name": "cifar10", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"image": "img", "target": "label"}}}
2022-06-28T00:18:16+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Image Classification * Model: karthiksv/vit-base-patch16-224-cifar10 * Dataset: cifar10 To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: karthiksv/vit-base-patch16-224-cifar10\n* Dataset: cifar10\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: karthiksv/vit-base-patch16-224-cifar10\n* Dataset: cifar10\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 86, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: karthiksv/vit-base-patch16-224-cifar10\n* Dataset: cifar10\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
57848a71566164eb2818851a09ffe5373cfbbd87
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Image Classification * Model: karthiksv/vit-base-patch16-224-in21k-finetuned-cifar10 * Dataset: cifar10 To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-5480d71b-7995086
[ "autotrain", "evaluation", "region:us" ]
2022-06-28T00:17:22+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cifar10"], "eval_info": {"task": "image_multi_class_classification", "model": "karthiksv/vit-base-patch16-224-in21k-finetuned-cifar10", "metrics": [], "dataset_name": "cifar10", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"image": "img", "target": "label"}}}
2022-06-28T00:18:31+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Image Classification * Model: karthiksv/vit-base-patch16-224-in21k-finetuned-cifar10 * Dataset: cifar10 To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: karthiksv/vit-base-patch16-224-in21k-finetuned-cifar10\n* Dataset: cifar10\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: karthiksv/vit-base-patch16-224-in21k-finetuned-cifar10\n* Dataset: cifar10\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 94, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: karthiksv/vit-base-patch16-224-in21k-finetuned-cifar10\n* Dataset: cifar10\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
6ddee40ebb1ba898a18f2613d0e9669babd3aee1
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Image Classification * Model: michaelbenayoun/vit-base-beans * Dataset: cifar10 To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-5480d71b-7995087
[ "autotrain", "evaluation", "region:us" ]
2022-06-28T00:17:29+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cifar10"], "eval_info": {"task": "image_multi_class_classification", "model": "michaelbenayoun/vit-base-beans", "metrics": [], "dataset_name": "cifar10", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"image": "img", "target": "label"}}}
2022-06-28T00:18:39+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Image Classification * Model: michaelbenayoun/vit-base-beans * Dataset: cifar10 To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: michaelbenayoun/vit-base-beans\n* Dataset: cifar10\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: michaelbenayoun/vit-base-beans\n* Dataset: cifar10\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 83, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: michaelbenayoun/vit-base-beans\n* Dataset: cifar10\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
deead74447b2beae48f22d348f9c7ebd2865b661
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Multi-class Image Classification
* Model: tanlq/vit-base-patch16-224-in21k-finetuned-cifar10
* Dataset: cifar10

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-5480d71b-7995089
[ "autotrain", "evaluation", "region:us" ]
2022-06-28T00:17:42+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cifar10"], "eval_info": {"task": "image_multi_class_classification", "model": "tanlq/vit-base-patch16-224-in21k-finetuned-cifar10", "metrics": [], "dataset_name": "cifar10", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"image": "img", "target": "label"}}}
2022-06-28T00:18:50+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by AutoTrain for the following task and dataset:

* Task: Multi-class Image Classification
* Model: tanlq/vit-base-patch16-224-in21k-finetuned-cifar10
* Dataset: cifar10

To run new evaluation jobs, visit Hugging Face's automatic model evaluator.

## Contributions

Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: tanlq/vit-base-patch16-224-in21k-finetuned-cifar10\n* Dataset: cifar10\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: tanlq/vit-base-patch16-224-in21k-finetuned-cifar10\n* Dataset: cifar10\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 94, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Image Classification\n* Model: tanlq/vit-base-patch16-224-in21k-finetuned-cifar10\n* Dataset: cifar10\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
187bd9f5e786d80f64b3d372386e330ae36d8488
# Dataset Card for Taskmaster-1

- **Repository:** https://github.com/google-research-datasets/Taskmaster/tree/master/TM-1-2019
- **Paper:** https://arxiv.org/pdf/1909.05358.pdf
- **Leaderboard:** None
- **Who transforms the dataset:** Qi Zhu (zhuq96 at gmail dot com)

To use this dataset, you need to install the [ConvLab-3](https://github.com/ConvLab/ConvLab-3) platform first. Then you can load the dataset via:

```
from convlab.util import load_dataset, load_ontology, load_database
dataset = load_dataset('tm1')
ontology = load_ontology('tm1')
database = load_database('tm1')
```

For more usage please refer to [here](https://github.com/ConvLab/ConvLab-3/tree/master/data/unified_datasets).

### Dataset Summary

The original dataset consists of 13,215 task-based dialogs, including 5,507 spoken and 7,708 written dialogs created with two distinct procedures. Each conversation falls into one of six domains: ordering pizza, creating auto repair appointments, setting up ride service, ordering movie tickets, ordering coffee drinks and making restaurant reservations.

- **How to get the transformed data from original data:**
  - Download [master.zip](https://github.com/google-research-datasets/Taskmaster/archive/refs/heads/master.zip).
  - Run `python preprocess.py` in the current directory.
- **Main changes of the transformation:**
  - Remove dialogs that are empty or only contain one speaker.
  - Split woz-dialogs into train/validation/test randomly (8:1:1). The split of self-dialogs follows the original dataset.
  - Merge continuous turns by the same speaker (ignore repeated turns).
  - Annotate `dialogue acts` according to the original segment annotations. Add `intent` annotation (inform/accept/reject). The type of a `dialogue act` is set to `non-categorical` if the original segment annotation includes a specified `slot`. Otherwise, the type is set to `binary` (and the `slot` and `value` are empty), since it indicates a general reference to a transaction, e.g. "OK your pizza has been ordered". If multiple spans overlap, we only keep the shortest one, since we found that this simple strategy can reduce the noise in annotation.
  - Add `domain`, `intent`, and `slot` descriptions.
  - Add `state` by accumulating `non-categorical dialogue acts` in the order that they appear, except those whose intents are **reject**.
  - Keep the first annotation, since each conversation was annotated by two workers.
- **Annotations:**
  - dialogue acts, state.

### Supported Tasks and Leaderboards

NLU, DST, Policy, NLG

### Languages

English

### Data Splits

| split      | dialogues | utterances | avg_utt | avg_tokens | avg_domains | cat slot match(state) | cat slot match(goal) | cat slot match(dialogue act) | non-cat slot span(dialogue act) |
|------------|-----------|------------|---------|------------|-------------|-----------------------|----------------------|------------------------------|---------------------------------|
| train      | 10535     | 223322     | 21.2    | 8.75       | 1           | -                     | -                    | -                            | 100                             |
| validation | 1318      | 27903      | 21.17   | 8.75       | 1           | -                     | -                    | -                            | 100                             |
| test       | 1322      | 27660      | 20.92   | 8.87       | 1           | -                     | -                    | -                            | 100                             |
| all        | 13175     | 278885     | 21.17   | 8.76       | 1           | -                     | -                    | -                            | 100                             |

6 domains: ['uber_lyft', 'movie_ticket', 'restaurant_reservation', 'coffee_ordering', 'pizza_ordering', 'auto_repair']

- **cat slot match**: how many values of categorical slots are in the possible values of the ontology, in percentage.
- **non-cat slot span**: how many values of non-categorical slots have span annotation, in percentage.

### Citation

```
@inproceedings{byrne-etal-2019-taskmaster,
    title = {Taskmaster-1: Toward a Realistic and Diverse Dialog Dataset},
    author = {Bill Byrne and Karthik Krishnamoorthi and Chinnadhurai Sankar and Arvind Neelakantan and Daniel Duckworth and Semih Yavuz and Ben Goodrich and Amit Dubey and Kyu-Young Kim and Andy Cedilnik},
    booktitle = {2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing},
    address = {Hong Kong},
    year = {2019}
}
```

### Licensing Information

[**CC BY 4.0**](https://creativecommons.org/licenses/by/4.0/)
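To make the annotation scheme above concrete, here is a minimal, hedged sketch of walking the transformed data in ConvLab-3's unified format. The field names (`turns`, `speaker`, `utterance`, `dialogue_acts`, `state`) follow the unified-datasets documentation linked above, but treat them as assumptions if your ConvLab-3 version differs.

```
# Hedged sketch: inspect dialogue acts and state in the unified format.
# Field names are assumed from the ConvLab-3 unified-datasets docs.
from convlab.util import load_dataset

dataset = load_dataset('tm1')
dialog = dataset['train'][0]
for turn in dialog['turns']:
    print(turn['speaker'], ':', turn['utterance'])
    # 'binary' acts carry empty slot/value; 'non-categorical' acts carry spans.
    print('  acts:', turn['dialogue_acts'])
    if turn['speaker'] == 'user':
        print('  state:', turn['state'])
```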
ConvLab/tm1
[ "task_categories:conversational", "multilinguality:monolingual", "size_categories:10K<n<100K", "language:en", "license:cc-by-4.0", "arxiv:1909.05358", "region:us" ]
2022-06-28T00:31:11+00:00
{"language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "task_categories": ["conversational"], "pretty_name": "Taskmaster-1"}
2022-11-25T09:13:02+00:00
[ "1909.05358" ]
[ "en" ]
TAGS #task_categories-conversational #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #arxiv-1909.05358 #region-us
Dataset Card for Taskmaster-1
=============================

* Repository: URL
* Paper: URL
* Leaderboard: None
* Who transforms the dataset: Qi Zhu (zhuq96 at gmail dot com)

To use this dataset, you need to install the ConvLab-3 platform first. Then you can load the dataset via:

For more usage please refer to here.

### Dataset Summary

The original dataset consists of 13,215 task-based dialogs, including 5,507 spoken and 7,708 written dialogs created with two distinct procedures. Each conversation falls into one of six domains: ordering pizza, creating auto repair appointments, setting up ride service, ordering movie tickets, ordering coffee drinks and making restaurant reservations.

* How to get the transformed data from original data:
    + Download URL.
    + Run 'python URL' in the current directory.
* Main changes of the transformation:
    + Remove dialogs that are empty or only contain one speaker.
    + Split woz-dialogs into train/validation/test randomly (8:1:1). The split of self-dialogs follows the original dataset.
    + Merge continuous turns by the same speaker (ignore repeated turns).
    + Annotate 'dialogue acts' according to the original segment annotations. Add 'intent' annotation (inform/accept/reject). The type of a 'dialogue act' is set to 'non-categorical' if the original segment annotation includes a specified 'slot'. Otherwise, the type is set to 'binary' (and the 'slot' and 'value' are empty), since it indicates a general reference to a transaction, e.g. "OK your pizza has been ordered". If multiple spans overlap, we only keep the shortest one, since we found that this simple strategy can reduce the noise in annotation.
    + Add 'domain', 'intent', and 'slot' descriptions.
    + Add 'state' by accumulating 'non-categorical dialogue acts' in the order that they appear, except those whose intents are reject.
    + Keep the first annotation, since each conversation was annotated by two workers.
* Annotations:
    + dialogue acts, state.

### Supported Tasks and Leaderboards

NLU, DST, Policy, NLG

### Languages

English

### Data Splits

6 domains: ['uber\_lyft', 'movie\_ticket', 'restaurant\_reservation', 'coffee\_ordering', 'pizza\_ordering', 'auto\_repair']

* cat slot match: how many values of categorical slots are in the possible values of the ontology, in percentage.
* non-cat slot span: how many values of non-categorical slots have span annotation, in percentage.

### Licensing Information

CC BY 4.0
[ "### Dataset Summary\n\n\nThe original dataset consists of 13,215 task-based dialogs, including 5,507 spoken and 7,708 written dialogs created with two distinct procedures. Each conversation falls into one of six domains: ordering pizza, creating auto repair appointments, setting up ride service, ordering movie tickets, ordering coffee drinks and making restaurant reservations.\n\n\n* How to get the transformed data from original data:\n\t+ Download URL.\n\t+ Run 'python URL' in the current directory.\n* Main changes of the transformation:\n\t+ Remove dialogs that are empty or only contain one speaker.\n\t+ Split woz-dialogs into train/validation/test randomly (8:1:1). The split of self-dialogs is followed the original dataset.\n\t+ Merge continuous turns by the same speaker (ignore repeated turns).\n\t+ Annotate 'dialogue acts' according to the original segment annotations. Add 'intent' annotation (inform/accept/reject). The type of 'dialogue act' is set to 'non-categorical' if the original segment annotation includes a specified 'slot'. Otherwise, the type is set to 'binary' (and the 'slot' and 'value' are empty) since it means general reference to a transaction, e.g. \"OK your pizza has been ordered\". If there are multiple spans overlapping, we only keep the shortest one, since we found that this simple strategy can reduce the noise in annotation.\n\t+ Add 'domain', 'intent', and 'slot' descriptions.\n\t+ Add 'state' by accumulate 'non-categorical dialogue acts' in the order that they appear, except those whose intents are reject.\n\t+ Keep the first annotation since each conversation was annotated by two workers.\n* Annotations:\n\t+ dialogue acts, state.", "### Supported Tasks and Leaderboards\n\n\nNLU, DST, Policy, NLG", "### Languages\n\n\nEnglish", "### Data Splits\n\n\n\n6 domains: ['uber\\_lyft', 'movie\\_ticket', 'restaurant\\_reservation', 'coffee\\_ordering', 'pizza\\_ordering', 'auto\\_repair']\n\n\n* cat slot match: how many values of categorical slots are in the possible values of ontology in percentage.\n* non-cat slot span: how many values of non-categorical slots have span annotation in percentage.", "### Licensing Information\n\n\nCC BY 4.0" ]
[ "TAGS\n#task_categories-conversational #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #arxiv-1909.05358 #region-us \n", "### Dataset Summary\n\n\nThe original dataset consists of 13,215 task-based dialogs, including 5,507 spoken and 7,708 written dialogs created with two distinct procedures. Each conversation falls into one of six domains: ordering pizza, creating auto repair appointments, setting up ride service, ordering movie tickets, ordering coffee drinks and making restaurant reservations.\n\n\n* How to get the transformed data from original data:\n\t+ Download URL.\n\t+ Run 'python URL' in the current directory.\n* Main changes of the transformation:\n\t+ Remove dialogs that are empty or only contain one speaker.\n\t+ Split woz-dialogs into train/validation/test randomly (8:1:1). The split of self-dialogs is followed the original dataset.\n\t+ Merge continuous turns by the same speaker (ignore repeated turns).\n\t+ Annotate 'dialogue acts' according to the original segment annotations. Add 'intent' annotation (inform/accept/reject). The type of 'dialogue act' is set to 'non-categorical' if the original segment annotation includes a specified 'slot'. Otherwise, the type is set to 'binary' (and the 'slot' and 'value' are empty) since it means general reference to a transaction, e.g. \"OK your pizza has been ordered\". If there are multiple spans overlapping, we only keep the shortest one, since we found that this simple strategy can reduce the noise in annotation.\n\t+ Add 'domain', 'intent', and 'slot' descriptions.\n\t+ Add 'state' by accumulate 'non-categorical dialogue acts' in the order that they appear, except those whose intents are reject.\n\t+ Keep the first annotation since each conversation was annotated by two workers.\n* Annotations:\n\t+ dialogue acts, state.", "### Supported Tasks and Leaderboards\n\n\nNLU, DST, Policy, NLG", "### Languages\n\n\nEnglish", "### Data Splits\n\n\n\n6 domains: ['uber\\_lyft', 'movie\\_ticket', 'restaurant\\_reservation', 'coffee\\_ordering', 'pizza\\_ordering', 'auto\\_repair']\n\n\n* cat slot match: how many values of categorical slots are in the possible values of ontology in percentage.\n* non-cat slot span: how many values of non-categorical slots have span annotation in percentage.", "### Licensing Information\n\n\nCC BY 4.0" ]
[ 57, 422, 20, 5, 109, 9 ]
[ "passage: TAGS\n#task_categories-conversational #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #arxiv-1909.05358 #region-us \n### Dataset Summary\n\n\nThe original dataset consists of 13,215 task-based dialogs, including 5,507 spoken and 7,708 written dialogs created with two distinct procedures. Each conversation falls into one of six domains: ordering pizza, creating auto repair appointments, setting up ride service, ordering movie tickets, ordering coffee drinks and making restaurant reservations.\n\n\n* How to get the transformed data from original data:\n\t+ Download URL.\n\t+ Run 'python URL' in the current directory.\n* Main changes of the transformation:\n\t+ Remove dialogs that are empty or only contain one speaker.\n\t+ Split woz-dialogs into train/validation/test randomly (8:1:1). The split of self-dialogs is followed the original dataset.\n\t+ Merge continuous turns by the same speaker (ignore repeated turns).\n\t+ Annotate 'dialogue acts' according to the original segment annotations. Add 'intent' annotation (inform/accept/reject). The type of 'dialogue act' is set to 'non-categorical' if the original segment annotation includes a specified 'slot'. Otherwise, the type is set to 'binary' (and the 'slot' and 'value' are empty) since it means general reference to a transaction, e.g. \"OK your pizza has been ordered\". If there are multiple spans overlapping, we only keep the shortest one, since we found that this simple strategy can reduce the noise in annotation.\n\t+ Add 'domain', 'intent', and 'slot' descriptions.\n\t+ Add 'state' by accumulate 'non-categorical dialogue acts' in the order that they appear, except those whose intents are reject.\n\t+ Keep the first annotation since each conversation was annotated by two workers.\n* Annotations:\n\t+ dialogue acts, state.### Supported Tasks and Leaderboards\n\n\nNLU, DST, Policy, NLG### Languages\n\n\nEnglish" ]
896fbf043f8a7a315f42a9855093e9889d13a006
# Dataset Card for WOZ 2.0

- **Repository:** https://github.com/nmrksic/neural-belief-tracker/tree/master/data/woz
- **Paper:** https://aclanthology.org/P17-1163.pdf
- **Leaderboard:** None
- **Who transforms the dataset:** Qi Zhu (zhuq96 at gmail dot com)

To use this dataset, you need to install the [ConvLab-3](https://github.com/ConvLab/ConvLab-3) platform first. Then you can load the dataset via:

```
from convlab.util import load_dataset, load_ontology, load_database
dataset = load_dataset('woz')
ontology = load_ontology('woz')
database = load_database('woz')
```

For more usage please refer to [here](https://github.com/ConvLab/ConvLab-3/tree/master/data/unified_datasets).

### Dataset Summary

A Wizard-of-Oz dialogue dataset in the restaurant domain, comprising 1,200 dialogues between a user and a system, originally collected for research on neural belief tracking (dialogue state tracking).

- **How to get the transformed data from original data:**
  - download `woz_[train|validate|test]_en.json` from https://github.com/nmrksic/neural-belief-tracker/tree/master/data/woz and save to the `woz` dir in the current directory.
  - Run `python preprocess.py` in the current directory.
- **Main changes of the transformation:**
  - domain is set to **restaurant**.
  - normalize the values of categorical slots in state and dialogue acts.
  - `belief_states` in the WOZ dataset contains `request` intents, which are ignored in processing.
  - use simple string match to find value spans of non-categorical slots.
- **Annotations:**
  - User dialogue acts, state

### Supported Tasks and Leaderboards

NLU, DST, E2E

### Languages

English

### Data Splits

| split      | dialogues | utterances | avg_utt | avg_tokens | avg_domains | cat slot match(state) | cat slot match(goal) | cat slot match(dialogue act) | non-cat slot span(dialogue act) |
|------------|-----------|------------|---------|------------|-------------|-----------------------|----------------------|------------------------------|---------------------------------|
| train      | 600       | 4472       | 7.45    | 11.37      | 1           | 100                   | -                    | 100                          | 96.56                           |
| validation | 200       | 1460       | 7.3     | 11.28      | 1           | 100                   | -                    | 100                          | 95.52                           |
| test       | 400       | 2892       | 7.23    | 11.49      | 1           | 100                   | -                    | 100                          | 94.83                           |
| all        | 1200      | 8824       | 7.35    | 11.39      | 1           | 100                   | -                    | 100                          | 95.83                           |

1 domain: ['restaurant']

- **cat slot match**: how many values of categorical slots are in the possible values of the ontology, in percentage.
- **non-cat slot span**: how many values of non-categorical slots have span annotation, in percentage.

### Citation

```
@inproceedings{mrksic-etal-2017-neural,
    title = "Neural Belief Tracker: Data-Driven Dialogue State Tracking",
    author = "Mrk{\v{s}}i{\'c}, Nikola and {\'O} S{\'e}aghdha, Diarmuid and Wen, Tsung-Hsien and Thomson, Blaise and Young, Steve",
    booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2017",
    address = "Vancouver, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P17-1163",
    doi = "10.18653/v1/P17-1163",
    pages = "1777--1788",
}
```

### Licensing Information

Apache License, Version 2.0
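The "simple string match" used above to produce span annotations can be pictured with a short, hypothetical helper; `find_span` is illustrative only, not a function from the actual preprocess.py.

```
# Hedged sketch of the span-finding step: locate a slot value in the
# utterance by case-insensitive match, returning character-level offsets.
def find_span(utterance: str, value: str):
    start = utterance.lower().find(value.lower())
    if start == -1:
        return None  # no span annotation possible for this value
    return start, start + len(value)

assert find_span("I want a cheap restaurant", "Cheap") == (9, 14)
```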
ConvLab/woz
[ "task_categories:conversational", "multilinguality:monolingual", "size_categories:1K<n<10K", "language:en", "license:apache-2.0", "region:us" ]
2022-06-28T00:42:52+00:00
{"language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "task_categories": ["conversational"], "pretty_name": "WOZ 2.0"}
2022-11-25T09:17:30+00:00
[]
[ "en" ]
TAGS #task_categories-conversational #multilinguality-monolingual #size_categories-1K<n<10K #language-English #license-apache-2.0 #region-us
Dataset Card for WOZ 2.0
========================

* Repository: URL
* Paper: URL
* Leaderboard: None
* Who transforms the dataset: Qi Zhu (zhuq96 at gmail dot com)

To use this dataset, you need to install the ConvLab-3 platform first. Then you can load the dataset via:

For more usage please refer to here.

### Dataset Summary

A Wizard-of-Oz dialogue dataset in the restaurant domain, comprising 1,200 dialogues between a user and a system, originally collected for research on neural belief tracking (dialogue state tracking).

* How to get the transformed data from original data:
    + download 'woz\_[train|validate|test]\_en.json' from URL and save to the 'woz' dir in the current directory.
    + Run 'python URL' in the current directory.
* Main changes of the transformation:
    + domain is set to restaurant.
    + normalize the values of categorical slots in state and dialogue acts.
    + 'belief\_states' in the WOZ dataset contains 'request' intents, which are ignored in processing.
    + use simple string match to find value spans of non-categorical slots.
* Annotations:
    + User dialogue acts, state

### Supported Tasks and Leaderboards

NLU, DST, E2E

### Languages

English

### Data Splits

1 domain: ['restaurant']

* cat slot match: how many values of categorical slots are in the possible values of the ontology, in percentage.
* non-cat slot span: how many values of non-categorical slots have span annotation, in percentage.

### Licensing Information

Apache License, Version 2.0
[ "### Dataset Summary\n\n\nDescribe the dataset.\n\n\n* How to get the transformed data from original data:\n\n\n\t+ download 'woz\\_[train|validate|test]\\_en.json' from URL and save to 'woz' dir in the current directory.\n\t+ Run 'python URL' in the current directory.\n* Main changes of the transformation:\n\n\n\t+ domain is set to restaurant.\n\t+ normalize the value of categorical slots in state and dialogue acts.\n\t+ 'belief\\_states' in WOZ dataset contains 'request' intents, which are ignored in processing.\n\t+ use simple string match to find value spans of non-categorical slots.\n* Annotations:\n\n\n\t+ User dialogue acts, state", "### Supported Tasks and Leaderboards\n\n\nNLU, DST, E2E", "### Languages\n\n\nEnglish", "### Data Splits\n\n\n\n1 domains: ['restaurant']\n\n\n* cat slot match: how many values of categorical slots are in the possible values of ontology in percentage.\n* non-cat slot span: how many values of non-categorical slots have span annotation in percentage.", "### Licensing Information\n\n\nApache License, Version 2.0" ]
[ "TAGS\n#task_categories-conversational #multilinguality-monolingual #size_categories-1K<n<10K #language-English #license-apache-2.0 #region-us \n", "### Dataset Summary\n\n\nDescribe the dataset.\n\n\n* How to get the transformed data from original data:\n\n\n\t+ download 'woz\\_[train|validate|test]\\_en.json' from URL and save to 'woz' dir in the current directory.\n\t+ Run 'python URL' in the current directory.\n* Main changes of the transformation:\n\n\n\t+ domain is set to restaurant.\n\t+ normalize the value of categorical slots in state and dialogue acts.\n\t+ 'belief\\_states' in WOZ dataset contains 'request' intents, which are ignored in processing.\n\t+ use simple string match to find value spans of non-categorical slots.\n* Annotations:\n\n\n\t+ User dialogue acts, state", "### Supported Tasks and Leaderboards\n\n\nNLU, DST, E2E", "### Languages\n\n\nEnglish", "### Data Splits\n\n\n\n1 domains: ['restaurant']\n\n\n* cat slot match: how many values of categorical slots are in the possible values of ontology in percentage.\n* non-cat slot span: how many values of non-categorical slots have span annotation in percentage.", "### Licensing Information\n\n\nApache License, Version 2.0" ]
[ 48, 167, 19, 5, 61, 12 ]
[ "passage: TAGS\n#task_categories-conversational #multilinguality-monolingual #size_categories-1K<n<10K #language-English #license-apache-2.0 #region-us \n### Dataset Summary\n\n\nDescribe the dataset.\n\n\n* How to get the transformed data from original data:\n\n\n\t+ download 'woz\\_[train|validate|test]\\_en.json' from URL and save to 'woz' dir in the current directory.\n\t+ Run 'python URL' in the current directory.\n* Main changes of the transformation:\n\n\n\t+ domain is set to restaurant.\n\t+ normalize the value of categorical slots in state and dialogue acts.\n\t+ 'belief\\_states' in WOZ dataset contains 'request' intents, which are ignored in processing.\n\t+ use simple string match to find value spans of non-categorical slots.\n* Annotations:\n\n\n\t+ User dialogue acts, state### Supported Tasks and Leaderboards\n\n\nNLU, DST, E2E### Languages\n\n\nEnglish### Data Splits\n\n\n\n1 domains: ['restaurant']\n\n\n* cat slot match: how many values of categorical slots are in the possible values of ontology in percentage.\n* non-cat slot span: how many values of non-categorical slots have span annotation in percentage.### Licensing Information\n\n\nApache License, Version 2.0" ]
ee6eb00020857615a5a4f86b6972287b33417756
# Dataset Card for Camrest

- **Repository:** https://www.repository.cam.ac.uk/handle/1810/260970
- **Paper:** https://aclanthology.org/D16-1233/
- **Leaderboard:** None
- **Who transforms the dataset:** Qi Zhu (zhuq96 at gmail dot com)

To use this dataset, you need to install the [ConvLab-3](https://github.com/ConvLab/ConvLab-3) platform first. Then you can load the dataset via:

```
from convlab.util import load_dataset, load_ontology, load_database
dataset = load_dataset('camrest')
ontology = load_ontology('camrest')
database = load_database('camrest')
```

For more usage please refer to [here](https://github.com/ConvLab/ConvLab-3/tree/master/data/unified_datasets).

### Dataset Summary

A Cambridge restaurant-domain dialogue dataset collected for developing neural-network-based dialogue systems. The two papers published based on this dataset are:

1. A Network-based End-to-End Trainable Task-oriented Dialogue System
2. Conditional Generation and Snapshot Learning in Neural Dialogue Systems

The dataset was collected based on the Wizard of Oz experiment on Amazon MTurk. Each dialogue contains a goal label and several exchanges between a customer and the system. Each user turn was labelled with a set of slot-value pairs giving a coarse representation of the dialogue state (the `slu` field). There are 676 dialogues in total; most are finished, but some are not.

- **How to get the transformed data from original data:**
  - Run `python preprocess.py` in the current directory. Needs `../../camrest/` as the original data.
- **Main changes of the transformation:**
  - Add dialogue act annotation according to the state change. This step was done by ConvLab-2, and we use the processed dialog acts here.
  - Rename `pricerange` to `price range`.
  - Add character-level span annotation for non-categorical slots.
- **Annotations:**
  - user goal, dialogue acts, state.

### Supported Tasks and Leaderboards

NLU, DST, Policy, NLG, E2E, User simulator

### Languages

English

### Data Splits

| split      | dialogues | utterances | avg_utt | avg_tokens | avg_domains | cat slot match(state) | cat slot match(goal) | cat slot match(dialogue act) | non-cat slot span(dialogue act) |
|------------|-----------|------------|---------|------------|-------------|-----------------------|----------------------|------------------------------|---------------------------------|
| train      | 406       | 3342       | 8.23    | 10.6       | 1           | 100                   | 100                  | 100                          | 99.83                           |
| validation | 135       | 1076       | 7.97    | 11.26      | 1           | 100                   | 100                  | 100                          | 100                             |
| test       | 135       | 1070       | 7.93    | 11.01      | 1           | 100                   | 100                  | 100                          | 100                             |
| all        | 676       | 5488       | 8.12    | 10.81      | 1           | 100                   | 100                  | 100                          | 99.9                            |

1 domain: ['restaurant']

- **cat slot match**: how many values of categorical slots are in the possible values of the ontology, in percentage.
- **non-cat slot span**: how many values of non-categorical slots have span annotation, in percentage.

### Citation

```
@inproceedings{wen-etal-2016-conditional,
    title = "Conditional Generation and Snapshot Learning in Neural Dialogue Systems",
    author = "Wen, Tsung-Hsien and Ga{\v{s}}i{\'c}, Milica and Mrk{\v{s}}i{\'c}, Nikola and Rojas-Barahona, Lina M. and Su, Pei-Hao and Ultes, Stefan and Vandyke, David and Young, Steve",
    booktitle = "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2016",
    address = "Austin, Texas",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D16-1233",
    doi = "10.18653/v1/D16-1233",
    pages = "2153--2162",
}
```

### Licensing Information

[**CC BY 4.0**](https://creativecommons.org/licenses/by/4.0/)
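The card notes that user dialogue acts were derived from state changes (a ConvLab-2 step). The idea can be sketched as a dictionary diff between consecutive states; this illustrates the principle only and is not the actual ConvLab-2 code.

```
# Hedged sketch: derive "inform" acts from the change between two states.
def state_change_to_acts(prev_state: dict, cur_state: dict):
    acts = []
    for slot, value in cur_state.items():
        if value and prev_state.get(slot) != value:
            acts.append(('inform', 'restaurant', slot, value))
    return acts

prev = {'food': '', 'price range': ''}
cur = {'food': 'chinese', 'price range': 'cheap'}
print(state_change_to_acts(prev, cur))
# [('inform', 'restaurant', 'food', 'chinese'),
#  ('inform', 'restaurant', 'price range', 'cheap')]
```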
ConvLab/camrest
[ "task_categories:conversational", "multilinguality:monolingual", "size_categories:n<1K", "language:en", "license:cc-by-4.0", "region:us" ]
2022-06-28T00:45:51+00:00
{"language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "task_categories": ["conversational"], "pretty_name": "Camrest"}
2022-11-25T09:03:27+00:00
[]
[ "en" ]
TAGS #task_categories-conversational #multilinguality-monolingual #size_categories-n<1K #language-English #license-cc-by-4.0 #region-us
Dataset Card for Camrest
========================

* Repository: URL
* Paper: URL
* Leaderboard: None
* Who transforms the dataset: Qi Zhu (zhuq96 at gmail dot com)

To use this dataset, you need to install the ConvLab-3 platform first. Then you can load the dataset via:

For more usage please refer to here.

### Dataset Summary

A Cambridge restaurant-domain dialogue dataset collected for developing neural-network-based dialogue systems. The two papers published based on this dataset are: 1. A Network-based End-to-End Trainable Task-oriented Dialogue System 2. Conditional Generation and Snapshot Learning in Neural Dialogue Systems. The dataset was collected based on the Wizard of Oz experiment on Amazon MTurk. Each dialogue contains a goal label and several exchanges between a customer and the system. Each user turn was labelled with a set of slot-value pairs giving a coarse representation of the dialogue state (the 'slu' field). There are 676 dialogues in total; most are finished, but some are not.

* How to get the transformed data from original data:
    + Run 'python URL' in the current directory. Needs '../../camrest/' as the original data.
* Main changes of the transformation:
    + Add dialogue act annotation according to the state change. This step was done by ConvLab-2, and we use the processed dialog acts here.
    + Rename 'pricerange' to 'price range'.
    + Add character-level span annotation for non-categorical slots.
* Annotations:
    + user goal, dialogue acts, state.

### Supported Tasks and Leaderboards

NLU, DST, Policy, NLG, E2E, User simulator

### Languages

English

### Data Splits

1 domain: ['restaurant']

* cat slot match: how many values of categorical slots are in the possible values of the ontology, in percentage.
* non-cat slot span: how many values of non-categorical slots have span annotation, in percentage.

### Licensing Information

CC BY 4.0
[ "### Dataset Summary\n\n\nCambridge restaurant dialogue domain dataset collected for developing neural network based dialogue systems. The two papers published based on this dataset are: 1. A Network-based End-to-End Trainable Task-oriented Dialogue System 2. Conditional Generation and Snapshot Learning in Neural Dialogue Systems. The dataset was collected based on the Wizard of Oz experiment on Amazon MTurk. Each dialogue contains a goal label and several exchanges between a customer and the system. Each user turn was labelled by a set of slot-value pairs representing a coarse representation of dialogue state ('slu' field). There are in total 676 dialogue, in which most of the dialogues are finished but some of dialogues were not.\n\n\n* How to get the transformed data from original data:\n\t+ Run 'python URL' in the current directory. Need '../../camrest/' as the original data.\n* Main changes of the transformation:\n\t+ Add dialogue act annotation according to the state change. This step was done by ConvLab-2 and we use the processed dialog acts here.\n\t+ Rename 'pricerange' to 'price range'\n\t+ Add character level span annotation for non-categorical slots.\n* Annotations:\n\t+ user goal, dialogue acts, state.", "### Supported Tasks and Leaderboards\n\n\nNLU, DST, Policy, NLG, E2E, User simulator", "### Languages\n\n\nEnglish", "### Data Splits\n\n\n\n1 domains: ['restaurant']\n\n\n* cat slot match: how many values of categorical slots are in the possible values of ontology in percentage.\n* non-cat slot span: how many values of non-categorical slots have span annotation in percentage.", "### Licensing Information\n\n\nCC BY 4.0" ]
[ "TAGS\n#task_categories-conversational #multilinguality-monolingual #size_categories-n<1K #language-English #license-cc-by-4.0 #region-us \n", "### Dataset Summary\n\n\nCambridge restaurant dialogue domain dataset collected for developing neural network based dialogue systems. The two papers published based on this dataset are: 1. A Network-based End-to-End Trainable Task-oriented Dialogue System 2. Conditional Generation and Snapshot Learning in Neural Dialogue Systems. The dataset was collected based on the Wizard of Oz experiment on Amazon MTurk. Each dialogue contains a goal label and several exchanges between a customer and the system. Each user turn was labelled by a set of slot-value pairs representing a coarse representation of dialogue state ('slu' field). There are in total 676 dialogue, in which most of the dialogues are finished but some of dialogues were not.\n\n\n* How to get the transformed data from original data:\n\t+ Run 'python URL' in the current directory. Need '../../camrest/' as the original data.\n* Main changes of the transformation:\n\t+ Add dialogue act annotation according to the state change. This step was done by ConvLab-2 and we use the processed dialog acts here.\n\t+ Rename 'pricerange' to 'price range'\n\t+ Add character level span annotation for non-categorical slots.\n* Annotations:\n\t+ user goal, dialogue acts, state.", "### Supported Tasks and Leaderboards\n\n\nNLU, DST, Policy, NLG, E2E, User simulator", "### Languages\n\n\nEnglish", "### Data Splits\n\n\n\n1 domains: ['restaurant']\n\n\n* cat slot match: how many values of categorical slots are in the possible values of ontology in percentage.\n* non-cat slot span: how many values of non-categorical slots have span annotation in percentage.", "### Licensing Information\n\n\nCC BY 4.0" ]
[ 47, 289, 28, 5, 61, 9 ]
[ "passage: TAGS\n#task_categories-conversational #multilinguality-monolingual #size_categories-n<1K #language-English #license-cc-by-4.0 #region-us \n### Dataset Summary\n\n\nCambridge restaurant dialogue domain dataset collected for developing neural network based dialogue systems. The two papers published based on this dataset are: 1. A Network-based End-to-End Trainable Task-oriented Dialogue System 2. Conditional Generation and Snapshot Learning in Neural Dialogue Systems. The dataset was collected based on the Wizard of Oz experiment on Amazon MTurk. Each dialogue contains a goal label and several exchanges between a customer and the system. Each user turn was labelled by a set of slot-value pairs representing a coarse representation of dialogue state ('slu' field). There are in total 676 dialogue, in which most of the dialogues are finished but some of dialogues were not.\n\n\n* How to get the transformed data from original data:\n\t+ Run 'python URL' in the current directory. Need '../../camrest/' as the original data.\n* Main changes of the transformation:\n\t+ Add dialogue act annotation according to the state change. This step was done by ConvLab-2 and we use the processed dialog acts here.\n\t+ Rename 'pricerange' to 'price range'\n\t+ Add character level span annotation for non-categorical slots.\n* Annotations:\n\t+ user goal, dialogue acts, state.### Supported Tasks and Leaderboards\n\n\nNLU, DST, Policy, NLG, E2E, User simulator### Languages\n\n\nEnglish### Data Splits\n\n\n\n1 domains: ['restaurant']\n\n\n* cat slot match: how many values of categorical slots are in the possible values of ontology in percentage.\n* non-cat slot span: how many values of non-categorical slots have span annotation in percentage.### Licensing Information\n\n\nCC BY 4.0" ]
cdc314b156e7f7ffa81a1e7398f1f8a2e86c0095
# Dataset Card for Taskmaster-2

- **Repository:** https://github.com/google-research-datasets/Taskmaster/tree/master/TM-2-2020
- **Paper:** https://arxiv.org/pdf/1909.05358.pdf
- **Leaderboard:** None
- **Who transforms the dataset:** Qi Zhu (zhuq96 at gmail dot com)

To use this dataset, you need to install the [ConvLab-3](https://github.com/ConvLab/ConvLab-3) platform first. Then you can load the dataset via:

```
from convlab.util import load_dataset, load_ontology, load_database
dataset = load_dataset('tm2')
ontology = load_ontology('tm2')
database = load_database('tm2')
```

For more usage please refer to [here](https://github.com/ConvLab/ConvLab-3/tree/master/data/unified_datasets).

### Dataset Summary

The Taskmaster-2 dataset consists of 17,289 dialogs in seven domains. Unlike Taskmaster-1, which includes both written "self-dialogs" and spoken two-person dialogs, Taskmaster-2 consists entirely of spoken two-person dialogs. In addition, while Taskmaster-1 is almost exclusively task-based, Taskmaster-2 contains a good number of search- and recommendation-oriented dialogs, as seen for example in the restaurants, flights, hotels, and movies verticals. The music browsing and sports conversations are almost exclusively search- and recommendation-based. All dialogs in this release were created using a Wizard of Oz (WOz) methodology in which crowdsourced workers played the role of a 'user' and trained call center operators played the role of the 'assistant'. In this way, users were led to believe they were interacting with an automated system that “spoke” using text-to-speech (TTS) even though it was in fact a human behind the scenes. As a result, users could express themselves however they chose in the context of an automated interface.

- **How to get the transformed data from original data:**
  - Download [master.zip](https://github.com/google-research-datasets/Taskmaster/archive/refs/heads/master.zip).
  - Run `python preprocess.py` in the current directory.
- **Main changes of the transformation:**
  - Remove dialogs that are empty or only contain one speaker.
  - Split each domain's dialogs into train/validation/test randomly (8:1:1).
  - Merge continuous turns by the same speaker (ignore repeated turns).
  - Annotate `dialogue acts` according to the original segment annotations. Add `intent` annotation (`==inform`). The type of a `dialogue act` is set to `non-categorical` if the `slot` is not in `anno2slot` in `preprocess.py`. Otherwise, the type is set to `binary` (and the `value` is empty). If multiple spans overlap, we only keep the shortest one, since we found that this simple strategy can reduce the noise in annotation.
  - Add `domain`, `intent`, and `slot` descriptions.
  - Add `state` by accumulating `non-categorical dialogue acts` in the order that they appear.
  - Keep the first annotation, since each conversation was annotated by two workers.
- **Annotations:**
  - dialogue acts, state.

### Supported Tasks and Leaderboards

NLU, DST, Policy, NLG

### Languages

English

### Data Splits

| split      | dialogues | utterances | avg_utt | avg_tokens | avg_domains | cat slot match(state) | cat slot match(goal) | cat slot match(dialogue act) | non-cat slot span(dialogue act) |
|------------|-----------|------------|---------|------------|-------------|-----------------------|----------------------|------------------------------|---------------------------------|
| train      | 13838     | 234321     | 16.93   | 9.1        | 1           | -                     | -                    | -                            | 100                             |
| validation | 1731      | 29349      | 16.95   | 9.15       | 1           | -                     | -                    | -                            | 100                             |
| test       | 1734      | 29447      | 16.98   | 9.07       | 1           | -                     | -                    | -                            | 100                             |
| all        | 17303     | 293117     | 16.94   | 9.1        | 1           | -                     | -                    | -                            | 100                             |

7 domains: ['flights', 'food-ordering', 'hotels', 'movies', 'music', 'restaurant-search', 'sports']

- **cat slot match**: how many values of categorical slots are in the possible values of the ontology, in percentage.
- **non-cat slot span**: how many values of non-categorical slots have span annotation, in percentage.

### Citation

```
@inproceedings{byrne-etal-2019-taskmaster,
    title = {Taskmaster-1: Toward a Realistic and Diverse Dialog Dataset},
    author = {Bill Byrne and Karthik Krishnamoorthi and Chinnadhurai Sankar and Arvind Neelakantan and Daniel Duckworth and Semih Yavuz and Ben Goodrich and Amit Dubey and Kyu-Young Kim and Andy Cedilnik},
    booktitle = {2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing},
    address = {Hong Kong},
    year = {2019}
}
```

### Licensing Information

[**CC BY 4.0**](https://creativecommons.org/licenses/by/4.0/)
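"Merge continuous turns by the same speaker (ignore repeated turns)" can be read as the following sketch; it is a plausible rendering of that step, not the verbatim preprocess.py logic.

```
# Hedged sketch: collapse consecutive turns from the same speaker,
# skipping exact repeats, as described in the transformation notes.
def merge_turns(turns):
    merged = []
    for speaker, utt in turns:
        if merged and merged[-1][0] == speaker:
            if utt not in merged[-1][1]:          # ignore repeated turns
                merged[-1] = (speaker, merged[-1][1] + ' ' + utt)
        else:
            merged.append((speaker, utt))
    return merged

print(merge_turns([('user', 'hi'), ('user', 'hi'),
                   ('user', 'two tickets please'), ('system', 'sure')]))
# [('user', 'hi two tickets please'), ('system', 'sure')]
```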
ConvLab/tm2
[ "task_categories:conversational", "multilinguality:monolingual", "size_categories:10K<n<100K", "language:en", "license:cc-by-4.0", "arxiv:1909.05358", "region:us" ]
2022-06-28T00:47:59+00:00
{"language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "task_categories": ["conversational"], "pretty_name": "Taskmaster-2"}
2022-11-25T09:15:50+00:00
[ "1909.05358" ]
[ "en" ]
TAGS #task_categories-conversational #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #arxiv-1909.05358 #region-us
Dataset Card for Taskmaster-2
=============================

* Repository: URL
* Paper: URL
* Leaderboard: None
* Who transforms the dataset: Qi Zhu (zhuq96 at gmail dot com)

To use this dataset, you need to install the ConvLab-3 platform first. Then you can load the dataset via:

For more usage please refer to here.

### Dataset Summary

The Taskmaster-2 dataset consists of 17,289 dialogs in seven domains. Unlike Taskmaster-1, which includes both written "self-dialogs" and spoken two-person dialogs, Taskmaster-2 consists entirely of spoken two-person dialogs. In addition, while Taskmaster-1 is almost exclusively task-based, Taskmaster-2 contains a good number of search- and recommendation-oriented dialogs, as seen for example in the restaurants, flights, hotels, and movies verticals. The music browsing and sports conversations are almost exclusively search- and recommendation-based. All dialogs in this release were created using a Wizard of Oz (WOz) methodology in which crowdsourced workers played the role of a 'user' and trained call center operators played the role of the 'assistant'. In this way, users were led to believe they were interacting with an automated system that “spoke” using text-to-speech (TTS) even though it was in fact a human behind the scenes. As a result, users could express themselves however they chose in the context of an automated interface.

* How to get the transformed data from original data:
    + Download URL.
    + Run 'python URL' in the current directory.
* Main changes of the transformation:
    + Remove dialogs that are empty or only contain one speaker.
    + Split each domain's dialogs into train/validation/test randomly (8:1:1).
    + Merge continuous turns by the same speaker (ignore repeated turns).
    + Annotate 'dialogue acts' according to the original segment annotations. Add 'intent' annotation ('==inform'). The type of a 'dialogue act' is set to 'non-categorical' if the 'slot' is not in 'anno2slot' in 'URL'. Otherwise, the type is set to 'binary' (and the 'value' is empty). If multiple spans overlap, we only keep the shortest one, since we found that this simple strategy can reduce the noise in annotation.
    + Add 'domain', 'intent', and 'slot' descriptions.
    + Add 'state' by accumulating 'non-categorical dialogue acts' in the order that they appear.
    + Keep the first annotation, since each conversation was annotated by two workers.
* Annotations:
    + dialogue acts, state.

### Supported Tasks and Leaderboards

NLU, DST, Policy, NLG

### Languages

English

### Data Splits

7 domains: ['flights', 'food-ordering', 'hotels', 'movies', 'music', 'restaurant-search', 'sports']

* cat slot match: how many values of categorical slots are in the possible values of the ontology, in percentage.
* non-cat slot span: how many values of non-categorical slots have span annotation, in percentage.

### Licensing Information

CC BY 4.0
[ "### Dataset Summary\n\n\nThe Taskmaster-2 dataset consists of 17,289 dialogs in the seven domains. Unlike Taskmaster-1, which includes both written \"self-dialogs\" and spoken two-person dialogs, Taskmaster-2 consists entirely of spoken two-person dialogs. In addition, while Taskmaster-1 is almost exclusively task-based, Taskmaster-2 contains a good number of search- and recommendation-oriented dialogs, as seen for example in the restaurants, flights, hotels, and movies verticals. The music browsing and sports conversations are almost exclusively search- and recommendation-based. All dialogs in this release were created using a Wizard of Oz (WOz) methodology in which crowdsourced workers played the role of a 'user' and trained call center operators played the role of the 'assistant'. In this way, users were led to believe they were interacting with an automated system that “spoke” using text-to-speech (TTS) even though it was in fact a human behind the scenes. As a result, users could express themselves however they chose in the context of an automated interface.\n\n\n* How to get the transformed data from original data:\n\t+ Download URL.\n\t+ Run 'python URL' in the current directory.\n* Main changes of the transformation:\n\t+ Remove dialogs that are empty or only contain one speaker.\n\t+ Split each domain dialogs into train/validation/test randomly (8:1:1).\n\t+ Merge continuous turns by the same speaker (ignore repeated turns).\n\t+ Annotate 'dialogue acts' according to the original segment annotations. Add 'intent' annotation ('==inform'). The type of 'dialogue act' is set to 'non-categorical' if the 'slot' is not in 'anno2slot' in 'URL'). Otherwise, the type is set to 'binary' (and the 'value' is empty). If there are multiple spans overlapping, we only keep the shortest one, since we found that this simple strategy can reduce the noise in annotation.\n\t+ Add 'domain', 'intent', and 'slot' descriptions.\n\t+ Add 'state' by accumulate 'non-categorical dialogue acts' in the order that they appear.\n\t+ Keep the first annotation since each conversation was annotated by two workers.\n* Annotations:\n\t+ dialogue acts, state.", "### Supported Tasks and Leaderboards\n\n\nNLU, DST, Policy, NLG", "### Languages\n\n\nEnglish", "### Data Splits\n\n\n\n7 domains: ['flights', 'food-ordering', 'hotels', 'movies', 'music', 'restaurant-search', 'sports']\n\n\n* cat slot match: how many values of categorical slots are in the possible values of ontology in percentage.\n* non-cat slot span: how many values of non-categorical slots have span annotation in percentage.", "### Licensing Information\n\n\nCC BY 4.0" ]
[ "TAGS\n#task_categories-conversational #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #arxiv-1909.05358 #region-us \n", "### Dataset Summary\n\n\nThe Taskmaster-2 dataset consists of 17,289 dialogs in the seven domains. Unlike Taskmaster-1, which includes both written \"self-dialogs\" and spoken two-person dialogs, Taskmaster-2 consists entirely of spoken two-person dialogs. In addition, while Taskmaster-1 is almost exclusively task-based, Taskmaster-2 contains a good number of search- and recommendation-oriented dialogs, as seen for example in the restaurants, flights, hotels, and movies verticals. The music browsing and sports conversations are almost exclusively search- and recommendation-based. All dialogs in this release were created using a Wizard of Oz (WOz) methodology in which crowdsourced workers played the role of a 'user' and trained call center operators played the role of the 'assistant'. In this way, users were led to believe they were interacting with an automated system that “spoke” using text-to-speech (TTS) even though it was in fact a human behind the scenes. As a result, users could express themselves however they chose in the context of an automated interface.\n\n\n* How to get the transformed data from original data:\n\t+ Download URL.\n\t+ Run 'python URL' in the current directory.\n* Main changes of the transformation:\n\t+ Remove dialogs that are empty or only contain one speaker.\n\t+ Split each domain dialogs into train/validation/test randomly (8:1:1).\n\t+ Merge continuous turns by the same speaker (ignore repeated turns).\n\t+ Annotate 'dialogue acts' according to the original segment annotations. Add 'intent' annotation ('==inform'). The type of 'dialogue act' is set to 'non-categorical' if the 'slot' is not in 'anno2slot' in 'URL'). Otherwise, the type is set to 'binary' (and the 'value' is empty). If there are multiple spans overlapping, we only keep the shortest one, since we found that this simple strategy can reduce the noise in annotation.\n\t+ Add 'domain', 'intent', and 'slot' descriptions.\n\t+ Add 'state' by accumulate 'non-categorical dialogue acts' in the order that they appear.\n\t+ Keep the first annotation since each conversation was annotated by two workers.\n* Annotations:\n\t+ dialogue acts, state.", "### Supported Tasks and Leaderboards\n\n\nNLU, DST, Policy, NLG", "### Languages\n\n\nEnglish", "### Data Splits\n\n\n\n7 domains: ['flights', 'food-ordering', 'hotels', 'movies', 'music', 'restaurant-search', 'sports']\n\n\n* cat slot match: how many values of categorical slots are in the possible values of ontology in percentage.\n* non-cat slot span: how many values of non-categorical slots have span annotation in percentage.", "### Licensing Information\n\n\nCC BY 4.0" ]
[ 57, 549, 20, 5, 94, 9 ]
[ "passage: TAGS\n#task_categories-conversational #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #arxiv-1909.05358 #region-us \n" ]
910584e5451e2e439bb2a07b8544ecb42ff8835b
# Dataset Card for Taskmaster-3

- **Repository:** https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020
- **Paper:** https://aclanthology.org/2021.acl-long.55.pdf
- **Leaderboard:** None
- **Who transforms the dataset:** Qi Zhu (zhuq96 at gmail dot com)

To use this dataset, you need to install the [ConvLab-3](https://github.com/ConvLab/ConvLab-3) platform first. Then you can load the dataset via:

```
from convlab.util import load_dataset, load_ontology, load_database
dataset = load_dataset('tm3')
ontology = load_ontology('tm3')
database = load_database('tm3')
```

For more usage please refer to [here](https://github.com/ConvLab/ConvLab-3/tree/master/data/unified_datasets).

### Dataset Summary

The Taskmaster-3 (aka TicketTalk) dataset consists of 23,789 movie ticketing dialogs (located in Taskmaster/TM-3-2020/data/). By "movie ticketing" we mean conversations where the customer's goal is to purchase tickets after deciding on theater, time, movie name, number of tickets, and date, or opt out of the transaction.

This collection was created using the "self-dialog" method. This means a single, crowd-sourced worker is paid to create a conversation, writing turns for both speakers, i.e. the customer and the ticketing agent. In order to gather a wide range of conversational scenarios and linguistic phenomena, workers were given both open-ended as well as highly structured conversational tasks. In all, we used over three dozen sets of instructions while building this corpus. The "instructions" field in data.json provides the exact scenario workers were given to complete each dialog. In this way, conversations involve a wide variety of paths, from those where the customer decides on a movie based on genre, their location, current releases, or from what they already have in mind. In addition, dialogs also include error handling with respect to repair (e.g. "No, I said Tom Cruise."), clarifications (e.g. "Sorry. Did you want the AMC 16 or Century City 16?") and other common conversational hiccups.

In some cases, instructions are completely open-ended, e.g. "Pretend you are taking your friend to a movie in Salem, Oregon. Create a conversation where you end up buying two tickets after finding out what is playing in at least two local theaters. Make sure the ticket purchase includes a confirmation of the details by the agent before the purchase, including date, time, movie, theater, and number of tickets." In other cases, we restrict the conversational content and structure by offering a partially completed conversation that the workers must finalize or fill in based on certain parameters. These partially completed dialogs are labeled "Auto template" in the "scenario" field shown for each conversation in the data.json file. In some cases, we provided a small KB from which workers would choose movies, theaters, etc., but in most cases (pre-pandemic) workers were told to use the internet to get accurate current details for their dialogs. In any case, all relevant entities are annotated.

- **How to get the transformed data from original data:**
  - Download [master.zip](https://github.com/google-research-datasets/Taskmaster/archive/refs/heads/master.zip).
  - Run `python preprocess.py` in the current directory.
- **Main changes of the transformation:**
  - Remove dialogs that are empty or only contain one speaker.
  - Split each domain's dialogs into train/validation/test randomly (8:1:1).
  - Merge continuous turns by the same speaker (ignore repeated turns).
  - Annotate `dialogue acts` according to the original segment annotations. Add `intent` annotation (`==inform`). The type of a `dialogue act` is set to `non-categorical` if the `slot` is not `description.other` or `description.plot`. Otherwise, the type is set to `binary` (and the `value` is empty). If multiple spans overlap, we only keep the shortest one, since we found that this simple strategy can reduce the noise in annotation.
  - Add `domain` and `intent` descriptions.
  - Rename `api` to `db_results`.
  - Add `state` by accumulating `non-categorical dialogue acts` in the order that they appear.
- **Annotations:**
  - dialogue acts, state, db_results.

### Supported Tasks and Leaderboards

NLU, DST, Policy, NLG, E2E

### Languages

English

### Data Splits

| split      | dialogues | utterances | avg_utt | avg_tokens | avg_domains | cat slot match(state) | cat slot match(goal) | cat slot match(dialogue act) | non-cat slot span(dialogue act) |
|------------|-----------|------------|---------|------------|-------------|-----------------------|----------------------|------------------------------|---------------------------------|
| train      | 18997     | 380646     | 20.04   | 10.48      | 1           | -                     | -                    | -                            | 100                             |
| validation | 2380      | 47531      | 19.97   | 10.38      | 1           | -                     | -                    | -                            | 100                             |
| test       | 2380      | 48849      | 20.52   | 10.12      | 1           | -                     | -                    | -                            | 100                             |
| all        | 23757     | 477026     | 20.08   | 10.43      | 1           | -                     | -                    | -                            | 100                             |

1 domain: ['movie']

- **cat slot match**: how many values of categorical slots are in the possible values of the ontology, in percentage.
- **non-cat slot span**: how many values of non-categorical slots have span annotation, in percentage.

### Citation

```
@inproceedings{byrne-etal-2021-tickettalk,
    title = "{T}icket{T}alk: Toward human-level performance with end-to-end, transaction-based dialog systems",
    author = "Byrne, Bill and Krishnamoorthi, Karthik and Ganesh, Saravanan and Kale, Mihir",
    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.acl-long.55",
    doi = "10.18653/v1/2021.acl-long.55",
    pages = "671--680",
}
```

### Licensing Information

[**CC BY 4.0**](https://creativecommons.org/licenses/by/4.0/)
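The "add `state` by accumulating `non-categorical dialogue acts`" step can be sketched as a running dictionary update in turn order; the slot names below are hypothetical examples, not necessarily the dataset's exact schema.

```
# Hedged sketch: build per-turn states by accumulating non-categorical
# acts in order, with later values overwriting earlier ones.
def accumulate_state(turn_acts):
    state, states = {}, []
    for acts in turn_acts:              # one list of (slot, value) per turn
        for slot, value in acts:
            state[slot] = value
        states.append(dict(state))      # snapshot after this turn
    return states

print(accumulate_state([[('name.movie', 'Tenet')],
                        [('num.tickets', '2'), ('name.movie', 'Dune')]]))
# [{'name.movie': 'Tenet'}, {'name.movie': 'Dune', 'num.tickets': '2'}]
```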
ConvLab/tm3
[ "task_categories:conversational", "multilinguality:monolingual", "size_categories:10K<n<100K", "language:en", "license:cc-by-4.0", "region:us" ]
2022-06-28T00:49:52+00:00
{"language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "task_categories": ["conversational"], "pretty_name": "Taskmaster-3"}
2022-11-25T09:15:58+00:00
[]
[ "en" ]
TAGS #task_categories-conversational #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #region-us
Dataset Card for Taskmaster-3 ============================= * Repository: URL * Paper: URL * Leaderboard: None * Who transforms the dataset: Qi Zhu(zhuq96 at gmail dot com) To use this dataset, you need to install ConvLab-3 platform first. Then you can load the dataset via: For more usage please refer to here. ### Dataset Summary The Taskmaster-3 (aka TicketTalk) dataset consists of 23,789 movie ticketing dialogs (located in Taskmaster/TM-3-2020/data/). By "movie ticketing" we mean conversations where the customer's goal is to purchase tickets after deciding on theater, time, movie name, number of tickets, and date, or opt out of the transaction. This collection was created using the "self-dialog" method. This means a single, crowd-sourced worker is paid to create a conversation writing turns for both speakers, i.e. the customer and the ticketing agent. In order to gather a wide range of conversational scenarios and linguistic phenomena, workers were given both open-ended as well as highly structured conversational tasks. In all, we used over three dozen sets of instructions while building this corpus. The "instructions" field in URL provides the exact scenario workers were given to complete each dialog. In this way, conversations involve a wide variety of paths, from those where the customer decides on a movie based on genre, their location, current releases, or from what they already have in mind. In addition, dialogs also include error handling with repect to repair (e.g. "No, I said Tom Cruise."), clarifications (e.g. "Sorry. Did you want the AMC 16 or Century City 16?") and other common conversational hiccups. In some cases instructions are completely open ended e.g. "Pretend you are taking your friend to a movie in Salem, Oregon. Create a conversation where you end up buying two tickets after finding out what is playing in at least two local theaters. Make sure the ticket purchase includes a confirmation of the deatils by the agent before the purchase, including date, time, movie, theater, and number of tickets." In other cases we restrict the conversational content and structure by offering a partially completed conversation that the workers must finalize or fill in based a certain parameters. These partially completed dialogs are labeled "Auto template" in the "scenario" field shown for each conversation in the URL file. In some cases, we provided a small KB from which workers would choose movies, theaters, etc. but in most cases (pre-pandemic) workers were told to use the internet to get accurate current details for their dialogs. In any case, all relevant entities are annotated. * How to get the transformed data from original data: + Download URL. + Run 'python URL' in the current directory. * Main changes of the transformation: + Remove dialogs that are empty or only contain one speaker. + Split each domain dialogs into train/validation/test randomly (8:1:1). + Merge continuous turns by the same speaker (ignore repeated turns). + Annotate 'dialogue acts' according to the original segment annotations. Add 'intent' annotation ('==inform'). The type of 'dialogue act' is set to 'non-categorical' if the 'slot' is not 'URL' or 'URL'. Otherwise, the type is set to 'binary' (and the 'value' is empty). If there are multiple spans overlapping, we only keep the shortest one, since we found that this simple strategy can reduce the noise in annotation. + Add 'domain' and 'intent' descriptions. + Rename 'api' to 'db\_results'. 
+ Add 'state' by accumulate 'non-categorical dialogue acts' in the order that they appear. * Annotations: + dialogue acts, state, db\_results. ### Supported Tasks and Leaderboards NLU, DST, Policy, NLG, E2E ### Languages English ### Data Splits 1 domains: ['movie'] * cat slot match: how many values of categorical slots are in the possible values of ontology in percentage. * non-cat slot span: how many values of non-categorical slots have span annotation in percentage. ### Licensing Information CC BY 4.0
[ "### Dataset Summary\n\n\nThe Taskmaster-3 (aka TicketTalk) dataset consists of 23,789 movie ticketing dialogs (located in Taskmaster/TM-3-2020/data/). By \"movie ticketing\" we mean conversations where the customer's goal is to purchase tickets after deciding on theater, time, movie name, number of tickets, and date, or opt out of the transaction.\n\n\nThis collection was created using the \"self-dialog\" method. This means a single, crowd-sourced worker is paid to create a conversation writing turns for both speakers, i.e. the customer and the ticketing agent. In order to gather a wide range of conversational scenarios and linguistic phenomena, workers were given both open-ended as well as highly structured conversational tasks. In all, we used over three dozen sets of instructions while building this corpus. The \"instructions\" field in URL provides the exact scenario workers were given to complete each dialog. In this way, conversations involve a wide variety of paths, from those where the customer decides on a movie based on genre, their location, current releases, or from what they already have in mind. In addition, dialogs also include error handling with repect to repair (e.g. \"No, I said Tom Cruise.\"), clarifications (e.g. \"Sorry. Did you want the AMC 16 or Century City 16?\") and other common conversational hiccups. In some cases instructions are completely open ended e.g. \"Pretend you are taking your friend to a movie in Salem, Oregon. Create a conversation where you end up buying two tickets after finding out what is playing in at least two local theaters. Make sure the ticket purchase includes a confirmation of the deatils by the agent before the purchase, including date, time, movie, theater, and number of tickets.\" In other cases we restrict the conversational content and structure by offering a partially completed conversation that the workers must finalize or fill in based a certain parameters. These partially completed dialogs are labeled \"Auto template\" in the \"scenario\" field shown for each conversation in the URL file. In some cases, we provided a small KB from which workers would choose movies, theaters, etc. but in most cases (pre-pandemic) workers were told to use the internet to get accurate current details for their dialogs. In any case, all relevant entities are annotated.\n\n\n* How to get the transformed data from original data:\n\t+ Download URL.\n\t+ Run 'python URL' in the current directory.\n* Main changes of the transformation:\n\t+ Remove dialogs that are empty or only contain one speaker.\n\t+ Split each domain dialogs into train/validation/test randomly (8:1:1).\n\t+ Merge continuous turns by the same speaker (ignore repeated turns).\n\t+ Annotate 'dialogue acts' according to the original segment annotations. Add 'intent' annotation ('==inform'). The type of 'dialogue act' is set to 'non-categorical' if the 'slot' is not 'URL' or 'URL'. Otherwise, the type is set to 'binary' (and the 'value' is empty). 
If there are multiple spans overlapping, we only keep the shortest one, since we found that this simple strategy can reduce the noise in annotation.\n\t+ Add 'domain' and 'intent' descriptions.\n\t+ Rename 'api' to 'db\\_results'.\n\t+ Add 'state' by accumulate 'non-categorical dialogue acts' in the order that they appear.\n* Annotations:\n\t+ dialogue acts, state, db\\_results.", "### Supported Tasks and Leaderboards\n\n\nNLU, DST, Policy, NLG, E2E", "### Languages\n\n\nEnglish", "### Data Splits\n\n\n\n1 domains: ['movie']\n\n\n* cat slot match: how many values of categorical slots are in the possible values of ontology in percentage.\n* non-cat slot span: how many values of non-categorical slots have span annotation in percentage.", "### Licensing Information\n\n\nCC BY 4.0" ]
[ "TAGS\n#task_categories-conversational #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #region-us \n", "### Dataset Summary\n\n\nThe Taskmaster-3 (aka TicketTalk) dataset consists of 23,789 movie ticketing dialogs (located in Taskmaster/TM-3-2020/data/). By \"movie ticketing\" we mean conversations where the customer's goal is to purchase tickets after deciding on theater, time, movie name, number of tickets, and date, or opt out of the transaction.\n\n\nThis collection was created using the \"self-dialog\" method. This means a single, crowd-sourced worker is paid to create a conversation writing turns for both speakers, i.e. the customer and the ticketing agent. In order to gather a wide range of conversational scenarios and linguistic phenomena, workers were given both open-ended as well as highly structured conversational tasks. In all, we used over three dozen sets of instructions while building this corpus. The \"instructions\" field in URL provides the exact scenario workers were given to complete each dialog. In this way, conversations involve a wide variety of paths, from those where the customer decides on a movie based on genre, their location, current releases, or from what they already have in mind. In addition, dialogs also include error handling with repect to repair (e.g. \"No, I said Tom Cruise.\"), clarifications (e.g. \"Sorry. Did you want the AMC 16 or Century City 16?\") and other common conversational hiccups. In some cases instructions are completely open ended e.g. \"Pretend you are taking your friend to a movie in Salem, Oregon. Create a conversation where you end up buying two tickets after finding out what is playing in at least two local theaters. Make sure the ticket purchase includes a confirmation of the deatils by the agent before the purchase, including date, time, movie, theater, and number of tickets.\" In other cases we restrict the conversational content and structure by offering a partially completed conversation that the workers must finalize or fill in based a certain parameters. These partially completed dialogs are labeled \"Auto template\" in the \"scenario\" field shown for each conversation in the URL file. In some cases, we provided a small KB from which workers would choose movies, theaters, etc. but in most cases (pre-pandemic) workers were told to use the internet to get accurate current details for their dialogs. In any case, all relevant entities are annotated.\n\n\n* How to get the transformed data from original data:\n\t+ Download URL.\n\t+ Run 'python URL' in the current directory.\n* Main changes of the transformation:\n\t+ Remove dialogs that are empty or only contain one speaker.\n\t+ Split each domain dialogs into train/validation/test randomly (8:1:1).\n\t+ Merge continuous turns by the same speaker (ignore repeated turns).\n\t+ Annotate 'dialogue acts' according to the original segment annotations. Add 'intent' annotation ('==inform'). The type of 'dialogue act' is set to 'non-categorical' if the 'slot' is not 'URL' or 'URL'. Otherwise, the type is set to 'binary' (and the 'value' is empty). 
If there are multiple spans overlapping, we only keep the shortest one, since we found that this simple strategy can reduce the noise in annotation.\n\t+ Add 'domain' and 'intent' descriptions.\n\t+ Rename 'api' to 'db\\_results'.\n\t+ Add 'state' by accumulate 'non-categorical dialogue acts' in the order that they appear.\n* Annotations:\n\t+ dialogue acts, state, db\\_results.", "### Supported Tasks and Leaderboards\n\n\nNLU, DST, Policy, NLG, E2E", "### Languages\n\n\nEnglish", "### Data Splits\n\n\n\n1 domains: ['movie']\n\n\n* cat slot match: how many values of categorical slots are in the possible values of ontology in percentage.\n* non-cat slot span: how many values of non-categorical slots have span annotation in percentage.", "### Licensing Information\n\n\nCC BY 4.0" ]
[ 49, 821, 24, 5, 61, 9 ]
[ "passage: TAGS\n#task_categories-conversational #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-cc-by-4.0 #region-us \n" ]
e84ee0c54d5df7e030819011b0a101f738211b9a
# Dataset Card for MetaLWOZ

- **Repository:** https://www.microsoft.com/en-us/research/project/metalwoz/
- **Paper:** https://www.microsoft.com/en-us/research/publication/results-of-the-multi-domain-task-completion-dialog-challenge/
- **Leaderboard:** None
- **Who transforms the dataset:** Qi Zhu(zhuq96 at gmail dot com)

To use this dataset, you need to install the [ConvLab-3](https://github.com/ConvLab/ConvLab-3) platform first. Then you can load the dataset via:

```
from convlab.util import load_dataset, load_ontology, load_database

dataset = load_dataset('metalwoz')
ontology = load_ontology('metalwoz')
database = load_database('metalwoz')
```

For more usage please refer to [here](https://github.com/ConvLab/ConvLab-3/tree/master/data/unified_datasets).

### Dataset Summary

This large dataset was created by crowdsourcing 37,884 goal-oriented dialogs, covering 227 tasks in 47 domains. Domains include bus schedules, apartment search, alarm setting, banking, and event reservation. Each dialog was grounded in a scenario with roles, pairing a person acting as the bot and a person acting as the user. (This is the Wizard of Oz reference: using people behind the curtain who act as the machine.) Each pair was given a domain and a task, and instructed to converse for 10 turns to satisfy the user's queries. For example, if a user asked whether a bus stop was operational, the bot would respond that the bus stop had been moved two blocks north, which starts a conversation that addresses the user's actual need.

- **How to get the transformed data from original data:**
  - Download [metalwoz-v1.zip](https://www.microsoft.com/en-us/download/58389) and [metalwoz-test-v1.zip](https://www.microsoft.com/en-us/download/100639).
  - Run `python preprocess.py` in the current directory.
- **Main changes of the transformation:**
  - `CITY_INFO`, `HOME_BOT`, `NAME_SUGGESTER`, and `TIME_ZONE` are randomly selected as the validation domains.
  - Remove the first utterance by the system, since it is "Hello how may I help you?" in most cases.
  - Add a goal description according to the original task description: user_role+user_prompt+system_role+system_prompt (see the sketch after this card).
- **Annotations:**
  - domain, goal

### Supported Tasks and Leaderboards

RG, User simulator

### Languages

English

### Data Splits

| split      | dialogues | utterances | avg_utt | avg_tokens | avg_domains | cat slot match(state) | cat slot match(goal) | cat slot match(dialogue act) | non-cat slot span(dialogue act) |
|------------|-----------|------------|---------|------------|-------------|-----------------------|----------------------|------------------------------|---------------------------------|
| train      | 34261     | 357092     | 10.42   | 7.48       | 1           | -                     | -                    | -                            | -                               |
| validation | 3623      | 37060      | 10.23   | 6.59       | 1           | -                     | -                    | -                            | -                               |
| test       | 2319      | 23882      | 10.3    | 7.96       | 1           | -                     | -                    | -                            | -                               |
| all        | 40203     | 418034     | 10.4    | 7.43       | 1           | -                     | -                    | -                            | -                               |

51 domains: ['AGREEMENT_BOT', 'ALARM_SET', 'APARTMENT_FINDER', 'APPOINTMENT_REMINDER', 'AUTO_SORT', 'BANK_BOT', 'BUS_SCHEDULE_BOT', 'CATALOGUE_BOT', 'CHECK_STATUS', 'CITY_INFO', 'CONTACT_MANAGER', 'DECIDER_BOT', 'EDIT_PLAYLIST', 'EVENT_RESERVE', 'GAME_RULES', 'GEOGRAPHY', 'GUINESS_CHECK', 'HOME_BOT', 'HOW_TO_BASIC', 'INSURANCE', 'LIBRARY_REQUEST', 'LOOK_UP_INFO', 'MAKE_RESTAURANT_RESERVATIONS', 'MOVIE_LISTINGS', 'MUSIC_SUGGESTER', 'NAME_SUGGESTER', 'ORDER_PIZZA', 'PET_ADVICE', 'PHONE_PLAN_BOT', 'PHONE_SETTINGS', 'PLAY_TIMES', 'POLICY_BOT', 'PRESENT_IDEAS', 'PROMPT_GENERATOR', 'QUOTE_OF_THE_DAY_BOT', 'RESTAURANT_PICKER', 'SCAM_LOOKUP', 'SHOPPING', 'SKI_BOT', 'SPORTS_INFO', 'STORE_DETAILS', 'TIME_ZONE', 'UPDATE_CALENDAR', 'UPDATE_CONTACT', 'WEATHER_CHECK', 'WEDDING_PLANNER', 'WHAT_IS_IT', 'BOOKING_FLIGHT', 'HOTEL_RESERVE', 'TOURISM', 'VACATION_IDEAS']

- **cat slot match**: how many values of categorical slots are in the possible values of the ontology, in percentage.
- **non-cat slot span**: how many values of non-categorical slots have span annotation, in percentage.

### Citation

```
@inproceedings{li2020results,
    author = {Li, Jinchao and Peng, Baolin and Lee, Sungjin and Gao, Jianfeng and Takanobu, Ryuichi and Zhu, Qi and Huang, Minlie and Schulz, Hannes and Atkinson, Adam and Adada, Mahmoud},
    title = {Results of the Multi-Domain Task-Completion Dialog Challenge},
    booktitle = {Proceedings of the 34th AAAI Conference on Artificial Intelligence, Eighth Dialog System Technology Challenge Workshop},
    year = {2020},
    month = {February},
    url = {https://www.microsoft.com/en-us/research/publication/results-of-the-multi-domain-task-completion-dialog-challenge/},
}
```

### Licensing Information

[Microsoft Research Data License Agreement](https://msropendata-web-api.azurewebsites.net/licenses/2f933be3-284d-500b-7ea3-2aa2fd0f1bb2/view)
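For the goal-description step above, a minimal sketch of the concatenation user_role+user_prompt+system_role+system_prompt follows. The dictionary keys and example strings are assumptions chosen to mirror the card's formula, not necessarily the exact field names in the raw MetaLWOZ files.

```
# Hypothetical task record; key names mirror the formula in the card,
# not necessarily the raw MetaLWOZ schema.
task = {
    'user_role': 'You are interacting with a bot that provides bus schedules.',
    'user_prompt': 'Ask if a certain bus stop is still operational.',
    'system_role': 'You are a bot that manages public transit schedules.',
    'system_prompt': 'Tell the user the bus stop was moved two blocks north.',
}

# Concatenate the four fields into one natural-language goal description.
goal_description = ' '.join(
    task[key] for key in ('user_role', 'user_prompt', 'system_role', 'system_prompt')
)
print(goal_description)
```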
ConvLab/metalwoz
[ "task_categories:conversational", "multilinguality:monolingual", "size_categories:10K<n<100K", "language:en", "region:us" ]
2022-06-28T00:51:55+00:00
{"language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "task_categories": ["conversational"], "pretty_name": "MetaLWOZ"}
2022-11-25T09:11:36+00:00
[]
[ "en" ]
TAGS #task_categories-conversational #multilinguality-monolingual #size_categories-10K<n<100K #language-English #region-us
Dataset Card for MetaLWOZ ========================= * Repository: URL * Paper: URL * Leaderboard: None * Who transforms the dataset: Qi Zhu(zhuq96 at gmail dot com) To use this dataset, you need to install ConvLab-3 platform first. Then you can load the dataset via: For more usage please refer to here. ### Dataset Summary This large dataset was created by crowdsourcing 37,884 goal-oriented dialogs, covering 227 tasks in 47 domains. Domains include bus schedules, apartment search, alarm setting, banking, and event reservation. Each dialog was grounded in a scenario with roles, pairing a person acting as the bot and a person acting as the user. (This is the Wizard of Oz reference—using people behind the curtain who act as the machine). Each pair were given a domain and a task, and instructed to converse for 10 turns to satisfy the user’s queries. For example, if a user asked if a bus stop was operational, the bot would respond that the bus stop had been moved two blocks north, which starts a conversation that addresses the user’s actual need. * How to get the transformed data from original data: + Download URL and URL. + Run 'python URL' in the current directory. * Main changes of the transformation: + 'CITI\_INFO', 'HOME\_BOT', 'NAME\_SUGGESTER', and 'TIME\_ZONE' are randomly selected as the valiation domains. + Remove the first utterance by the system since it is "Hello how may I help you?" in most case. + Add goal description according to the original task description: user\_role+user\_prompt+system\_role+system\_prompt. * Annotations: + domain, goal ### Supported Tasks and Leaderboards RG, User simulator ### Languages English ### Data Splits 51 domains: ['AGREEMENT\_BOT', 'ALARM\_SET', 'APARTMENT\_FINDER', 'APPOINTMENT\_REMINDER', 'AUTO\_SORT', 'BANK\_BOT', 'BUS\_SCHEDULE\_BOT', 'CATALOGUE\_BOT', 'CHECK\_STATUS', 'CITY\_INFO', 'CONTACT\_MANAGER', 'DECIDER\_BOT', 'EDIT\_PLAYLIST', 'EVENT\_RESERVE', 'GAME\_RULES', 'GEOGRAPHY', 'GUINESS\_CHECK', 'HOME\_BOT', 'HOW\_TO\_BASIC', 'INSURANCE', 'LIBRARY\_REQUEST', 'LOOK\_UP\_INFO', 'MAKE\_RESTAURANT\_RESERVATIONS', 'MOVIE\_LISTINGS', 'MUSIC\_SUGGESTER', 'NAME\_SUGGESTER', 'ORDER\_PIZZA', 'PET\_ADVICE', 'PHONE\_PLAN\_BOT', 'PHONE\_SETTINGS', 'PLAY\_TIMES', 'POLICY\_BOT', 'PRESENT\_IDEAS', 'PROMPT\_GENERATOR', 'QUOTE\_OF\_THE\_DAY\_BOT', 'RESTAURANT\_PICKER', 'SCAM\_LOOKUP', 'SHOPPING', 'SKI\_BOT', 'SPORTS\_INFO', 'STORE\_DETAILS', 'TIME\_ZONE', 'UPDATE\_CALENDAR', 'UPDATE\_CONTACT', 'WEATHER\_CHECK', 'WEDDING\_PLANNER', 'WHAT\_IS\_IT', 'BOOKING\_FLIGHT', 'HOTEL\_RESERVE', 'TOURISM', 'VACATION\_IDEAS'] * cat slot match: how many values of categorical slots are in the possible values of ontology in percentage. * non-cat slot span: how many values of non-categorical slots have span annotation in percentage. ### Licensing Information Microsoft Research Data License Agreement
[ "### Dataset Summary\n\n\nThis large dataset was created by crowdsourcing 37,884 goal-oriented dialogs, covering 227 tasks in 47 domains. Domains include bus schedules, apartment search, alarm setting, banking, and event reservation. Each dialog was grounded in a scenario with roles, pairing a person acting as the bot and a person acting as the user. (This is the Wizard of Oz reference—using people behind the curtain who act as the machine). Each pair were given a domain and a task, and instructed to converse for 10 turns to satisfy the user’s queries. For example, if a user asked if a bus stop was operational, the bot would respond that the bus stop had been moved two blocks north, which starts a conversation that addresses the user’s actual need.\n\n\n* How to get the transformed data from original data:\n\t+ Download URL and URL.\n\t+ Run 'python URL' in the current directory.\n* Main changes of the transformation:\n\t+ 'CITI\\_INFO', 'HOME\\_BOT', 'NAME\\_SUGGESTER', and 'TIME\\_ZONE' are randomly selected as the valiation domains.\n\t+ Remove the first utterance by the system since it is \"Hello how may I help you?\" in most case.\n\t+ Add goal description according to the original task description: user\\_role+user\\_prompt+system\\_role+system\\_prompt.\n* Annotations:\n\t+ domain, goal", "### Supported Tasks and Leaderboards\n\n\nRG, User simulator", "### Languages\n\n\nEnglish", "### Data Splits\n\n\n\n51 domains: ['AGREEMENT\\_BOT', 'ALARM\\_SET', 'APARTMENT\\_FINDER', 'APPOINTMENT\\_REMINDER', 'AUTO\\_SORT', 'BANK\\_BOT', 'BUS\\_SCHEDULE\\_BOT', 'CATALOGUE\\_BOT', 'CHECK\\_STATUS', 'CITY\\_INFO', 'CONTACT\\_MANAGER', 'DECIDER\\_BOT', 'EDIT\\_PLAYLIST', 'EVENT\\_RESERVE', 'GAME\\_RULES', 'GEOGRAPHY', 'GUINESS\\_CHECK', 'HOME\\_BOT', 'HOW\\_TO\\_BASIC', 'INSURANCE', 'LIBRARY\\_REQUEST', 'LOOK\\_UP\\_INFO', 'MAKE\\_RESTAURANT\\_RESERVATIONS', 'MOVIE\\_LISTINGS', 'MUSIC\\_SUGGESTER', 'NAME\\_SUGGESTER', 'ORDER\\_PIZZA', 'PET\\_ADVICE', 'PHONE\\_PLAN\\_BOT', 'PHONE\\_SETTINGS', 'PLAY\\_TIMES', 'POLICY\\_BOT', 'PRESENT\\_IDEAS', 'PROMPT\\_GENERATOR', 'QUOTE\\_OF\\_THE\\_DAY\\_BOT', 'RESTAURANT\\_PICKER', 'SCAM\\_LOOKUP', 'SHOPPING', 'SKI\\_BOT', 'SPORTS\\_INFO', 'STORE\\_DETAILS', 'TIME\\_ZONE', 'UPDATE\\_CALENDAR', 'UPDATE\\_CONTACT', 'WEATHER\\_CHECK', 'WEDDING\\_PLANNER', 'WHAT\\_IS\\_IT', 'BOOKING\\_FLIGHT', 'HOTEL\\_RESERVE', 'TOURISM', 'VACATION\\_IDEAS']\n\n\n* cat slot match: how many values of categorical slots are in the possible values of ontology in percentage.\n* non-cat slot span: how many values of non-categorical slots have span annotation in percentage.", "### Licensing Information\n\n\nMicrosoft Research Data License Agreement" ]
[ "TAGS\n#task_categories-conversational #multilinguality-monolingual #size_categories-10K<n<100K #language-English #region-us \n", "### Dataset Summary\n\n\nThis large dataset was created by crowdsourcing 37,884 goal-oriented dialogs, covering 227 tasks in 47 domains. Domains include bus schedules, apartment search, alarm setting, banking, and event reservation. Each dialog was grounded in a scenario with roles, pairing a person acting as the bot and a person acting as the user. (This is the Wizard of Oz reference—using people behind the curtain who act as the machine). Each pair were given a domain and a task, and instructed to converse for 10 turns to satisfy the user’s queries. For example, if a user asked if a bus stop was operational, the bot would respond that the bus stop had been moved two blocks north, which starts a conversation that addresses the user’s actual need.\n\n\n* How to get the transformed data from original data:\n\t+ Download URL and URL.\n\t+ Run 'python URL' in the current directory.\n* Main changes of the transformation:\n\t+ 'CITI\\_INFO', 'HOME\\_BOT', 'NAME\\_SUGGESTER', and 'TIME\\_ZONE' are randomly selected as the valiation domains.\n\t+ Remove the first utterance by the system since it is \"Hello how may I help you?\" in most case.\n\t+ Add goal description according to the original task description: user\\_role+user\\_prompt+system\\_role+system\\_prompt.\n* Annotations:\n\t+ domain, goal", "### Supported Tasks and Leaderboards\n\n\nRG, User simulator", "### Languages\n\n\nEnglish", "### Data Splits\n\n\n\n51 domains: ['AGREEMENT\\_BOT', 'ALARM\\_SET', 'APARTMENT\\_FINDER', 'APPOINTMENT\\_REMINDER', 'AUTO\\_SORT', 'BANK\\_BOT', 'BUS\\_SCHEDULE\\_BOT', 'CATALOGUE\\_BOT', 'CHECK\\_STATUS', 'CITY\\_INFO', 'CONTACT\\_MANAGER', 'DECIDER\\_BOT', 'EDIT\\_PLAYLIST', 'EVENT\\_RESERVE', 'GAME\\_RULES', 'GEOGRAPHY', 'GUINESS\\_CHECK', 'HOME\\_BOT', 'HOW\\_TO\\_BASIC', 'INSURANCE', 'LIBRARY\\_REQUEST', 'LOOK\\_UP\\_INFO', 'MAKE\\_RESTAURANT\\_RESERVATIONS', 'MOVIE\\_LISTINGS', 'MUSIC\\_SUGGESTER', 'NAME\\_SUGGESTER', 'ORDER\\_PIZZA', 'PET\\_ADVICE', 'PHONE\\_PLAN\\_BOT', 'PHONE\\_SETTINGS', 'PLAY\\_TIMES', 'POLICY\\_BOT', 'PRESENT\\_IDEAS', 'PROMPT\\_GENERATOR', 'QUOTE\\_OF\\_THE\\_DAY\\_BOT', 'RESTAURANT\\_PICKER', 'SCAM\\_LOOKUP', 'SHOPPING', 'SKI\\_BOT', 'SPORTS\\_INFO', 'STORE\\_DETAILS', 'TIME\\_ZONE', 'UPDATE\\_CALENDAR', 'UPDATE\\_CONTACT', 'WEATHER\\_CHECK', 'WEDDING\\_PLANNER', 'WHAT\\_IS\\_IT', 'BOOKING\\_FLIGHT', 'HOTEL\\_RESERVE', 'TOURISM', 'VACATION\\_IDEAS']\n\n\n* cat slot match: how many values of categorical slots are in the possible values of ontology in percentage.\n* non-cat slot span: how many values of non-categorical slots have span annotation in percentage.", "### Licensing Information\n\n\nMicrosoft Research Data License Agreement" ]
[ 40, 340, 16, 5, 545, 11 ]
[ "passage: TAGS\n#task_categories-conversational #multilinguality-monolingual #size_categories-10K<n<100K #language-English #region-us \n### Dataset Summary\n\n\nThis large dataset was created by crowdsourcing 37,884 goal-oriented dialogs, covering 227 tasks in 47 domains. Domains include bus schedules, apartment search, alarm setting, banking, and event reservation. Each dialog was grounded in a scenario with roles, pairing a person acting as the bot and a person acting as the user. (This is the Wizard of Oz reference—using people behind the curtain who act as the machine). Each pair were given a domain and a task, and instructed to converse for 10 turns to satisfy the user’s queries. For example, if a user asked if a bus stop was operational, the bot would respond that the bus stop had been moved two blocks north, which starts a conversation that addresses the user’s actual need.\n\n\n* How to get the transformed data from original data:\n\t+ Download URL and URL.\n\t+ Run 'python URL' in the current directory.\n* Main changes of the transformation:\n\t+ 'CITI\\_INFO', 'HOME\\_BOT', 'NAME\\_SUGGESTER', and 'TIME\\_ZONE' are randomly selected as the valiation domains.\n\t+ Remove the first utterance by the system since it is \"Hello how may I help you?\" in most case.\n\t+ Add goal description according to the original task description: user\\_role+user\\_prompt+system\\_role+system\\_prompt.\n* Annotations:\n\t+ domain, goal### Supported Tasks and Leaderboards\n\n\nRG, User simulator### Languages\n\n\nEnglish" ]
6e8c79b888b21cc658cf9c0ce128d263241cf70f
# Dataset Card for Schema-Guided Dialogue

- **Repository:** https://github.com/google-research-datasets/dstc8-schema-guided-dialogue
- **Paper:** https://arxiv.org/pdf/1909.05855.pdf
- **Leaderboard:** None
- **Who transforms the dataset:** Qi Zhu(zhuq96 at gmail dot com)

To use this dataset, you need to install the [ConvLab-3](https://github.com/ConvLab/ConvLab-3) platform first. Then you can load the dataset via:

```
from convlab.util import load_dataset, load_ontology, load_database

dataset = load_dataset('sgd')
ontology = load_ontology('sgd')
database = load_database('sgd')
```

For more usage please refer to [here](https://github.com/ConvLab/ConvLab-3/tree/master/data/unified_datasets).

### Dataset Summary

The **Schema-Guided Dialogue (SGD)** dataset consists of over 20k annotated multi-domain, task-oriented conversations between a human and a virtual assistant. These conversations involve interactions with services and APIs spanning 20 domains, such as banks, events, media, calendar, travel, and weather. For most of these domains, the dataset contains multiple different APIs, many of which have overlapping functionalities but different interfaces, which reflects common real-world scenarios. The wide range of available annotations can be used for intent prediction, slot filling, dialogue state tracking, policy imitation learning, language generation, and user simulation learning, among other tasks for developing large-scale virtual assistants. Additionally, the dataset contains unseen domains and services in the evaluation set to quantify the performance in zero-shot or few-shot settings.

- **How to get the transformed data from original data:**
  - Download [dstc8-schema-guided-dialogue-master.zip](https://github.com/google-research-datasets/dstc8-schema-guided-dialogue/archive/refs/heads/master.zip).
  - Run `python preprocess.py` in the current directory.
- **Main changes of the transformation:**
  - Lowercase the original `act` and use it as the `intent`.
  - Add a `count` slot for each domain; it is non-categorical, and its span is found by text matching (see the sketch after this card).
  - Categorize `dialogue acts` according to the `intent`.
  - Concatenate multiple values using `|`.
  - Retain `active_intent`, `requested_slots`, `service_call`.
- **Annotations:**
  - dialogue acts, state, db_results, service_call, active_intent, requested_slots.

### Supported Tasks and Leaderboards

NLU, DST, Policy, NLG, E2E

### Languages

English

### Data Splits

| split      | dialogues | utterances | avg_utt | avg_tokens | avg_domains | cat slot match(state) | cat slot match(goal) | cat slot match(dialogue act) | non-cat slot span(dialogue act) |
|------------|-----------|------------|---------|------------|-------------|-----------------------|----------------------|------------------------------|---------------------------------|
| train      | 16142     | 329964     | 20.44   | 9.75       | 1.84        | 100                   | -                    | 100                          | 100                             |
| validation | 2482      | 48726      | 19.63   | 9.66       | 1.84        | 100                   | -                    | 100                          | 100                             |
| test       | 4201      | 84594      | 20.14   | 10.4       | 2.02        | 100                   | -                    | 100                          | 100                             |
| all        | 22825     | 463284     | 20.3    | 9.86       | 1.87        | 100                   | -                    | 100                          | 100                             |

45 domains: ['Banks_1', 'Buses_1', 'Buses_2', 'Calendar_1', 'Events_1', 'Events_2', 'Flights_1', 'Flights_2', 'Homes_1', 'Hotels_1', 'Hotels_2', 'Hotels_3', 'Media_1', 'Movies_1', 'Music_1', 'Music_2', 'RentalCars_1', 'RentalCars_2', 'Restaurants_1', 'RideSharing_1', 'RideSharing_2', 'Services_1', 'Services_2', 'Services_3', 'Travel_1', 'Weather_1', 'Alarm_1', 'Banks_2', 'Flights_3', 'Hotels_4', 'Media_2', 'Movies_2', 'Restaurants_2', 'Services_4', 'Buses_3', 'Events_3', 'Flights_4', 'Homes_2', 'Media_3', 'Messaging_1', 'Movies_3', 'Music_3', 'Payment_1', 'RentalCars_3', 'Trains_1']

- **cat slot match**: how many values of categorical slots are in the possible values of the ontology, in percentage.
- **non-cat slot span**: how many values of non-categorical slots have span annotation, in percentage.

### Citation

```
@article{rastogi2019towards,
  title={Towards Scalable Multi-domain Conversational Agents: The Schema-Guided Dialogue Dataset},
  author={Rastogi, Abhinav and Zang, Xiaoxue and Sunkara, Srinivas and Gupta, Raghav and Khaitan, Pranav},
  journal={arXiv preprint arXiv:1909.05855},
  year={2019}
}
```

### Licensing Information

[**CC BY-SA 4.0**](https://creativecommons.org/licenses/by-sa/4.0/)
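For the `count` slot added in the transformation above, a minimal sketch of "find span by text matching" follows. The helper is an assumption about how such matching could work, not the actual `preprocess.py` logic.

```
def find_span(utterance, value):
    # Return (start, end) character offsets of `value` in `utterance`,
    # or None if the value does not occur verbatim (case-insensitive).
    value = str(value)
    start = utterance.lower().find(value.lower())
    if start == -1:
        return None
    return start, start + len(value)


utt = 'I need 3 tickets for the 7 pm show.'
print(find_span(utt, 3))       # (7, 8)
print(find_span(utt, '7 pm'))  # (25, 29)
```

A span annotation of this kind is what makes a slot non-categorical: the value is tied to a character range in the utterance rather than to a closed set of ontology values.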
ConvLab/sgd
[ "task_categories:conversational", "multilinguality:monolingual", "size_categories:10K<n<100K", "language:en", "license:cc-by-sa-4.0", "arxiv:1909.05855", "region:us" ]
2022-06-28T00:54:08+00:00
{"language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "task_categories": ["conversational"], "pretty_name": "SGD"}
2022-11-25T08:55:38+00:00
[ "1909.05855" ]
[ "en" ]
TAGS #task_categories-conversational #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-cc-by-sa-4.0 #arxiv-1909.05855 #region-us
Dataset Card for Schema-Guided Dialogue ======================================= * Repository: URL * Paper: URL * Leaderboard: None * Who transforms the dataset: Qi Zhu(zhuq96 at gmail dot com) To use this dataset, you need to install ConvLab-3 platform first. Then you can load the dataset via: For more usage please refer to here. ### Dataset Summary The Schema-Guided Dialogue (SGD) dataset consists of over 20k annotated multi-domain, task-oriented conversations between a human and a virtual assistant. These conversations involve interactions with services and APIs spanning 20 domains, such as banks, events, media, calendar, travel, and weather. For most of these domains, the dataset contains multiple different APIs, many of which have overlapping functionalities but different interfaces, which reflects common real-world scenarios. The wide range of available annotations can be used for intent prediction, slot filling, dialogue state tracking, policy imitation learning, language generation, and user simulation learning, among other tasks for developing large-scale virtual assistants. Additionally, the dataset contains unseen domains and services in the evaluation set to quantify the performance in zero-shot or few-shot settings. * How to get the transformed data from original data: + Download URL. + Run 'python URL' in the current directory. * Main changes of the transformation: + Lower case original 'act' as 'intent'. + Add 'count' slot for each domain, non-categorical, find span by text matching. + Categorize 'dialogue acts' according to the 'intent'. + Concatenate multiple values using '|'. + Retain 'active\_intent', 'requested\_slots', 'service\_call'. * Annotations: + dialogue acts, state, db\_results, service\_call, active\_intent, requested\_slots. ### Supported Tasks and Leaderboards NLU, DST, Policy, NLG, E2E ### Languages English ### Data Splits 45 domains: ['Banks\_1', 'Buses\_1', 'Buses\_2', 'Calendar\_1', 'Events\_1', 'Events\_2', 'Flights\_1', 'Flights\_2', 'Homes\_1', 'Hotels\_1', 'Hotels\_2', 'Hotels\_3', 'Media\_1', 'Movies\_1', 'Music\_1', 'Music\_2', 'RentalCars\_1', 'RentalCars\_2', 'Restaurants\_1', 'RideSharing\_1', 'RideSharing\_2', 'Services\_1', 'Services\_2', 'Services\_3', 'Travel\_1', 'Weather\_1', 'Alarm\_1', 'Banks\_2', 'Flights\_3', 'Hotels\_4', 'Media\_2', 'Movies\_2', 'Restaurants\_2', 'Services\_4', 'Buses\_3', 'Events\_3', 'Flights\_4', 'Homes\_2', 'Media\_3', 'Messaging\_1', 'Movies\_3', 'Music\_3', 'Payment\_1', 'RentalCars\_3', 'Trains\_1'] * cat slot match: how many values of categorical slots are in the possible values of ontology in percentage. * non-cat slot span: how many values of non-categorical slots have span annotation in percentage. ### Licensing Information CC BY-SA 4.0
[ "### Dataset Summary\n\n\nThe Schema-Guided Dialogue (SGD) dataset consists of over 20k annotated multi-domain, task-oriented conversations between a human and a virtual assistant. These conversations involve interactions with services and APIs spanning 20 domains, such as banks, events, media, calendar, travel, and weather. For most of these domains, the dataset contains multiple different APIs, many of which have overlapping functionalities but different interfaces, which reflects common real-world scenarios. The wide range of available annotations can be used for intent prediction, slot filling, dialogue state tracking, policy imitation learning, language generation, and user simulation learning, among other tasks for developing large-scale virtual assistants. Additionally, the dataset contains unseen domains and services in the evaluation set to quantify the performance in zero-shot or few-shot settings.\n\n\n* How to get the transformed data from original data:\n\t+ Download URL.\n\t+ Run 'python URL' in the current directory.\n* Main changes of the transformation:\n\t+ Lower case original 'act' as 'intent'.\n\t+ Add 'count' slot for each domain, non-categorical, find span by text matching.\n\t+ Categorize 'dialogue acts' according to the 'intent'.\n\t+ Concatenate multiple values using '|'.\n\t+ Retain 'active\\_intent', 'requested\\_slots', 'service\\_call'.\n* Annotations:\n\t+ dialogue acts, state, db\\_results, service\\_call, active\\_intent, requested\\_slots.", "### Supported Tasks and Leaderboards\n\n\nNLU, DST, Policy, NLG, E2E", "### Languages\n\n\nEnglish", "### Data Splits\n\n\n\n45 domains: ['Banks\\_1', 'Buses\\_1', 'Buses\\_2', 'Calendar\\_1', 'Events\\_1', 'Events\\_2', 'Flights\\_1', 'Flights\\_2', 'Homes\\_1', 'Hotels\\_1', 'Hotels\\_2', 'Hotels\\_3', 'Media\\_1', 'Movies\\_1', 'Music\\_1', 'Music\\_2', 'RentalCars\\_1', 'RentalCars\\_2', 'Restaurants\\_1', 'RideSharing\\_1', 'RideSharing\\_2', 'Services\\_1', 'Services\\_2', 'Services\\_3', 'Travel\\_1', 'Weather\\_1', 'Alarm\\_1', 'Banks\\_2', 'Flights\\_3', 'Hotels\\_4', 'Media\\_2', 'Movies\\_2', 'Restaurants\\_2', 'Services\\_4', 'Buses\\_3', 'Events\\_3', 'Flights\\_4', 'Homes\\_2', 'Media\\_3', 'Messaging\\_1', 'Movies\\_3', 'Music\\_3', 'Payment\\_1', 'RentalCars\\_3', 'Trains\\_1']\n\n\n* cat slot match: how many values of categorical slots are in the possible values of ontology in percentage.\n* non-cat slot span: how many values of non-categorical slots have span annotation in percentage.", "### Licensing Information\n\n\nCC BY-SA 4.0" ]
[ "TAGS\n#task_categories-conversational #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-cc-by-sa-4.0 #arxiv-1909.05855 #region-us \n", "### Dataset Summary\n\n\nThe Schema-Guided Dialogue (SGD) dataset consists of over 20k annotated multi-domain, task-oriented conversations between a human and a virtual assistant. These conversations involve interactions with services and APIs spanning 20 domains, such as banks, events, media, calendar, travel, and weather. For most of these domains, the dataset contains multiple different APIs, many of which have overlapping functionalities but different interfaces, which reflects common real-world scenarios. The wide range of available annotations can be used for intent prediction, slot filling, dialogue state tracking, policy imitation learning, language generation, and user simulation learning, among other tasks for developing large-scale virtual assistants. Additionally, the dataset contains unseen domains and services in the evaluation set to quantify the performance in zero-shot or few-shot settings.\n\n\n* How to get the transformed data from original data:\n\t+ Download URL.\n\t+ Run 'python URL' in the current directory.\n* Main changes of the transformation:\n\t+ Lower case original 'act' as 'intent'.\n\t+ Add 'count' slot for each domain, non-categorical, find span by text matching.\n\t+ Categorize 'dialogue acts' according to the 'intent'.\n\t+ Concatenate multiple values using '|'.\n\t+ Retain 'active\\_intent', 'requested\\_slots', 'service\\_call'.\n* Annotations:\n\t+ dialogue acts, state, db\\_results, service\\_call, active\\_intent, requested\\_slots.", "### Supported Tasks and Leaderboards\n\n\nNLU, DST, Policy, NLG, E2E", "### Languages\n\n\nEnglish", "### Data Splits\n\n\n\n45 domains: ['Banks\\_1', 'Buses\\_1', 'Buses\\_2', 'Calendar\\_1', 'Events\\_1', 'Events\\_2', 'Flights\\_1', 'Flights\\_2', 'Homes\\_1', 'Hotels\\_1', 'Hotels\\_2', 'Hotels\\_3', 'Media\\_1', 'Movies\\_1', 'Music\\_1', 'Music\\_2', 'RentalCars\\_1', 'RentalCars\\_2', 'Restaurants\\_1', 'RideSharing\\_1', 'RideSharing\\_2', 'Services\\_1', 'Services\\_2', 'Services\\_3', 'Travel\\_1', 'Weather\\_1', 'Alarm\\_1', 'Banks\\_2', 'Flights\\_3', 'Hotels\\_4', 'Media\\_2', 'Movies\\_2', 'Restaurants\\_2', 'Services\\_4', 'Buses\\_3', 'Events\\_3', 'Flights\\_4', 'Homes\\_2', 'Media\\_3', 'Messaging\\_1', 'Movies\\_3', 'Music\\_3', 'Payment\\_1', 'RentalCars\\_3', 'Trains\\_1']\n\n\n* cat slot match: how many values of categorical slots are in the possible values of ontology in percentage.\n* non-cat slot span: how many values of non-categorical slots have span annotation in percentage.", "### Licensing Information\n\n\nCC BY-SA 4.0" ]
[ 59, 382, 24, 5, 411, 11 ]
[ "passage: TAGS\n#task_categories-conversational #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-cc-by-sa-4.0 #arxiv-1909.05855 #region-us \n### Dataset Summary\n\n\nThe Schema-Guided Dialogue (SGD) dataset consists of over 20k annotated multi-domain, task-oriented conversations between a human and a virtual assistant. These conversations involve interactions with services and APIs spanning 20 domains, such as banks, events, media, calendar, travel, and weather. For most of these domains, the dataset contains multiple different APIs, many of which have overlapping functionalities but different interfaces, which reflects common real-world scenarios. The wide range of available annotations can be used for intent prediction, slot filling, dialogue state tracking, policy imitation learning, language generation, and user simulation learning, among other tasks for developing large-scale virtual assistants. Additionally, the dataset contains unseen domains and services in the evaluation set to quantify the performance in zero-shot or few-shot settings.\n\n\n* How to get the transformed data from original data:\n\t+ Download URL.\n\t+ Run 'python URL' in the current directory.\n* Main changes of the transformation:\n\t+ Lower case original 'act' as 'intent'.\n\t+ Add 'count' slot for each domain, non-categorical, find span by text matching.\n\t+ Categorize 'dialogue acts' according to the 'intent'.\n\t+ Concatenate multiple values using '|'.\n\t+ Retain 'active\\_intent', 'requested\\_slots', 'service\\_call'.\n* Annotations:\n\t+ dialogue acts, state, db\\_results, service\\_call, active\\_intent, requested\\_slots.### Supported Tasks and Leaderboards\n\n\nNLU, DST, Policy, NLG, E2E### Languages\n\n\nEnglish" ]
2e3934c4112d51cc872379244dd746ff3e68f8f3
# Dataset Card for KVRET

- **Repository:** https://nlp.stanford.edu/blog/a-new-multi-turn-multi-domain-task-oriented-dialogue-dataset/
- **Paper:** https://arxiv.org/pdf/1705.05414.pdf
- **Leaderboard:** None
- **Who transforms the dataset:** Qi Zhu(zhuq96 at gmail dot com)

To use this dataset, you need to install the [ConvLab-3](https://github.com/ConvLab/ConvLab-3) platform first. Then you can load the dataset via:

```
from convlab.util import load_dataset, load_ontology, load_database

dataset = load_dataset('kvret')
ontology = load_ontology('kvret')
database = load_database('kvret')
```

For more usage please refer to [here](https://github.com/ConvLab/ConvLab-3/tree/master/data/unified_datasets).

### Dataset Summary

In an effort to help alleviate the scarcity of multi-turn, multi-domain task-oriented dialogue data, we release a corpus of 3,031 multi-turn dialogues in three distinct domains appropriate for an in-car assistant: calendar scheduling, weather information retrieval, and point-of-interest navigation. Our dialogues are grounded through knowledge bases, ensuring that they are versatile in their natural language without being completely free-form.

- **How to get the transformed data from original data:**
  - Run `python preprocess.py` in the current directory.
- **Main changes of the transformation:**
  - Create user `dialogue acts` and `state` according to the original annotation.
  - Put the dialogue-level KB into the system-side `db_results`.
  - Skip repeated turns and empty dialogues (see the sketch after this card).
- **Annotations:**
  - user dialogue acts, state, db_results.

### Supported Tasks and Leaderboards

NLU, DST, Context-to-response

### Languages

English

### Data Splits

| split      | dialogues | utterances | avg_utt | avg_tokens | avg_domains | cat slot match(state) | cat slot match(goal) | cat slot match(dialogue act) | non-cat slot span(dialogue act) |
|------------|-----------|------------|---------|------------|-------------|-----------------------|----------------------|------------------------------|---------------------------------|
| train      | 2424      | 12720      | 5.25    | 8.02       | 1           | -                     | -                    | -                            | 98.07                           |
| validation | 302       | 1566       | 5.19    | 7.93       | 1           | -                     | -                    | -                            | 97.62                           |
| test       | 304       | 1627       | 5.35    | 7.7        | 1           | -                     | -                    | -                            | 97.72                           |
| all        | 3030      | 15913      | 5.25    | 7.98       | 1           | -                     | -                    | -                            | 97.99                           |

3 domains: ['schedule', 'weather', 'navigate']

- **cat slot match**: how many values of categorical slots are in the possible values of the ontology, in percentage.
- **non-cat slot span**: how many values of non-categorical slots have span annotation, in percentage.

### Citation

```
@inproceedings{eric-etal-2017-key,
    title = "Key-Value Retrieval Networks for Task-Oriented Dialogue",
    author = "Eric, Mihail and Krishnan, Lakshmi and Charette, Francois and Manning, Christopher D.",
    booktitle = "Proceedings of the 18th Annual {SIG}dial Meeting on Discourse and Dialogue",
    year = "2017",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/W17-5506",
}
```

### Licensing Information

TODO
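Below is a minimal sketch of the "skip repeated turns and empty dialogues" cleanup mentioned above, under the assumption that a turn is a (speaker, utterance) pair; the actual `preprocess.py` may differ in its representation and edge cases.

```
def clean_dialogue(turns):
    # turns: list of (speaker, utterance) pairs.
    # Drop blank utterances and verbatim repeats of the previous turn.
    cleaned = []
    for speaker, utt in turns:
        utt = utt.strip()
        if not utt:
            continue
        if cleaned and cleaned[-1] == (speaker, utt):
            continue
        cleaned.append((speaker, utt))
    return cleaned


dialogues = [
    [('driver', 'What is the weather today?'),
     ('driver', 'What is the weather today?'),
     ('assistant', 'It is sunny in Palo Alto.')],
    [('driver', '  ')],
]
# Drop dialogues that are empty after cleaning.
dialogues = [d for d in map(clean_dialogue, dialogues) if d]
print(dialogues)
```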
ConvLab/kvret
[ "task_categories:conversational", "multilinguality:monolingual", "size_categories:1K<n<10K", "language:en", "arxiv:1705.05414", "region:us" ]
2022-06-28T00:57:08+00:00
{"language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "task_categories": ["conversational"], "pretty_name": "KVRET"}
2022-11-25T09:09:44+00:00
[ "1705.05414" ]
[ "en" ]
TAGS #task_categories-conversational #multilinguality-monolingual #size_categories-1K<n<10K #language-English #arxiv-1705.05414 #region-us
Dataset Card for KVRET ====================== * Repository: URL * Paper: URL * Leaderboard: None * Who transforms the dataset: Qi Zhu(zhuq96 at gmail dot com) To use this dataset, you need to install ConvLab-3 platform first. Then you can load the dataset via: For more usage please refer to here. ### Dataset Summary In an effort to help alleviate this problem, we release a corpus of 3,031 multi-turn dialogues in three distinct domains appropriate for an in-car assistant: calendar scheduling, weather information retrieval, and point-of-interest navigation. Our dialogues are grounded through knowledge bases ensuring that they are versatile in their natural language without being completely free form. * How to get the transformed data from original data: + Run 'python URL' in the current directory. * Main changes of the transformation: + Create user 'dialogue acts' and 'state' according to original annotation. + Put dialogue level kb into system side 'db\_results'. + Skip repeated turns and empty dialogue. * Annotations: + user dialogue acts, state, db\_results. ### Supported Tasks and Leaderboards NLU, DST, Context-to-response ### Languages English ### Data Splits 3 domains: ['schedule', 'weather', 'navigate'] * cat slot match: how many values of categorical slots are in the possible values of ontology in percentage. * non-cat slot span: how many values of non-categorical slots have span annotation in percentage. ### Licensing Information TODO
[ "### Dataset Summary\n\n\nIn an effort to help alleviate this problem, we release a corpus of 3,031 multi-turn dialogues in three distinct domains appropriate for an in-car assistant: calendar scheduling, weather information retrieval, and point-of-interest navigation. Our dialogues are grounded through knowledge bases ensuring that they are versatile in their natural language without being completely free form.\n\n\n* How to get the transformed data from original data:\n\t+ Run 'python URL' in the current directory.\n* Main changes of the transformation:\n\t+ Create user 'dialogue acts' and 'state' according to original annotation.\n\t+ Put dialogue level kb into system side 'db\\_results'.\n\t+ Skip repeated turns and empty dialogue.\n* Annotations:\n\t+ user dialogue acts, state, db\\_results.", "### Supported Tasks and Leaderboards\n\n\nNLU, DST, Context-to-response", "### Languages\n\n\nEnglish", "### Data Splits\n\n\n\n3 domains: ['schedule', 'weather', 'navigate']\n\n\n* cat slot match: how many values of categorical slots are in the possible values of ontology in percentage.\n* non-cat slot span: how many values of non-categorical slots have span annotation in percentage.", "### Licensing Information\n\n\nTODO" ]
[ "TAGS\n#task_categories-conversational #multilinguality-monolingual #size_categories-1K<n<10K #language-English #arxiv-1705.05414 #region-us \n", "### Dataset Summary\n\n\nIn an effort to help alleviate this problem, we release a corpus of 3,031 multi-turn dialogues in three distinct domains appropriate for an in-car assistant: calendar scheduling, weather information retrieval, and point-of-interest navigation. Our dialogues are grounded through knowledge bases ensuring that they are versatile in their natural language without being completely free form.\n\n\n* How to get the transformed data from original data:\n\t+ Run 'python URL' in the current directory.\n* Main changes of the transformation:\n\t+ Create user 'dialogue acts' and 'state' according to original annotation.\n\t+ Put dialogue level kb into system side 'db\\_results'.\n\t+ Skip repeated turns and empty dialogue.\n* Annotations:\n\t+ user dialogue acts, state, db\\_results.", "### Supported Tasks and Leaderboards\n\n\nNLU, DST, Context-to-response", "### Languages\n\n\nEnglish", "### Data Splits\n\n\n\n3 domains: ['schedule', 'weather', 'navigate']\n\n\n* cat slot match: how many values of categorical slots are in the possible values of ontology in percentage.\n* non-cat slot span: how many values of non-categorical slots have span annotation in percentage.", "### Licensing Information\n\n\nTODO" ]
[ 50, 190, 23, 5, 75, 8 ]
[ "passage: TAGS\n#task_categories-conversational #multilinguality-monolingual #size_categories-1K<n<10K #language-English #arxiv-1705.05414 #region-us \n### Dataset Summary\n\n\nIn an effort to help alleviate this problem, we release a corpus of 3,031 multi-turn dialogues in three distinct domains appropriate for an in-car assistant: calendar scheduling, weather information retrieval, and point-of-interest navigation. Our dialogues are grounded through knowledge bases ensuring that they are versatile in their natural language without being completely free form.\n\n\n* How to get the transformed data from original data:\n\t+ Run 'python URL' in the current directory.\n* Main changes of the transformation:\n\t+ Create user 'dialogue acts' and 'state' according to original annotation.\n\t+ Put dialogue level kb into system side 'db\\_results'.\n\t+ Skip repeated turns and empty dialogue.\n* Annotations:\n\t+ user dialogue acts, state, db\\_results.### Supported Tasks and Leaderboards\n\n\nNLU, DST, Context-to-response### Languages\n\n\nEnglish### Data Splits\n\n\n\n3 domains: ['schedule', 'weather', 'navigate']\n\n\n* cat slot match: how many values of categorical slots are in the possible values of ontology in percentage.\n* non-cat slot span: how many values of non-categorical slots have span annotation in percentage.### Licensing Information\n\n\nTODO" ]
745c1796cfe209b469394567f496815d2bc495d2
# Dataset Card for DailyDialog

- **Repository:** http://yanran.li/dailydialog
- **Paper:** https://arxiv.org/pdf/1710.03957.pdf
- **Leaderboard:** None
- **Who transforms the dataset:** Qi Zhu(zhuq96 at gmail dot com)

To use this dataset, you need to install the [ConvLab-3](https://github.com/ConvLab/ConvLab-3) platform first. Then you can load the dataset via:

```
from convlab.util import load_dataset, load_ontology, load_database

dataset = load_dataset('dailydialog')
ontology = load_ontology('dailydialog')
database = load_database('dailydialog')
```

For more usage please refer to [here](https://github.com/ConvLab/ConvLab-3/tree/master/data/unified_datasets).

### Dataset Summary

DailyDialog is a high-quality multi-turn dialog dataset. It is intriguing in several aspects. The language is human-written and less noisy. The dialogues in the dataset reflect the way we communicate in daily life and cover various everyday topics. The dataset is also manually labelled with communication intention and emotion information.

- **How to get the transformed data from original data:**
  - Download [ijcnlp_dailydialog.zip](http://yanran.li/files/ijcnlp_dailydialog.zip).
  - Run `python preprocess.py` in the current directory.
- **Main changes of the transformation:**
  - Use the `topic` annotation as `domain`. If duplicated dialogs are annotated with different topics, use the most frequent one.
  - Use the `intent` annotation as a `binary` dialogue act.
  - Retain the emotion annotation in the `emotion` field of each turn.
  - Use nltk to remove spaces before punctuation: `utt = ' '.join([detokenizer.detokenize(word_tokenize(s)) for s in sent_tokenize(utt)])` (a runnable version follows this card).
  - Replace `" ’ "` with `"'"`: `utt = utt.replace(' ’ ', "'")`.
  - Add a space after full stops.
- **Annotations:**
  - intent, emotion

### Supported Tasks and Leaderboards

NLU, NLG

### Languages

English

### Data Splits

| split      | dialogues | utterances | avg_utt | avg_tokens | avg_domains | cat slot match(state) | cat slot match(goal) | cat slot match(dialogue act) | non-cat slot span(dialogue act) |
|------------|-----------|------------|---------|------------|-------------|-----------------------|----------------------|------------------------------|---------------------------------|
| train      | 11118     | 87170      | 7.84    | 11.22      | 1           | -                     | -                    | -                            | -                               |
| validation | 1000      | 8069       | 8.07    | 11.16      | 1           | -                     | -                    | -                            | -                               |
| test       | 1000      | 7740       | 7.74    | 11.36      | 1           | -                     | -                    | -                            | -                               |
| all        | 13118     | 102979     | 7.85    | 11.22      | 1           | -                     | -                    | -                            | -                               |

10 domains: ['Ordinary Life', 'School Life', 'Culture & Education', 'Attitude & Emotion', 'Relationship', 'Tourism', 'Health', 'Work', 'Politics', 'Finance']

- **cat slot match**: how many values of categorical slots are in the possible values of the ontology, in percentage.
- **non-cat slot span**: how many values of non-categorical slots have span annotation, in percentage.

### Citation

```
@InProceedings{li2017dailydialog,
    author = {Li, Yanran and Su, Hui and Shen, Xiaoyu and Li, Wenjie and Cao, Ziqiang and Niu, Shuzi},
    title = {DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset},
    booktitle = {Proceedings of The 8th International Joint Conference on Natural Language Processing (IJCNLP 2017)},
    year = {2017}
}
```

### Licensing Information

[**CC BY-NC-SA 4.0**](https://creativecommons.org/licenses/by-nc-sa/4.0/)
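The detokenization step above can be run as a self-contained snippet; it needs `nltk` plus a one-time `nltk.download('punkt')` (`punkt_tab` on newer NLTK versions). The example utterance is illustrative, not taken from the corpus.

```
from nltk.tokenize import sent_tokenize, word_tokenize
from nltk.tokenize.treebank import TreebankWordDetokenizer

detokenizer = TreebankWordDetokenizer()

utt = "Hello . How are you ? I'm fine , thanks ."
# Remove spaces before punctuation, sentence by sentence.
utt = ' '.join([detokenizer.detokenize(word_tokenize(s)) for s in sent_tokenize(utt)])
utt = utt.replace(' ’ ', "'")
print(utt)  # Hello. How are you? I'm fine, thanks.
```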
ConvLab/dailydialog
[ "task_categories:conversational", "multilinguality:monolingual", "size_categories:10K<n<100K", "language:en", "license:cc-by-nc-sa-4.0", "arxiv:1710.03957", "region:us" ]
2022-06-28T01:07:17+00:00
{"language": ["en"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "task_categories": ["conversational"], "pretty_name": "DailyDialog"}
2022-11-25T09:06:49+00:00
[ "1710.03957" ]
[ "en" ]
TAGS #task_categories-conversational #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-cc-by-nc-sa-4.0 #arxiv-1710.03957 #region-us
Dataset Card for DailyDialog ============================ * Repository: URL * Paper: URL * Leaderboard: None * Who transforms the dataset: Qi Zhu(zhuq96 at gmail dot com) To use this dataset, you need to install ConvLab-3 platform first. Then you can load the dataset via: For more usage please refer to here. ### Dataset Summary DailyDialog is a high-quality multi-turn dialog dataset. It is intriguing in several aspects. The language is human-written and less noisy. The dialogues in the dataset reflect our daily communication way and cover various topics about our daily life. We also manually label the developed dataset with communication intention and emotion information. * How to get the transformed data from original data: + Download ijcnlp\_dailydialog.zip. + Run 'python URL' in the current directory. * Main changes of the transformation: + Use 'topic' annotation as 'domain'. If duplicated dialogs are annotated with different topics, use the most frequent one. + Use 'intent' annotation as 'binary' dialogue act. + Retain emotion annotation in the 'emotion' field of each turn. + Use nltk to remove space before punctuation: 'utt = ' '.join([detokenizer.detokenize(word\_tokenize(s)) for s in sent\_tokenize(utt)])'. + Replace '" ’ "' with '"'"': 'utt = utt.replace(' ’ ', "'")'. + Add space after full-stop * Annotations: + intent, emotion ### Supported Tasks and Leaderboards NLU, NLG ### Languages English ### Data Splits 10 domains: ['Ordinary Life', 'School Life', 'Culture & Education', 'Attitude & Emotion', 'Relationship', 'Tourism', 'Health', 'Work', 'Politics', 'Finance'] * cat slot match: how many values of categorical slots are in the possible values of ontology in percentage. * non-cat slot span: how many values of non-categorical slots have span annotation in percentage. ### Licensing Information CC BY-NC-SA 4.0
[ "### Dataset Summary\n\n\nDailyDialog is a high-quality multi-turn dialog dataset. It is intriguing in several aspects. The language is human-written and less noisy. The dialogues in the dataset reflect our daily communication way and cover various topics about our daily life. We also manually label the developed dataset with communication intention and emotion information.\n\n\n* How to get the transformed data from original data:\n\t+ Download ijcnlp\\_dailydialog.zip.\n\t+ Run 'python URL' in the current directory.\n* Main changes of the transformation:\n\t+ Use 'topic' annotation as 'domain'. If duplicated dialogs are annotated with different topics, use the most frequent one.\n\t+ Use 'intent' annotation as 'binary' dialogue act.\n\t+ Retain emotion annotation in the 'emotion' field of each turn.\n\t+ Use nltk to remove space before punctuation: 'utt = ' '.join([detokenizer.detokenize(word\\_tokenize(s)) for s in sent\\_tokenize(utt)])'.\n\t+ Replace '\" ’ \"' with '\"'\"': 'utt = utt.replace(' ’ ', \"'\")'.\n\t+ Add space after full-stop\n* Annotations:\n\t+ intent, emotion", "### Supported Tasks and Leaderboards\n\n\nNLU, NLG", "### Languages\n\n\nEnglish", "### Data Splits\n\n\n\n10 domains: ['Ordinary Life', 'School Life', 'Culture & Education', 'Attitude & Emotion', 'Relationship', 'Tourism', 'Health', 'Work', 'Politics', 'Finance']\n\n\n* cat slot match: how many values of categorical slots are in the possible values of ontology in percentage.\n* non-cat slot span: how many values of non-categorical slots have span annotation in percentage.", "### Licensing Information\n\n\nCC BY-NC-SA 4.0" ]
[ "TAGS\n#task_categories-conversational #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-cc-by-nc-sa-4.0 #arxiv-1710.03957 #region-us \n", "### Dataset Summary\n\n\nDailyDialog is a high-quality multi-turn dialog dataset. It is intriguing in several aspects. The language is human-written and less noisy. The dialogues in the dataset reflect our daily communication way and cover various topics about our daily life. We also manually label the developed dataset with communication intention and emotion information.\n\n\n* How to get the transformed data from original data:\n\t+ Download ijcnlp\\_dailydialog.zip.\n\t+ Run 'python URL' in the current directory.\n* Main changes of the transformation:\n\t+ Use 'topic' annotation as 'domain'. If duplicated dialogs are annotated with different topics, use the most frequent one.\n\t+ Use 'intent' annotation as 'binary' dialogue act.\n\t+ Retain emotion annotation in the 'emotion' field of each turn.\n\t+ Use nltk to remove space before punctuation: 'utt = ' '.join([detokenizer.detokenize(word\\_tokenize(s)) for s in sent\\_tokenize(utt)])'.\n\t+ Replace '\" ’ \"' with '\"'\"': 'utt = utt.replace(' ’ ', \"'\")'.\n\t+ Add space after full-stop\n* Annotations:\n\t+ intent, emotion", "### Supported Tasks and Leaderboards\n\n\nNLU, NLG", "### Languages\n\n\nEnglish", "### Data Splits\n\n\n\n10 domains: ['Ordinary Life', 'School Life', 'Culture & Education', 'Attitude & Emotion', 'Relationship', 'Tourism', 'Health', 'Work', 'Politics', 'Finance']\n\n\n* cat slot match: how many values of categorical slots are in the possible values of ontology in percentage.\n* non-cat slot span: how many values of non-categorical slots have span annotation in percentage.", "### Licensing Information\n\n\nCC BY-NC-SA 4.0" ]
[ 62, 305, 15, 5, 115, 13 ]
[ "passage: TAGS\n#task_categories-conversational #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-cc-by-nc-sa-4.0 #arxiv-1710.03957 #region-us \n### Dataset Summary\n\n\nDailyDialog is a high-quality multi-turn dialog dataset. It is intriguing in several aspects. The language is human-written and less noisy. The dialogues in the dataset reflect our daily communication way and cover various topics about our daily life. We also manually label the developed dataset with communication intention and emotion information.\n\n\n* How to get the transformed data from original data:\n\t+ Download ijcnlp\\_dailydialog.zip.\n\t+ Run 'python URL' in the current directory.\n* Main changes of the transformation:\n\t+ Use 'topic' annotation as 'domain'. If duplicated dialogs are annotated with different topics, use the most frequent one.\n\t+ Use 'intent' annotation as 'binary' dialogue act.\n\t+ Retain emotion annotation in the 'emotion' field of each turn.\n\t+ Use nltk to remove space before punctuation: 'utt = ' '.join([detokenizer.detokenize(word\\_tokenize(s)) for s in sent\\_tokenize(utt)])'.\n\t+ Replace '\" ’ \"' with '\"'\"': 'utt = utt.replace(' ’ ', \"'\")'.\n\t+ Add space after full-stop\n* Annotations:\n\t+ intent, emotion### Supported Tasks and Leaderboards\n\n\nNLU, NLG### Languages\n\n\nEnglish### Data Splits\n\n\n\n10 domains: ['Ordinary Life', 'School Life', 'Culture & Education', 'Attitude & Emotion', 'Relationship', 'Tourism', 'Health', 'Work', 'Politics', 'Finance']\n\n\n* cat slot match: how many values of categorical slots are in the possible values of ontology in percentage.\n* non-cat slot span: how many values of non-categorical slots have span annotation in percentage." ]
8f53b96a741cf26c07b36f86d0b3bda066786d51
This dataset is designed for streaming mode. The original dataset link: https://data.commoncrawl.org/crawl-data/CC-MAIN-2022-27/warc.paths.gz

_Requirements: selectolax, warcio_

```
from datasets import load_dataset

# The subset name is a number given as a string, e.g. "1", "2", ... (it depends on the dataset)
dataset = load_dataset("psyche/common_crawl", "1", streaming=True)
```
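A short usage sketch building on the snippet above: with `streaming=True`, `load_dataset` returns iterable splits, so a few records can be inspected without downloading the full crawl. The `"train"` split name and the record fields are assumptions here; adjust them to the subset you load.

```
from itertools import islice

from datasets import load_dataset

# Load one numbered subset in streaming mode, as shown above.
dataset = load_dataset("psyche/common_crawl", "1", streaming=True)

# Peek at the first few records; "train" is an assumed split name.
for record in islice(dataset["train"], 3):
    print(record)
```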
psyche/common_crawl
[ "license:apache-2.0", "region:us" ]
2022-06-28T02:45:14+00:00
{"license": ["apache-2.0"]}
2023-09-14T23:50:38+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
This dataset fit on the streaming mode. The origin dataset link: URL _Requirements: selectolax, warcio_
[]
[ "TAGS\n#license-apache-2.0 #region-us \n" ]
[ 14 ]
[ "passage: TAGS\n#license-apache-2.0 #region-us \n" ]
fd907148d4cfaaadad98cd8d39b967ecf95bd094
# AFQMC

Download from https://www.cluebenchmarks.com/introduce.html

## 引用 Citation

如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):

If you use this resource in your work, please cite our [paper](https://arxiv.org/abs/2209.02970):

```text
@article{fengshenbang,
  author    = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
  title     = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
  journal   = {CoRR},
  volume    = {abs/2209.02970},
  year      = {2022}
}
```

也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):

You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):

```text
@misc{Fengshenbang-LM,
  title={Fengshenbang-LM},
  author={IDEA-CCNL},
  year={2021},
  howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
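For reference, a hedged loading sketch via the `datasets` library. The repo id follows this card, but the split and field names (`sentence1`/`sentence2`/`label`, per the CLUE release of AFQMC) are assumptions and may differ from the hosted files:

```
from datasets import load_dataset

# Repo id from this card; split/field names are assumptions (CLUE-style AFQMC).
dataset = load_dataset("IDEA-CCNL/AFQMC")
example = dataset["train"][0]
print(example["sentence1"], example["sentence2"], example["label"])
```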
IDEA-CCNL/AFQMC
[ "license:apache-2.0", "arxiv:2209.02970", "region:us" ]
2022-06-28T05:25:33+00:00
{"license": "apache-2.0"}
2023-04-06T05:32:35+00:00
[ "2209.02970" ]
[]
TAGS #license-apache-2.0 #arxiv-2209.02970 #region-us
# AFQMC Download from URL ## 引用 Citation 如果您在您的工作中使用了我们的模型,可以引用我们的论文: If you are using the resource for your work, please cite the our paper: 也可以引用我们的网站: You can also cite our website: '''text @misc{Fengshenbang-LM, title={Fengshenbang-LM}, author={IDEA-CCNL}, year={2021}, howpublished={\url{URL }
[ "# AFQMC\n\nDownload from URL", "## 引用 Citation\n\n如果您在您的工作中使用了我们的模型,可以引用我们的论文:\n\nIf you are using the resource for your work, please cite the our paper:\n\n\n\n也可以引用我们的网站:\n\nYou can also cite our website:\n\n'''text\n@misc{Fengshenbang-LM,\n title={Fengshenbang-LM},\n author={IDEA-CCNL},\n year={2021},\n howpublished={\\url{URL\n}" ]
[ "TAGS\n#license-apache-2.0 #arxiv-2209.02970 #region-us \n", "# AFQMC\n\nDownload from URL", "## 引用 Citation\n\n如果您在您的工作中使用了我们的模型,可以引用我们的论文:\n\nIf you are using the resource for your work, please cite the our paper:\n\n\n\n也可以引用我们的网站:\n\nYou can also cite our website:\n\n'''text\n@misc{Fengshenbang-LM,\n title={Fengshenbang-LM},\n author={IDEA-CCNL},\n year={2021},\n howpublished={\\url{URL\n}" ]
[ 22, 7, 100 ]
[ "passage: TAGS\n#license-apache-2.0 #arxiv-2209.02970 #region-us \n# AFQMC\n\nDownload from URL## 引用 Citation\n\n如果您在您的工作中使用了我们的模型,可以引用我们的论文:\n\nIf you are using the resource for your work, please cite the our paper:\n\n\n\n也可以引用我们的网站:\n\nYou can also cite our website:\n\n'''text\n@misc{Fengshenbang-LM,\n title={Fengshenbang-LM},\n author={IDEA-CCNL},\n year={2021},\n howpublished={\\url{URL\n}" ]
c98514ddfefb894d36c99df70f4b20abd35aad30
### Takedown notices received by the Hugging Face team Please click on Files and versions to browse them Also check out our: - [Terms of Service](https://huggingface.co/terms-of-service) - [Community Code of Conduct](https://huggingface.co/code-of-conduct) - [Content Guidelines](https://huggingface.co/content-guidelines)
huggingface-legal/takedown-notices
[ "license:cc-by-nc-nd-4.0", "legal", "region:us" ]
2022-06-28T08:04:19+00:00
{"license": "cc-by-nc-nd-4.0", "tags": ["legal"]}
2023-11-21T16:40:37+00:00
[]
[]
TAGS #license-cc-by-nc-nd-4.0 #legal #region-us
### Takedown notices received by the Hugging Face team Please click on Files and versions to browse them Also check out our: - Terms of Service - Community Code of Conduct - Content Guidelines
[ "### Takedown notices received by the Hugging Face team\n\nPlease click on Files and versions to browse them\n\nAlso check out our:\n- Terms of Service\n- Community Code of Conduct\n- Content Guidelines" ]
[ "TAGS\n#license-cc-by-nc-nd-4.0 #legal #region-us \n", "### Takedown notices received by the Hugging Face team\n\nPlease click on Files and versions to browse them\n\nAlso check out our:\n- Terms of Service\n- Community Code of Conduct\n- Content Guidelines" ]
[ 21, 45 ]
[ "passage: TAGS\n#license-cc-by-nc-nd-4.0 #legal #region-us \n### Takedown notices received by the Hugging Face team\n\nPlease click on Files and versions to browse them\n\nAlso check out our:\n- Terms of Service\n- Community Code of Conduct\n- Content Guidelines" ]
a968e7aee0602e257935f1321a02e4287f7d5848
# Dataset Card for DIALOGSum Corpus

## Dataset Description

### Links

- **Homepage:** https://aclanthology.org/2021.findings-acl.449
- **Repository:** https://github.com/cylnlp/dialogsum
- **Paper:** https://aclanthology.org/2021.findings-acl.449
- **Point of Contact:** https://huggingface.co/knkarthick

### Dataset Summary

DialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 dialogues (plus 100 holdout dialogues for topic generation) with corresponding manually labeled summaries and topics.

### Languages

English

## Dataset Structure

### Data Instances

DialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 dialogues (plus 1000 additional test dialogues) split into train, test and validation.

The first instance in the training set:

{'id': 'train_0', 'summary': "Mr. Smith's getting a check-up, and Doctor Hawkins advises him to have one every year. Hawkins'll give some information about their classes and medications to help Mr. Smith quit smoking.", 'dialogue': "#Person1#: Hi, Mr. Smith. I'm Doctor Hawkins. Why are you here today?\n#Person2#: I found it would be a good idea to get a check-up.\n#Person1#: Yes, well, you haven't had one for 5 years. You should have one every year.\n#Person2#: I know. I figure as long as there is nothing wrong, why go see the doctor?\n#Person1#: Well, the best way to avoid serious illnesses is to find out about them early. So try to come at least once a year for your own good.\n#Person2#: Ok.\n#Person1#: Let me see here. Your eyes and ears look fine. Take a deep breath, please. Do you smoke, Mr. Smith?\n#Person2#: Yes.\n#Person1#: Smoking is the leading cause of lung cancer and heart disease, you know. You really should quit.\n#Person2#: I've tried hundreds of times, but I just can't seem to kick the habit.\n#Person1#: Well, we have classes and some medications that might help. I'll give you more information before you leave.\n#Person2#: Ok, thanks doctor.", 'topic': "get a check-up"}

### Data Fields

- dialogue: text of dialogue.
- summary: human written summary of the dialogue.
- topic: human written topic/one liner of the dialogue.
- id: unique file id of an example.

### Data Splits

- train: 12460
- val: 500
- test: 1500
- holdout: 100 [Only 3 features: id, dialogue, topic]

## Dataset Creation

### Curation Rationale

From the paper:
We collect dialogue data for DialogSum from three public dialogue corpora, namely Dailydialog (Li et al., 2017), DREAM (Sun et al., 2019) and MuTual (Cui et al., 2019), as well as an English speaking practice website. These datasets contain face-to-face spoken dialogues that cover a wide range of daily-life topics, including schooling, work, medication, shopping, leisure, and travel. Most conversations take place between friends, colleagues, and between service providers and customers.

Compared with previous datasets, dialogues from DialogSum have distinct characteristics:

- Take place under rich real-life scenarios, including more diverse task-oriented scenarios;
- Have clear communication patterns and intents, which makes them valuable summarization sources;
- Have a reasonable length, which suits the purpose of automatic summarization.

We ask annotators to summarize each dialogue based on the following criteria:

- Convey the most salient information;
- Be brief;
- Preserve important named entities within the conversation;
- Be written from an observer perspective;
- Be written in formal language.

### Who are the source language producers?

linguists

### Who are the annotators?
language experts

## Licensing Information

CC BY-NC-SA 4.0

## Citation Information

```
@inproceedings{chen-etal-2021-dialogsum,
    title = "{D}ialog{S}um: {A} Real-Life Scenario Dialogue Summarization Dataset",
    author = "Chen, Yulong and Liu, Yang and Chen, Liang and Zhang, Yue",
    booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.findings-acl.449",
    doi = "10.18653/v1/2021.findings-acl.449",
    pages = "5062--5074",
}
```

## Contributions

Thanks to [@cylnlp](https://github.com/cylnlp) for adding this dataset.
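A minimal loading sketch for the fields and splits described above. Only the `"train"` split is shown; the names of the other splits ("validation", "test") are assumptions based on the Data Splits section:

```
from datasets import load_dataset

dataset = load_dataset("knkarthick/dialogsum")

sample = dataset["train"][0]
print(sample["id"])        # e.g. "train_0"
print(sample["topic"])     # one-liner topic, e.g. "get a check-up"
print(sample["summary"])   # human-written summary of the dialogue
print(sample["dialogue"])  # "#Person1#: ..." turns separated by newlines
```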
knkarthick/dialogsum
[ "task_categories:summarization", "task_categories:text2text-generation", "task_categories:text-generation", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-nc-sa-4.0", "dialogue-summary", "one-liner-summary", "meeting-title", "email-subject", "region:us" ]
2022-06-28T09:17:20+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": "cc-by-nc-sa-4.0", "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["summarization", "text2text-generation", "text-generation"], "task_ids": [], "pretty_name": "DIALOGSum Corpus", "tags": ["dialogue-summary", "one-liner-summary", "meeting-title", "email-subject"]}
2023-10-03T09:56:21+00:00
[]
[ "en" ]
TAGS #task_categories-summarization #task_categories-text2text-generation #task_categories-text-generation #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-nc-sa-4.0 #dialogue-summary #one-liner-summary #meeting-title #email-subject #region-us
# Dataset Card for DIALOGSum Corpus ## Dataset Description ### Links - Homepage: URL - Repository: URL - Paper: URL - Point of Contact: URL ### Dataset Summary DialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 (Plus 100 holdout data for topic generation) dialogues with corresponding manually labeled summaries and topics. ### Languages English ## Dataset Structure ### Data Instances DialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 dialogues (+1000 tests) split into train, test and validation. The first instance in the training set: {'id': 'train_0', 'summary': "Mr. Smith's getting a check-up, and Doctor Hawkins advises him to have one every year. Hawkins'll give some information about their classes and medications to help Mr. Smith quit smoking.", 'dialogue': "#Person1#: Hi, Mr. Smith. I'm Doctor Hawkins. Why are you here today?\n#Person2#: I found it would be a good idea to get a check-up.\n#Person1#: Yes, well, you haven't had one for 5 years. You should have one every year.\n#Person2#: I know. I figure as long as there is nothing wrong, why go see the doctor?\n#Person1#: Well, the best way to avoid serious illnesses is to find out about them early. So try to come at least once a year for your own good.\n#Person2#: Ok.\n#Person1#: Let me see here. Your eyes and ears look fine. Take a deep breath, please. Do you smoke, Mr. Smith?\n#Person2#: Yes.\n#Person1#: Smoking is the leading cause of lung cancer and heart disease, you know. You really should quit.\n#Person2#: I've tried hundreds of times, but I just can't seem to kick the habit.\n#Person1#: Well, we have classes and some medications that might help. I'll give you more information before you leave.\n#Person2#: Ok, thanks doctor.", 'topic': "get a check-up} ### Data Fields - dialogue: text of dialogue. - summary: human written summary of the dialogue. - topic: human written topic/one liner of the dialogue. - id: unique file id of an example. ### Data Splits - train: 12460 - val: 500 - test: 1500 - holdout: 100 [Only 3 features: id, dialogue, topic] ## Dataset Creation ### Curation Rationale In paper: We collect dialogue data for DialogSum from three public dialogue corpora, namely Dailydialog (Li et al., 2017), DREAM (Sun et al., 2019) and MuTual (Cui et al., 2019), as well as an English speaking practice website. These datasets contain face-to-face spoken dialogues that cover a wide range of daily-life topics, including schooling, work, medication, shopping, leisure, travel. Most conversations take place between friends, colleagues, and between service providers and customers. Compared with previous datasets, dialogues from DialogSum have distinct characteristics: Under rich real-life scenarios, including more diverse task-oriented scenarios; Have clear communication patterns and intents, which is valuable to serve as summarization sources; Have a reasonable length, which comforts the purpose of automatic summarization. We ask annotators to summarize each dialogue based on the following criteria: Convey the most salient information; Be brief; Preserve important named entities within the conversation; Be written from an observer perspective; Be written in formal language. ### Who are the source language producers? linguists ### Who are the annotators? language experts ## Licensing Information CC BY-NC-SA 4.0 ## Contributions Thanks to @cylnlp for adding this dataset.
[ "# Dataset Card for DIALOGSum Corpus", "## Dataset Description", "### Links\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Point of Contact: URL", "### Dataset Summary\nDialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 (Plus 100 holdout data for topic generation) dialogues with corresponding manually labeled summaries and topics.", "### Languages\nEnglish", "## Dataset Structure", "### Data Instances\nDialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 dialogues (+1000 tests) split into train, test and validation.\nThe first instance in the training set:\n{'id': 'train_0', 'summary': \"Mr. Smith's getting a check-up, and Doctor Hawkins advises him to have one every year. Hawkins'll give some information about their classes and medications to help Mr. Smith quit smoking.\", 'dialogue': \"#Person1#: Hi, Mr. Smith. I'm Doctor Hawkins. Why are you here today?\\n#Person2#: I found it would be a good idea to get a check-up.\\n#Person1#: Yes, well, you haven't had one for 5 years. You should have one every year.\\n#Person2#: I know. I figure as long as there is nothing wrong, why go see the doctor?\\n#Person1#: Well, the best way to avoid serious illnesses is to find out about them early. So try to come at least once a year for your own good.\\n#Person2#: Ok.\\n#Person1#: Let me see here. Your eyes and ears look fine. Take a deep breath, please. Do you smoke, Mr. Smith?\\n#Person2#: Yes.\\n#Person1#: Smoking is the leading cause of lung cancer and heart disease, you know. You really should quit.\\n#Person2#: I've tried hundreds of times, but I just can't seem to kick the habit.\\n#Person1#: Well, we have classes and some medications that might help. I'll give you more information before you leave.\\n#Person2#: Ok, thanks doctor.\", 'topic': \"get a check-up}", "### Data Fields\n- dialogue: text of dialogue.\n- summary: human written summary of the dialogue.\n- topic: human written topic/one liner of the dialogue.\n- id: unique file id of an example.", "### Data Splits\n- train: 12460\n- val: 500\n- test: 1500\n- holdout: 100 [Only 3 features: id, dialogue, topic]", "## Dataset Creation", "### Curation Rationale\nIn paper:\nWe collect dialogue data for DialogSum from three public dialogue corpora, namely Dailydialog (Li et al., 2017), DREAM (Sun et al., 2019) and MuTual (Cui et al., 2019), as well as an English speaking practice website. These datasets contain face-to-face spoken dialogues that cover a wide range of daily-life topics, including schooling, work, medication, shopping, leisure, travel. Most conversations take place between friends, colleagues, and between service providers and customers.\n\nCompared with previous datasets, dialogues from DialogSum have distinct characteristics:\n\nUnder rich real-life scenarios, including more diverse task-oriented scenarios;\nHave clear communication patterns and intents, which is valuable to serve as summarization sources;\nHave a reasonable length, which comforts the purpose of automatic summarization.\n\nWe ask annotators to summarize each dialogue based on the following criteria:\nConvey the most salient information;\nBe brief;\nPreserve important named entities within the conversation;\nBe written from an observer perspective;\nBe written in formal language.", "### Who are the source language producers?\nlinguists", "### Who are the annotators?\nlanguage experts", "## Licensing Information\nCC BY-NC-SA 4.0", "## Contributions\nThanks to @cylnlp for adding this dataset." ]
[ "TAGS\n#task_categories-summarization #task_categories-text2text-generation #task_categories-text-generation #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-nc-sa-4.0 #dialogue-summary #one-liner-summary #meeting-title #email-subject #region-us \n", "# Dataset Card for DIALOGSum Corpus", "## Dataset Description", "### Links\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Point of Contact: URL", "### Dataset Summary\nDialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 (Plus 100 holdout data for topic generation) dialogues with corresponding manually labeled summaries and topics.", "### Languages\nEnglish", "## Dataset Structure", "### Data Instances\nDialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 dialogues (+1000 tests) split into train, test and validation.\nThe first instance in the training set:\n{'id': 'train_0', 'summary': \"Mr. Smith's getting a check-up, and Doctor Hawkins advises him to have one every year. Hawkins'll give some information about their classes and medications to help Mr. Smith quit smoking.\", 'dialogue': \"#Person1#: Hi, Mr. Smith. I'm Doctor Hawkins. Why are you here today?\\n#Person2#: I found it would be a good idea to get a check-up.\\n#Person1#: Yes, well, you haven't had one for 5 years. You should have one every year.\\n#Person2#: I know. I figure as long as there is nothing wrong, why go see the doctor?\\n#Person1#: Well, the best way to avoid serious illnesses is to find out about them early. So try to come at least once a year for your own good.\\n#Person2#: Ok.\\n#Person1#: Let me see here. Your eyes and ears look fine. Take a deep breath, please. Do you smoke, Mr. Smith?\\n#Person2#: Yes.\\n#Person1#: Smoking is the leading cause of lung cancer and heart disease, you know. You really should quit.\\n#Person2#: I've tried hundreds of times, but I just can't seem to kick the habit.\\n#Person1#: Well, we have classes and some medications that might help. I'll give you more information before you leave.\\n#Person2#: Ok, thanks doctor.\", 'topic': \"get a check-up}", "### Data Fields\n- dialogue: text of dialogue.\n- summary: human written summary of the dialogue.\n- topic: human written topic/one liner of the dialogue.\n- id: unique file id of an example.", "### Data Splits\n- train: 12460\n- val: 500\n- test: 1500\n- holdout: 100 [Only 3 features: id, dialogue, topic]", "## Dataset Creation", "### Curation Rationale\nIn paper:\nWe collect dialogue data for DialogSum from three public dialogue corpora, namely Dailydialog (Li et al., 2017), DREAM (Sun et al., 2019) and MuTual (Cui et al., 2019), as well as an English speaking practice website. These datasets contain face-to-face spoken dialogues that cover a wide range of daily-life topics, including schooling, work, medication, shopping, leisure, travel. 
Most conversations take place between friends, colleagues, and between service providers and customers.\n\nCompared with previous datasets, dialogues from DialogSum have distinct characteristics:\n\nUnder rich real-life scenarios, including more diverse task-oriented scenarios;\nHave clear communication patterns and intents, which is valuable to serve as summarization sources;\nHave a reasonable length, which comforts the purpose of automatic summarization.\n\nWe ask annotators to summarize each dialogue based on the following criteria:\nConvey the most salient information;\nBe brief;\nPreserve important named entities within the conversation;\nBe written from an observer perspective;\nBe written in formal language.", "### Who are the source language producers?\nlinguists", "### Who are the annotators?\nlanguage experts", "## Licensing Information\nCC BY-NC-SA 4.0", "## Contributions\nThanks to @cylnlp for adding this dataset." ]
[ 133, 10, 4, 23, 52, 5, 6, 428, 46, 35, 5, 256, 12, 11, 12, 17 ]
[ "passage: TAGS\n#task_categories-summarization #task_categories-text2text-generation #task_categories-text-generation #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-cc-by-nc-sa-4.0 #dialogue-summary #one-liner-summary #meeting-title #email-subject #region-us \n# Dataset Card for DIALOGSum Corpus## Dataset Description### Links\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Point of Contact: URL### Dataset Summary\nDialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 (Plus 100 holdout data for topic generation) dialogues with corresponding manually labeled summaries and topics.### Languages\nEnglish## Dataset Structure" ]
51ee8e22888b3aafb4a2601796c76c8fd750ebfd
# Dataset Card for AMI Corpus ## Dataset Description ### Links - **Homepage:** https://groups.inf.ed.ac.uk/ami/corpus/ - **Repository:** https://groups.inf.ed.ac.uk/ami/download/ - **Paper:** https://groups.inf.ed.ac.uk/ami/corpus/overview.shtml - **Point of Contact:** https://huggingface.co/knkarthick ### Dataset Summary The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings. For a gentle introduction to the corpus, see the corpus overview. To access the data, follow the directions given there. Around two-thirds of the data has been elicited using a scenario in which the participants play different roles in a design team, taking a design project from kick-off to completion over the course of a day. The rest consists of naturally occurring meetings in a range of domains. Detailed information can be found in the documentation section. #### Synchronised recording devices: close-talking and far-field microphones, individual and room-view video cameras, projection, a whiteboard, individual pens. #### Annotation: orthographic transcription, annotations for many different phenomena (dialog acts, head movement etc. ). Although the AMI Meeting Corpus was created for the uses of a consortium that is developing meeting browsing technology, it is designed to be useful for a wide range of research areas. The downloads on this website include videos that are suitable for most purposes, but higher resolution videos are available for researchers engaged in video processing. All of the signals and transcription, and some of the annotations, have been released publicly under the Creative Commons Attribution 4.0 International Licence (CC BY 4.0). ### Languages English ## Dataset Structure ### Data Instances AMI Corpus is a meeting summarization dataset, consisting of 279 dialogues split into train, test and validation. The first instance in the training set: {'id': '30', 'summary': "The project manager opens the meeting by stating that they will address functional design and then going over the agenda. The industrial designer gives his presentation, explaining how remote controls function and giving personal preference to a clear, simple design that upgrades the technology as well as incorporates the latest features in chip design. The interface specialist gives her presentation next, addressing the main purpose of a remote control. She pinpoints the main functions of on/off, channel-switching, numbers for choosing particular channels, and volume; and also suggests adding a menu button to change settings such as brightness on the screen. She gives preference to a remote that is small, easy to use, and follows some conventions. The group briefly discusses the possibility of using an LCD screen if cost allows it, since it is fancy and fashionable. The marketing expert presents, giving statistical information from a survey of 100 subjects. She prefers a remote that is sleek, stylish, sophisticated, cool, beautiful, functional, solar-powered, has long battery life, and has a locator. They discuss the target group, deciding it should be 15-35 year olds. After they talk about features they might include, the project manager closes the meeting by allocating tasks.", 'dialogue': "Speaker A: Cool. Do you wanna give me the little cable thing? Yeah. Cool. Ah, that's why it won't meet. Okay, cool. Yep, cool. Okay, functional requirements. Alright, yeah. It's working. Cool, okay. 
So what I have, wh where I've got my information from is a survey where the usability lab um observed remote control use with um a hundred subjects and then they gave them a questionnaire. Um so it was all about, you know, how people feel about the look and feel of the remote control, you know. What's the most annoying things about remote controls and um the possibility of speech recognition and L_C_D_ screens in remote control. Not that they actually gave me any answers on the L_C_D_ screens, so I should have taken that bit out, but anyway. Um okay, so. What they found is that people don't like how current remote controls are, so you know, definitely you should be looking at something quite different. Um seventy five percent of users find most remote controls ugly. Uh the other twenty five percent have no fashion sense. Uh eighty percent of users would spend more to get um you know, a nice looking remote control. Um current remote controls, they don't match the user behaviour well, as you'll see on the next slide. Um I dunno what zapping is, but Oh, right. But you have that little thing that comes up at the bottom and tells you what's on. Um okay, fifty percent of users say they only use ten percent of the buttons, so that's going back to what, you know, we were saying earlier about, you know, do you need all the buttons on the remote control, they just make it look ugly. Okay? Cool. Um so this is my little graph thing. Mm k Okay, well, I can send it to all of you. What it is is um it's cones, 'cause I thought they'd be more exciting. Um but ooh where's it go? Back. Oh. Oh yes, cool. Okay, I'm gonna stop playing with the little pointy thing. Um okay, so like what it shows is how much things are used relatively and what you can clearly see from that is the thing that's used most is the channel selection. What you can't see is volume selection, it's a little bit higher than all the others. Yeah, so what the graph shows is that, you know, power, channel selection and volume selection are important, and the rest of them, you know, nobody really uses and so that's the the numbers along the top represent their like um their importance, you know, so on a scale of one to ten, how important is that and, you know, channel selection and volume selection are absolutely essential, and the power, well it's not quite so essential, apparently, although I don't understand how it couldn't be, um and everything else, I think, you know, you can forget about having those buttons on the remote control, 'cause they're just not needed, and they're not used. Okay. This is the bit that the email messed up for me and that's what I was fiddling about with at the beginning of the thing. Okay, cool. So um okay, so this is what people find annoying about remote controls. Uh that they get lost, that the uh you know, they're not intuitive and that they're bad for repetitive strain injury. I think if you're watching enough T_V_ to get repetitive strain injury from um you know, watching T_V_, then that's the least of your problems, but you know, it's up there. Um that yeah. Okay, so um I mean the the R_S_I_ thing would be that, like when you have the computer keyboards and you keep your wrists up would be something that encourages you want something with an ergonomic t design that encourages good use of the remote control and you know, not straining your wrists watching T_V_. Yes. Okay, cool. Right, um sorry this is pink because I was copying and pasting the table, and I didn't have time to white it out again. 
Um okay, but that shows how people whether they would pay more for voice recognition software. So you can see from that that, you know, younger people to the age of thirty five are quite likely to pay quite a lot more f well quite are quite likely to pay more for voice recognition software, whereas as people get older, they're a bit more sceptical about it and they're less willing to to try it. Um so clearly voice recognition is something to think about, but um you know I d I do wonder how well that would work given that a T_V_, you know, tends to be people talking and um, you know, how are you going to stop it from just flipping channels whilst watching T_V_. Um okay? Cool. Um okay, so these are my personal preferences. So you have sleek, stylish, sophisticated, you know, so something that's, you know, a bit cool. Um you know, functional, so it's useful, but minimalist. Um there's a there's an important thing that, you know, people use when, you know, when you're filling up your home, you know, a lot of people fill up their home with bits of crap, basically, you know, and you've got all this stuff, and you're just like, what the hell is that, who is ever gonna use it? You know, so things should either be functional or beautiful or preferably both, so I think we need to aim for both. Um okay, then a long battery life, like you were talking about earlier and um, you know, I was thinking that solar power would be quite cool because, you know, your remote control just sits there, and you could just sit it in the sunshine and save the environment a bit. Um and then like a locator, so you know, kind of like you have for a mobile phone or not a mobile phone Yeah, that's it, you know. I know, it's weird. My flatmate and I were talking about this on the way into uni this morning and I was like I need to get one for everything. So yeah, so maybe something where you clap and then it beeps, something a kind of sound that you don't often hear on the T_V_, you know, 'cause you don't want your remote control beeping every five minutes, 'cause you you'd then deliberately lose it by throwing it out the window or something. So okay? Cool. That's me. Cat's. Ca. Yeah, I mean that's the thing is that it didn't say in the survey, you know, whether, you know, these are the people that will pay more for a more stylish remote control, but I'm assuming, you know, yes. Well, that's when you go to uni, isn't it? So, you know Yeah. Oh, I've unplugged it. Do you want me to Yeah. Seventy six point three percent. Yeah. Yeah, I kn I mean I know what you're saying about the fifteen to twenty five year olds, but I mean it has been proven that that people of that age group have a higher disposable income because they don't have like I mean, you know, if you're at university, you're paying your rent, but you don't have a mortgage, you don't have a life insurance policy, you don't normally have a car, yeah, so. You're still learning to drive actually, so that just costs more than a car, but yeah. Um so I mean like it is an age group to target, really, I think. No, I mean that's what, that's like fifteen Pounds? You know, I think Yeah, I d I don't know many people without a T_V_. We didn't have a T_V_ last year, and everyone thought we were off our heads, you know. Yeah, I d well we've we've got quite a d decent T_V_. Yeah. 
I think I think the fact that, you know, ninety one point two percent of fifteen to twenty five year olds are saying yes, I would pay more for a voice recognition remote control, does say quite a lot really. You know, so I mean that and the disposable income and I don't think it's something to ignore, you know. Is not a massive difference, you know. No, do totally. You do have it in your mobile phone though, don't you? Because you have like I mean every mobile phone now has like call this person and it calls them. I don't know. Yeah. S so y you'd maybe need a code word. Do you know what I mean? So like when you say change, except that's being said quite a lot on T_V_, so maybe like, you know, remote. I mean how often do people say remote on T_V_? Although I only watch Charmed, so really I wouldn't know but like so you'd just say remote five, you know, remote ten, remote one two nine. I don't think there's a lot of uh voice recognition remote controls. Yeah, that would be another way to do it. Yeah, but then the code word would be even more important, because I mean Sky advertise on every channel, don't they, you know, so then it would be you'd be watching Charmed, and then the Sky advert would come on and it would change to Sky. Yeah, yeah, and that would be really annoying. Yeah. Do you not think that defeats the object of having voice recognition on a remote control though? Yeah, you know, so you have to have the remote control. It's more like if you lost it and it's down the sofa sometime, you can yell at it and it'll just change it, you can look for it later, yeah. Yeah, yeah, I suppose nearer to you but a b like if you have surround sound then Yeah. Yeah, 'cause it's it's quite important that you don't lose the the bit to locate the remote control. Yeah, definitely, yeah. Oh, so y you want our um PowerPoint presentations in there, hey? Okay. There you go. But is everyone's called functional requirements? Okay, so that's good. That's me done. Okay, cool.\r\nSpeaker B: No. Mm. Um um wi on on a what? Oh project project documents, yeah, yeah, yeah, okay. Oh okay, yeah. Yes, I think so. Yeah, the last minute, yeah, yeah. Yeah. Um Okay. Hmm. Mm. Okay, yeah, afterwards, yeah, okay. Thanks. I think we need like some general discussion at the end probably. Yeah. Yeah, I think since since we were discussing some um design issues then I I I would like to continue okay, yeah. Thanks. Oh i Okay, I hope wait. Should it just There's just nothing. Oh right, right, right, um Okay. Nothin okay, something is coming up. No signal? Why? Oh. My my computer went blank now. Adjusting. But I don't see anything I don't see anything on my computer now. This is the problem, but Um. Uh now it's okay. No? No. Oh okay. Okay, that's fine, that's good. Okay, let's start from the beginning. So I'm going to speak about technical functions design uh just like some some first issues that came up. Um 'kay, so the method I was um adopting at this point, it's not um for the for the whole um period of the um all the project but it's just at th at this very moment. Um uh my method was um to look at um other um remote controls, uh so mostly just by searching on the web and to see what um functionality they used. And then um after having got this inspiration and having compared what I found on the web um just to think about what the de what the user really needs and what um what the user might desire as additional uh functionalities. And yeah, and then just to um put the main function of the remote control in in words. 
Um so the findings uh were um that the main function of the remote control is is just sending messages to the television set, so this quite straightforward. And uh w some of the main functions would be switching on, switching off, uh then the user would like to switch the channel um for example just m changing to the next channel to to flip through all all of the possible channels, or then mm uh the other possibility would be that um she might just want to choose one particular channel, so we would need the numbers. And and also the volume is very important. Um um I als okay. 'Kay. Um um among the findings I found that m m most of the curr mm presently available remote controls also include other mm functionalities um in their design, like operating a V_C_R_, but they don't seem to be able to deal with D_V_D_ players, but then there are surely there are many other functionali functions that could possibly be added to them, but according to the last minute update um actually um we do not want to have all this complicated functions added to our design. So my personal preferences would be uh to keep the mm the whole remote control small um just like the physical size. And then it must be easy to use, so it must follow some conventions um like whereabouts you find the on off button and maybe the colour tends to be red or something. Um then yeah, the must-have buttons would be on off and then the channel numbers and then um the one that allows us to go to the next or the previous channel, and then volume has to be there. But then um other functionalities um could be just uh there could be a menu button and you could change things on the screen then, um for example brightness and mm similar functions could be just um done through the menu. And yeah, the last question I had about whether we wanted to incorporate n uh more functionalities, the answer was already no because of the last minute update. So at the for the time being that's uh that's all. If you have questions Yeah, and also it's it's um other question is uh because there are so many different And there are so many different things that could possibly be included because besides video and D_V_D_ there are the mm um video C_D_s and whatever, so it might be problematic to to choose between all these possible things. Um well, I think the buttons are still mm kind of the most um easy for the user to use, I mean um what other options would you have? A little screen or something, but this would be really kind of I think a lot of learning for the user and and I mean the user just wants to get um get a result um quickly, not to spend time in like um giving several orders um I dunno. I think I th I would I would think the put the buttons, but if if you have other mm proposals um. Yeah. Yeah. Mm-hmm. Yep. Uh am I going in the right direction? No. Wait. Okay, here it comes. Okay, here you are. Um that's very good, very interesting. Mm-hmm. Yeah. Yeah, you share a television or something that yeah. It was seventy something, yeah, yeah. Yeah this this is not unaffordable, but the problem is whether people need it, whether they do have a T_V_ to use its full Yeah. Common, the students yeah, yeah. The s the stu yeah, and the remote control might not yeah, it might not even function with the old T_V_. Yeah, we're still yeah. Or w maybe we can just kind of uh uh Yeah, but at the same time I think maybe we can we can just decide to to have both of these groups as our target, because actually I mean they're all still re young people. Yeah. Yeah. Yeah. 
Yeah. An Yeah. Yeah. Yeah but uh um Yeah, yeah sure, yeah, yeah. Yeah. Yeah, w well now the v the voice recognition if if it works wonderfully w we could possibly do away with all buttons, but I think this is not really the right moment yet, because people are just so used to buttons and um, yeah it's it's kind of safer, so we we need both, so the voice recognition would be just an extra, it wouldn't really reduce the size of the remote. Yeah but m but on the other hand, remote control isn't as close to you you probably might just just uh speak into it and and the T_V_ would be already further away, so it might not pick up the other things coming from there. Yeah, but then the remote control I think I mean um the idea is kind of it's it's not that it's sitting there on on top of the television, because then you could already yell at the television and you wouldn't you you wouldn't need the remote control, so the remote control is still something you keep n near yourself. Yeah, yeah, yeah. No, but I I I was just defending the the fact why why we want to keep the remote control close to us, a and uh not to yell at it from the distance. Okay. Oh yeah, yeah. Okay, yeah, mm-hmm. The major ones, yeah. Mm-hmm. Mm-hmm. Yeah. Did you find it? It's just yeah, yeah. Oh so so we'll just put them i there, we we yeah, w we won't even okay. Yeah. Yeah. Uh something conceptual, yeah. Hmm. Sorry, but um the next meeting um are we going to have it um right after lunch or shall we prepare our To prepare, okay, yeah, that's good. Okay. Cool. Okay, see you.\r\nSpeaker C: Mm. You said uh targ target groups, what does that mean? Uh okay, 'kay. So are Okay. Alright. I can go first, yeah. Right. Um so f from the Right sure. Uh okay. So n uh with uh with regard to the uh working design of this uh uh remote control uh I've identified um a few basic uh components of the remote and uh se uh from the design, functional design perspective um w I c we can now uh know wha what exactly the components are and how how they work together with each other. So this is the method that uh I'll mostly be following in my um in my uh role. Um the identification of the components, uh and uh since since I'm dealing only with the technical aspects, I would need feedback from the marketing person uh and uh from the user interface person. Uh we'll then integrate this into the product design at a technical level and uh basically update and come up with a new design, so it's a cyclical process. Okay, so these were the basic findings from today. The last three bullets have been integrated from uh the last minute uh email. Uh I just quickly jotted them down. Um so basically uh the as I told you the identification of how the remote control works and what are the various parts to it uh and what are the different processes um and how the parts uh communicate with each other. Um okay, so e the mee email said that teletext is now outdated, so we need to do away with that functionality of the remote control. Um also uh the remote control should be used only for television, because incorporating other features um makes it more comp complex. And the reason why teletext is outdated because uh of internet and uh the availability of internet over television. How however, our our remote control would only be dealing uh with the the use for television, in order to keep things simple. Um also the management wants that um our design should be unique uh it so it should incorporate um colour and the slogan uh that our company um has it as its standard. 
Okay, so he he here is a functional overview of the remote control. Um there's basically an energy source at the heart uh which feeds into the chip and the user interface. The user interf interface communicates with the chip, so I'll basic go over to the Okay. So if uh if this is our energy source and this is a cell, uh it communicates uh it feeds energy into the into the chip, which basically finds out h uh how how to do everything. There is a user interface here. So whe when the user presses a button, it feeds into the chip and the chip then generates a response and takes the response to an infrared terminal, um which then so the output of the chip is an infrared bit code, which is then communicated to the remote site, which h has an infrared receiver. Um the there can be uh a bulb here or something to indicate whether the remote is on or communicating. Um so these are the essent so a all the functionality of the remote control, whatever new functions that we need to do, um make the chip more complicated uh and bigger, basically. Okay. Um so i in my personal preferences um I'm hoping that we can ke keep the design as simple and clear as possible. This would uh help us uh to upgrade our technology at a future point of time. And uh also if we can incorporate uh the latest features in our chip design, so that our um uh remote control does not become outdated soon and it's compatible with mot most uh televisions. That's about it. So anything that you would like to know or No, I don't have any idea about what each component costs. Um yeah. Anything else? Yeah. Certainly, yeah. So so tha yeah, we definitely need to operate within our constraints, but um unfortunately I I do not have any data, so uh I just identified the functional components for that. Yeah, okay. Yeah. Mm 'kay. I it'll take some time. Oh, there it is, yeah. It'll come up, it um uh no signal. Yeah yeah, it says something now, adjusting Okay. Oh, that's strange. Okay. And one more time. Mm. Sorry, cou could you go back for a second? Uh switching on off channel, uh volume, okay, that's great. So in the u user interface requirements uh uh uh we we have been able to identify what are the basic buttons that we do want. Um but um so so at this stage, uh how we go about implementing those button we will not identify or I mean in we can completely do away with buttons and uh have some kind of a fancy user interface or something like that. But uh is is there any uh uh any thoughts on that? Right. Yeah, and it'll make the costs yeah. Right. Uh I think the co costs will also play a big role when we come to know about them. So well we can probably wait until t we have more knowledge on that. Uh i if the if the costs allow, we can have like an L_C_D_ display and uh with um because we do want something fancy and fashionable as well. So yeah? Cool. try to press oh, okay, yep. Mm. Right. Mm-hmm. Mm. Right. Mm-hmm. Hmm. Right. Mm. Mm. Mm. Some kind of a ring, some Right. Hmm. Okay, that's great, thanks. Mm. I think one of the very interesting things that came up in um uh Ka Kate Cat Cat's uh presentation was um uh this this issue of uh uh like voice recognition being more popular with uh younger people. So if we need to have a target group um then uh I think as far as the m motto of our company is concerned, if we want to have something sleek and uh you know, good looking uh we are better off targeting a younger audience then um you know, people who are comparatively elderly. Um. Right. Right. 
Bu but but the survey did say that f things like voice recognition are more popular with them, so if you want to put in something stylish, then uh th it'll certainly be more popular with this i ye with the younger people as compared to older people, yeah. Right, and Right. Mm. Right. But uh still, if if you can go back to that slide and uh, how popular was it? Oh, oh, okay. That's alright, if you can just look it up on your computer, wh uh um people between twenty five to thirty five, uh how popular was so it was sti still still quite popular amongst them. So even they are seventy six percent, is that high amount? Alright. Yeah. So you're more likely to b Yeah. Yeah. Mm. Bu but even even in the case of twenty five to thirty five it's quite popular, right? So mm uh are are are Mm. Mm. Um I was having a a general outlook on um m most like sophisticated features, but voice recognition itself I'm not very sure about, because one of the p uh things that Cat pointed out was uh uh how do we go about implementing it? Uh and uh Yeah. But how frequently do we use it anyway and um uh h ho how good is it, you know uh voice recognition softwares are still quite uh Yeah. Right. Right. Okay. O Right. Mm. Right. Yeah. Okay, so it seems like a feasible thing to implement uh for for a limited yeah. Mm. W What uh Mm. What wh uh what I was thinking is that there is this uh separation between what the channels are on T_V_ and how they are numbered on the remote control. If we can do with away with that, our product can be really popular uh in the sense that uh a person can say, I want to watch uh I_T_V_ one instead of saying that I want to go onto channel number forty five. Yeah, so if uh if something like that can be incorporated, some kind of Mm-hmm. Alright. Yeah, that's Right. Mm. Mm yeah and it might become very difficult from a distance for the television to understand what you're saying because of the noise factor for the remote control being cl I mean it'll it'll mm. Yeah. Mm. So uh wh another thing uh that can be used is that uh there can be a beeper button on the T_V_, so you can go and press that button and um and the remote control, wherever it is, it'll beep, so we we can probably come to know where it is. Right, yeah, yeah, yeah. Alright, yeah. Right. Okay. So where exactly is this i Ah, okay. Yeah. Yeah, yeah in that one, right yeah. No. Right. I guess I'll find out. Wha what was it again that I was supposed to look into? Con components, oh.\r\nSpeaker D: All hooked up. Okay, so now we are here at the functional design meeting. Um hopefully this meeting I'll be doing a little bit less talking than I did last time 'cause this is when you get to show us what you've been doing individually. The agenda for the meeting, I put it in the sh shared documents folder. I don't know if that meant that you could see it or not. Did anyone? No. Oh well. Um I'll try and do that for the next meeting as well so if you check in there, there's a shared project documents folder. Um and it should be in there. Project documents, yeah. So I'll put it in there. Is it best if I send you an email maybe, to let you know it's there? Yep. I'll do that next time. Um I'll act as secretary for this meeting and just take minutes as we go through, and then I'll send them to you after the meeting. The main the main focus of this meeting is your presentations that you've been preparing during the time, so we'll go through each of you one by one. Um then we need to briefly discuss the new project requirements that were sent to us. 
I just sent at the last minute, I'm sorry about that, but we can see how that affects what you were you were doing. Um and then we need to, by the end of the meeting come to some kind of decision on who our target group's going to be and what the functions of the remote control that's the the main goal is to come up with those two things, target group and functions of the remote control. And we've got forty minutes to do that in. So I would say yeah? As uh who it is that we're going to be trying to sell this thing to, yeah. So we need to yeah, we need to have a fairly defined group that that we want to focus on and then look at the functions um of the dem remote control itself. So with that I think it's best if I hand over to you. Does anyone have a preference for going first? You wanna go first? Okay, so we need to unplug my laptop and plug in yours. I assume we just pull it out? Just before you start, to make it easier, would you three mind emailing me your presentations? Once we you don't have to do it now but when once you go back, just so that I don't have to scribble everything down. Hmm. Mm-hmm. Okay. Do you have any um i idea about costs at this point? Br Okay. 'Cause that's something to consider, I guess, if we're if we're using more advanced technology, it might increase the price. Yeah. That's fine. Are there any more questions, or shall we just skip straight to the next one and then we can discuss all of them together at the end? Yeah, I think that will do. Okay, so do you want to Yes, shall shall we pull this up? I think that has to come out of there. Yeah. Yeah, I thought those last minute things, they're gonna hit you the worst. It ta takes a little Oh, and have you you need to then also press on yours, function F_ eight, so the blue function key at the bottom and F_ eight. Now it's coming, computer no signal. Maybe again? Okay, adjusting. There we go, there we go. Oh, if you press if you press function and that again there's there's usually three modes, one where it's only here, one where it's only there, and one where it's both. Okay, so one more time. Should yeah just wait for a moment, adjusting. Okay. Mm-hmm. Mm-hmm. Mm-hmm. Mm-hmm. Yeah. If I mean that was the the directive that came through from management, but if we had a a decent case for that we really think it's important to include video and D_V_D_, I could get back to them and see. It's w it's just whether it's worth arguing about. Mm-hmm. Yeah. Mm-hmm. Okay. Are there any questions for clarification of Maarika before we go on to the next one? Mm-hmm. Mm. Mm. Mm-hmm. Sure, we can discuss that maybe after the next one. Do you want to yeah. Oh, I'm getting hungry. You set? Uh we need to do the function key thing so that it comes up on here. Hello. Is it plugged in prop it's working? Okay. Excellent. It's um switching between channels, sort of randomly going through. Mm. Ooh, that's a bit difficult to see. If you explain it to us it'll be fine. Yeah. I liked the, I liked the litt ooh come back. No. Okay. Mm-hmm, that's the next one along, yeah? Mm-hmm. Mm-hmm. Mm-hmm. Mm-hmm. The remote control. Mm-hmm. That's alright. Mm. Keys and things like that, yeah. Whistle and it screams at you, yeah. Mm-hmm. That's you, excellent. Um. I'm just gonna tick yes. So, we've got about ten, fifteen minutes to discuss Mm-hmm. Yeah. Mm-hmm. Yeah. Then again I guess the th where it was most popular was the fifteen to twenty five bracket and the I don't know how often they're buying televisions. 
Yeah, but you don't have much money, generally. I would've thought it's it's more that twenty five to thirty five, when people are really moving out and they've got their first job and they want their nice toys and O oh it's on sorry, we unplugged it. Here, let me Yeah. Mm-hmm. Yeah. Yeah, they've got no commitments and usually not a car and all of those things. Kids. Yeah. Yeah, and if we're if we're talking twenty five Euros as a price, that's not unaffordable, even for young people. Yeah. Yeah. But do they But the T_V_s are often kind of someone's old T_V_ that's blah blah and be a bit strange to have a fancy rome remote. Mm. Yeah. Yeah. Yeah. Yeah. Yeah, if we ta if we take fifteen to thirty five, but that then does imply that we should try and incorporate voice recognition. Is that gonna have a an implication for the technical specs? Mm-hmm. Yeah. Yeah. With um but with a T_V_ remote it's gonna be quite limited if we're t saying the main things people want to do is on off channel five, louder, tha that should be relatively simple. Mm. Yeah. Mm-hmm. Yeah, but maybe if you wanna look into that just to just to check. Um, so if we go for the the fifteen to thirty five age group and then of course we're going to get th anyone who's older than thirty five who wants to look young and hip and trendy and has the money, then they'll they'll still go for the same advertising. Yeah, I think we need both. Yeah. Mm. Uh-huh. Uh-huh. So that if that was in the the voice recognition, that would be great. Yeah. Yeah. Watch Sky and yeah. Mm-hmm. But that's definitely a possibility. Yeah. So that you can yell at it, yeah. Yeah. Alright. Mm. Yeah. Yeah. Yeah. Yeah. Mm-hmm. That's but then if you're buying the remote separately, but y you could have something, but i if it was something that you could like stick onto the T_V_ or something, some like a two p if you bought it in a two part pack, so one part attaches to the T_V_. The l Well that's right, but it solves the problem of having different noises. Yeah. Okay, I think we're gonna have to wrap this up um. But if we go away with that that kind of general um specification in mind that we're looking at fifteen to thirty five year olds, we want it to look simple, but still have the buttons so it's easy to use, but only those key buttons, the major buttons and then one sort of menu one, and then voice recognition included as an option um but that obviously needs a little bit more working out as to whether it's really feasible and some of those problems we were mentioning um. What we have to do now is to go back to our little places, complete our questionnaire and some sort of summarisation, which y you'll get immediately by email. Send me your presentations so that I can use them to make the minutes, and then we've got a lunch break and after lunch we go back to our own little stations and have thirty minutes more work. Um I'll put the minutes in that project documents folder, but I'll send you an email when I do it, so that you know. It should be on your desktop, so on the yeah. So I'll put it I'll put them there as soon as I've written them. Yeah, and email them round. Yeah, that would be great. Oh yeah, put them in there. Yeah, then you don't have to email them. No, they're all called something slightly different. Technical requirements and something something, yeah. So, if you put them in there, we'll all be able to see them and refer to them if we need to. Um as to where we're going from here, you're going to look at the components concept. Yeah? 
Whatever that means. Yeah. You'll be looking you'll be looking at the user interface concept, on something conceptual and you're watching trends to see how we go and surely voice recognition'll fall off the map or something that um we'll keep keep our options op hmm? Components, yeah. No, we have we have after lunch we have thirty minutes to ourselves to prepare, so that's fine, w before lunch we just have to complete the questionnaire and some sort of summary. Okay? Right on time. Okay, so you can I guess we'll see you for lunch in a sec?"}

### Data Fields

- dialogue: text of the dialogue.
- summary: human-written summary of the dialogue.
- id: unique file id of an example.

### Data Splits

- train: 209
- val: 42
- test: 28

A minimal loading sketch for these splits is given after the Contributions section below.

## Dataset Creation

### Curation Rationale

Refer to the dataset summary above.

### Who are the source language producers?

Linguists.

### Who are the annotators?

Language experts.

## Licensing Information

Licence: CC BY 4.0 (Creative Commons Attribution 4.0 International).

## Citation Information

```
Carletta, J. (2006) Announcing the AMI Meeting Corpus. The ELRA Newsletter 11(1), January-March, p. 3-5
```

## Contributions

Thanks to Carletta for adding this dataset.
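For orientation, here is a minimal sketch of loading the splits described above with the Hugging Face `datasets` library. The hub id `knkarthick/AMI` comes from this record's metadata, but the exact split keys (e.g. `validation` vs. `val`) are an assumption worth verifying against the hub.

```python
# Minimal sketch, assuming the corpus is hosted on the Hugging Face Hub as
# "knkarthick/AMI" with the train/val/test splits described in this card
# (209 / 42 / 28 examples). Split keys may differ on the hub, so the size
# check below iterates over whatever splits are actually returned.
from datasets import load_dataset

ami = load_dataset("knkarthick/AMI")  # downloads and caches all splits

# Each example carries the three fields listed under Data Fields.
example = ami["train"][0]
print(example["id"])              # unique file id, e.g. '30'
print(example["summary"][:100])   # human-written abstractive summary
print(example["dialogue"][:100])  # full multi-speaker meeting transcript

# Sanity-check split sizes against the card without hardcoding split names.
for split_name, split in ami.items():
    print(split_name, len(split))
```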
knkarthick/AMI
[ "task_categories:summarization", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10<n<1000", "source_datasets:original", "language:en", "license:cc-by-4.0", "region:us" ]
2022-06-28T09:30:41+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10<n<1000"], "source_datasets": ["original"], "task_categories": ["summarization"], "task_ids": [], "pretty_name": "AMI Corpus"}
2022-10-24T08:16:01+00:00
[]
[ "en" ]
Bu but but the survey did say that f things like voice recognition are more popular with them, so if you want to put in something stylish, then uh th it'll certainly be more popular with this i ye with the younger people as compared to older people, yeah. Right, and Right. Mm. Right. But uh still, if if you can go back to that slide and uh, how popular was it? Oh, oh, okay. That's alright, if you can just look it up on your computer, wh uh um people between twenty five to thirty five, uh how popular was so it was sti still still quite popular amongst them. So even they are seventy six percent, is that high amount? Alright. Yeah. So you're more likely to b Yeah. Yeah. Mm. Bu but even even in the case of twenty five to thirty five it's quite popular, right? So mm uh are are are Mm. Mm. Um I was having a a general outlook on um m most like sophisticated features, but voice recognition itself I'm not very sure about, because one of the p uh things that Cat pointed out was uh uh how do we go about implementing it? Uh and uh Yeah. But how frequently do we use it anyway and um uh h ho how good is it, you know uh voice recognition softwares are still quite uh Yeah. Right. Right. Okay. O Right. Mm. Right. Yeah. Okay, so it seems like a feasible thing to implement uh for for a limited yeah. Mm. W What uh Mm. What wh uh what I was thinking is that there is this uh separation between what the channels are on T_V_ and how they are numbered on the remote control. If we can do with away with that, our product can be really popular uh in the sense that uh a person can say, I want to watch uh I_T_V_ one instead of saying that I want to go onto channel number forty five. Yeah, so if uh if something like that can be incorporated, some kind of Mm-hmm. Alright. Yeah, that's Right. Mm. Mm yeah and it might become very difficult from a distance for the television to understand what you're saying because of the noise factor for the remote control being cl I mean it'll it'll mm. Yeah. Mm. So uh wh another thing uh that can be used is that uh there can be a beeper button on the T_V_, so you can go and press that button and um and the remote control, wherever it is, it'll beep, so we we can probably come to know where it is. Right, yeah, yeah, yeah. Alright, yeah. Right. Okay. So where exactly is this i Ah, okay. Yeah. Yeah, yeah in that one, right yeah. No. Right. I guess I'll find out. Wha what was it again that I was supposed to look into? Con components, oh.\\r\\nSpeaker D: All hooked up. Okay, so now we are here at the functional design meeting. Um hopefully this meeting I'll be doing a little bit less talking than I did last time 'cause this is when you get to show us what you've been doing individually. The agenda for the meeting, I put it in the sh shared documents folder. I don't know if that meant that you could see it or not. Did anyone? No. Oh well. Um I'll try and do that for the next meeting as well so if you check in there, there's a shared project documents folder. Um and it should be in there. Project documents, yeah. So I'll put it in there. Is it best if I send you an email maybe, to let you know it's there? Yep. I'll do that next time. Um I'll act as secretary for this meeting and just take minutes as we go through, and then I'll send them to you after the meeting. The main the main focus of this meeting is your presentations that you've been preparing during the time, so we'll go through each of you one by one. Um then we need to briefly discuss the new project requirements that were sent to us. 
I just sent at the last minute, I'm sorry about that, but we can see how that affects what you were you were doing. Um and then we need to, by the end of the meeting come to some kind of decision on who our target group's going to be and what the functions of the remote control that's the the main goal is to come up with those two things, target group and functions of the remote control. And we've got forty minutes to do that in. So I would say yeah? As uh who it is that we're going to be trying to sell this thing to, yeah. So we need to yeah, we need to have a fairly defined group that that we want to focus on and then look at the functions um of the dem remote control itself. So with that I think it's best if I hand over to you. Does anyone have a preference for going first? You wanna go first? Okay, so we need to unplug my laptop and plug in yours. I assume we just pull it out? Just before you start, to make it easier, would you three mind emailing me your presentations? Once we you don't have to do it now but when once you go back, just so that I don't have to scribble everything down. Hmm. Mm-hmm. Okay. Do you have any um i idea about costs at this point? Br Okay. 'Cause that's something to consider, I guess, if we're if we're using more advanced technology, it might increase the price. Yeah. That's fine. Are there any more questions, or shall we just skip straight to the next one and then we can discuss all of them together at the end? Yeah, I think that will do. Okay, so do you want to Yes, shall shall we pull this up? I think that has to come out of there. Yeah. Yeah, I thought those last minute things, they're gonna hit you the worst. It ta takes a little Oh, and have you you need to then also press on yours, function F_ eight, so the blue function key at the bottom and F_ eight. Now it's coming, computer no signal. Maybe again? Okay, adjusting. There we go, there we go. Oh, if you press if you press function and that again there's there's usually three modes, one where it's only here, one where it's only there, and one where it's both. Okay, so one more time. Should yeah just wait for a moment, adjusting. Okay. Mm-hmm. Mm-hmm. Mm-hmm. Mm-hmm. Yeah. If I mean that was the the directive that came through from management, but if we had a a decent case for that we really think it's important to include video and D_V_D_, I could get back to them and see. It's w it's just whether it's worth arguing about. Mm-hmm. Yeah. Mm-hmm. Okay. Are there any questions for clarification of Maarika before we go on to the next one? Mm-hmm. Mm. Mm. Mm-hmm. Sure, we can discuss that maybe after the next one. Do you want to yeah. Oh, I'm getting hungry. You set? Uh we need to do the function key thing so that it comes up on here. Hello. Is it plugged in prop it's working? Okay. Excellent. It's um switching between channels, sort of randomly going through. Mm. Ooh, that's a bit difficult to see. If you explain it to us it'll be fine. Yeah. I liked the, I liked the litt ooh come back. No. Okay. Mm-hmm, that's the next one along, yeah? Mm-hmm. Mm-hmm. Mm-hmm. Mm-hmm. The remote control. Mm-hmm. That's alright. Mm. Keys and things like that, yeah. Whistle and it screams at you, yeah. Mm-hmm. That's you, excellent. Um. I'm just gonna tick yes. So, we've got about ten, fifteen minutes to discuss Mm-hmm. Yeah. Mm-hmm. Yeah. Then again I guess the th where it was most popular was the fifteen to twenty five bracket and the I don't know how often they're buying televisions. 
Yeah, but you don't have much money, generally. I would've thought it's it's more that twenty five to thirty five, when people are really moving out and they've got their first job and they want their nice toys and O oh it's on sorry, we unplugged it. Here, let me Yeah. Mm-hmm. Yeah. Yeah, they've got no commitments and usually not a car and all of those things. Kids. Yeah. Yeah, and if we're if we're talking twenty five Euros as a price, that's not unaffordable, even for young people. Yeah. Yeah. But do they But the T_V_s are often kind of someone's old T_V_ that's blah blah and be a bit strange to have a fancy rome remote. Mm. Yeah. Yeah. Yeah. Yeah. Yeah, if we ta if we take fifteen to thirty five, but that then does imply that we should try and incorporate voice recognition. Is that gonna have a an implication for the technical specs? Mm-hmm. Yeah. Yeah. With um but with a T_V_ remote it's gonna be quite limited if we're t saying the main things people want to do is on off channel five, louder, tha that should be relatively simple. Mm. Yeah. Mm-hmm. Yeah, but maybe if you wanna look into that just to just to check. Um, so if we go for the the fifteen to thirty five age group and then of course we're going to get th anyone who's older than thirty five who wants to look young and hip and trendy and has the money, then they'll they'll still go for the same advertising. Yeah, I think we need both. Yeah. Mm. Uh-huh. Uh-huh. So that if that was in the the voice recognition, that would be great. Yeah. Yeah. Watch Sky and yeah. Mm-hmm. But that's definitely a possibility. Yeah. So that you can yell at it, yeah. Yeah. Alright. Mm. Yeah. Yeah. Yeah. Yeah. Mm-hmm. That's but then if you're buying the remote separately, but y you could have something, but i if it was something that you could like stick onto the T_V_ or something, some like a two p if you bought it in a two part pack, so one part attaches to the T_V_. The l Well that's right, but it solves the problem of having different noises. Yeah. Okay, I think we're gonna have to wrap this up um. But if we go away with that that kind of general um specification in mind that we're looking at fifteen to thirty five year olds, we want it to look simple, but still have the buttons so it's easy to use, but only those key buttons, the major buttons and then one sort of menu one, and then voice recognition included as an option um but that obviously needs a little bit more working out as to whether it's really feasible and some of those problems we were mentioning um. What we have to do now is to go back to our little places, complete our questionnaire and some sort of summarisation, which y you'll get immediately by email. Send me your presentations so that I can use them to make the minutes, and then we've got a lunch break and after lunch we go back to our own little stations and have thirty minutes more work. Um I'll put the minutes in that project documents folder, but I'll send you an email when I do it, so that you know. It should be on your desktop, so on the yeah. So I'll put it I'll put them there as soon as I've written them. Yeah, and email them round. Yeah, that would be great. Oh yeah, put them in there. Yeah, then you don't have to email them. No, they're all called something slightly different. Technical requirements and something something, yeah. So, if you put them in there, we'll all be able to see them and refer to them if we need to. Um as to where we're going from here, you're going to look at the components concept. Yeah? 
Whatever that means. Yeah. You'll be looking you'll be looking at the user interface concept, on something conceptual and you're watching trends to see how we go and surely voice recognition'll fall off the map or something that um we'll keep keep our options op hmm? Components, yeah. No, we have we have after lunch we have thirty minutes to ourselves to prepare, so that's fine, w before lunch we just have to complete the questionnaire and some sort of summary. Okay? Right on time. Okay, so you can I guess we'll see you for lunch in a sec?\"}", "### Data Fields\n- dialogue: text of dialogue.\n- summary: human written summary of the dialogue.\n- id: unique file id of an example.", "### Data Splits\n- train: 209\n- val: 42\n- test: 28", "## Dataset Creation", "### Curation Rationale\nRefer Above.", "### Who are the source language producers?\nlinguists", "### Who are the annotators?\nlanguage experts", "## Licensing Information\nnon-commercial licence: cc-by-4.0", "## Contributions\nThanks to Carletta for adding this dataset." ]
[ "TAGS\n#task_categories-summarization #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10<n<1000 #source_datasets-original #language-English #license-cc-by-4.0 #region-us \n", "# Dataset Card for AMI Corpus", "## Dataset Description", "### Links\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Point of Contact: URL", "### Dataset Summary\nThe AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings. For a gentle introduction to the corpus, see the corpus overview. To access the data, follow the directions given there. Around two-thirds of the data has been elicited using a scenario in which the participants play different roles in a design team, taking a design project from kick-off to completion over the course of a day. The rest consists of naturally occurring meetings in a range of domains. Detailed information can be found in the documentation section.", "#### Synchronised recording devices:\nclose-talking and far-field microphones, individual and room-view video cameras, projection, a whiteboard, individual pens.", "#### Annotation:\northographic transcription, annotations for many different phenomena (dialog acts, head movement etc. ).\n \n\nAlthough the AMI Meeting Corpus was created for the uses of a consortium that is developing meeting browsing technology, it is designed to be useful for a wide range of research areas. The downloads on this website include videos that are suitable for most purposes, but higher resolution videos are available for researchers engaged in video processing. \n \n\nAll of the signals and transcription, and some of the annotations, have been released publicly under the Creative Commons Attribution 4.0 International Licence (CC BY 4.0).", "### Languages\nEnglish", "## Dataset Structure", "### Data Instances\nAMI Corpus is a meeting summarization dataset, consisting of 279 dialogues split into train, test and validation.\nThe first instance in the training set:\n{'id': '30', 'summary': \"The project manager opens the meeting by stating that they will address functional design and then going over the agenda. The industrial designer gives his presentation, explaining how remote controls function and giving personal preference to a clear, simple design that upgrades the technology as well as incorporates the latest features in chip design. The interface specialist gives her presentation next, addressing the main purpose of a remote control. She pinpoints the main functions of on/off, channel-switching, numbers for choosing particular channels, and volume; and also suggests adding a menu button to change settings such as brightness on the screen. She gives preference to a remote that is small, easy to use, and follows some conventions. The group briefly discusses the possibility of using an LCD screen if cost allows it, since it is fancy and fashionable. The marketing expert presents, giving statistical information from a survey of 100 subjects. She prefers a remote that is sleek, stylish, sophisticated, cool, beautiful, functional, solar-powered, has long battery life, and has a locator. They discuss the target group, deciding it should be 15-35 year olds. After they talk about features they might include, the project manager closes the meeting by allocating tasks.\", 'dialogue': \"Speaker A: Cool. Do you wanna give me the little cable thing? Yeah. Cool. Ah, that's why it won't meet. Okay, cool. Yep, cool. Okay, functional requirements. 
Alright, yeah. It's working. Cool, okay. So what I have, wh where I've got my information from is a survey where the usability lab um observed remote control use with um a hundred subjects and then they gave them a questionnaire. Um so it was all about, you know, how people feel about the look and feel of the remote control, you know. What's the most annoying things about remote controls and um the possibility of speech recognition and L_C_D_ screens in remote control. Not that they actually gave me any answers on the L_C_D_ screens, so I should have taken that bit out, but anyway. Um okay, so. What they found is that people don't like how current remote controls are, so you know, definitely you should be looking at something quite different. Um seventy five percent of users find most remote controls ugly. Uh the other twenty five percent have no fashion sense. Uh eighty percent of users would spend more to get um you know, a nice looking remote control. Um current remote controls, they don't match the user behaviour well, as you'll see on the next slide. Um I dunno what zapping is, but Oh, right. But you have that little thing that comes up at the bottom and tells you what's on. Um okay, fifty percent of users say they only use ten percent of the buttons, so that's going back to what, you know, we were saying earlier about, you know, do you need all the buttons on the remote control, they just make it look ugly. Okay? Cool. Um so this is my little graph thing. Mm k Okay, well, I can send it to all of you. What it is is um it's cones, 'cause I thought they'd be more exciting. Um but ooh where's it go? Back. Oh. Oh yes, cool. Okay, I'm gonna stop playing with the little pointy thing. Um okay, so like what it shows is how much things are used relatively and what you can clearly see from that is the thing that's used most is the channel selection. What you can't see is volume selection, it's a little bit higher than all the others. Yeah, so what the graph shows is that, you know, power, channel selection and volume selection are important, and the rest of them, you know, nobody really uses and so that's the the numbers along the top represent their like um their importance, you know, so on a scale of one to ten, how important is that and, you know, channel selection and volume selection are absolutely essential, and the power, well it's not quite so essential, apparently, although I don't understand how it couldn't be, um and everything else, I think, you know, you can forget about having those buttons on the remote control, 'cause they're just not needed, and they're not used. Okay. This is the bit that the email messed up for me and that's what I was fiddling about with at the beginning of the thing. Okay, cool. So um okay, so this is what people find annoying about remote controls. Uh that they get lost, that the uh you know, they're not intuitive and that they're bad for repetitive strain injury. I think if you're watching enough T_V_ to get repetitive strain injury from um you know, watching T_V_, then that's the least of your problems, but you know, it's up there. Um that yeah. Okay, so um I mean the the R_S_I_ thing would be that, like when you have the computer keyboards and you keep your wrists up would be something that encourages you want something with an ergonomic t design that encourages good use of the remote control and you know, not straining your wrists watching T_V_. Yes. Okay, cool. 
Right, um sorry this is pink because I was copying and pasting the table, and I didn't have time to white it out again. Um okay, but that shows how people whether they would pay more for voice recognition software. So you can see from that that, you know, younger people to the age of thirty five are quite likely to pay quite a lot more f well quite are quite likely to pay more for voice recognition software, whereas as people get older, they're a bit more sceptical about it and they're less willing to to try it. Um so clearly voice recognition is something to think about, but um you know I d I do wonder how well that would work given that a T_V_, you know, tends to be people talking and um, you know, how are you going to stop it from just flipping channels whilst watching T_V_. Um okay? Cool. Um okay, so these are my personal preferences. So you have sleek, stylish, sophisticated, you know, so something that's, you know, a bit cool. Um you know, functional, so it's useful, but minimalist. Um there's a there's an important thing that, you know, people use when, you know, when you're filling up your home, you know, a lot of people fill up their home with bits of crap, basically, you know, and you've got all this stuff, and you're just like, what the hell is that, who is ever gonna use it? You know, so things should either be functional or beautiful or preferably both, so I think we need to aim for both. Um okay, then a long battery life, like you were talking about earlier and um, you know, I was thinking that solar power would be quite cool because, you know, your remote control just sits there, and you could just sit it in the sunshine and save the environment a bit. Um and then like a locator, so you know, kind of like you have for a mobile phone or not a mobile phone Yeah, that's it, you know. I know, it's weird. My flatmate and I were talking about this on the way into uni this morning and I was like I need to get one for everything. So yeah, so maybe something where you clap and then it beeps, something a kind of sound that you don't often hear on the T_V_, you know, 'cause you don't want your remote control beeping every five minutes, 'cause you you'd then deliberately lose it by throwing it out the window or something. So okay? Cool. That's me. Cat's. Ca. Yeah, I mean that's the thing is that it didn't say in the survey, you know, whether, you know, these are the people that will pay more for a more stylish remote control, but I'm assuming, you know, yes. Well, that's when you go to uni, isn't it? So, you know Yeah. Oh, I've unplugged it. Do you want me to Yeah. Seventy six point three percent. Yeah. Yeah, I kn I mean I know what you're saying about the fifteen to twenty five year olds, but I mean it has been proven that that people of that age group have a higher disposable income because they don't have like I mean, you know, if you're at university, you're paying your rent, but you don't have a mortgage, you don't have a life insurance policy, you don't normally have a car, yeah, so. You're still learning to drive actually, so that just costs more than a car, but yeah. Um so I mean like it is an age group to target, really, I think. No, I mean that's what, that's like fifteen Pounds? You know, I think Yeah, I d I don't know many people without a T_V_. We didn't have a T_V_ last year, and everyone thought we were off our heads, you know. Yeah, I d well we've we've got quite a d decent T_V_. Yeah. 
I think I think the fact that, you know, ninety one point two percent of fifteen to twenty five year olds are saying yes, I would pay more for a voice recognition remote control, does say quite a lot really. You know, so I mean that and the disposable income and I don't think it's something to ignore, you know. Is not a massive difference, you know. No, do totally. You do have it in your mobile phone though, don't you? Because you have like I mean every mobile phone now has like call this person and it calls them. I don't know. Yeah. S so y you'd maybe need a code word. Do you know what I mean? So like when you say change, except that's being said quite a lot on T_V_, so maybe like, you know, remote. I mean how often do people say remote on T_V_? Although I only watch Charmed, so really I wouldn't know but like so you'd just say remote five, you know, remote ten, remote one two nine. I don't think there's a lot of uh voice recognition remote controls. Yeah, that would be another way to do it. Yeah, but then the code word would be even more important, because I mean Sky advertise on every channel, don't they, you know, so then it would be you'd be watching Charmed, and then the Sky advert would come on and it would change to Sky. Yeah, yeah, and that would be really annoying. Yeah. Do you not think that defeats the object of having voice recognition on a remote control though? Yeah, you know, so you have to have the remote control. It's more like if you lost it and it's down the sofa sometime, you can yell at it and it'll just change it, you can look for it later, yeah. Yeah, yeah, I suppose nearer to you but a b like if you have surround sound then Yeah. Yeah, 'cause it's it's quite important that you don't lose the the bit to locate the remote control. Yeah, definitely, yeah. Oh, so y you want our um PowerPoint presentations in there, hey? Okay. There you go. But is everyone's called functional requirements? Okay, so that's good. That's me done. Okay, cool.\\r\\nSpeaker B: No. Mm. Um um wi on on a what? Oh project project documents, yeah, yeah, yeah, okay. Oh okay, yeah. Yes, I think so. Yeah, the last minute, yeah, yeah. Yeah. Um Okay. Hmm. Mm. Okay, yeah, afterwards, yeah, okay. Thanks. I think we need like some general discussion at the end probably. Yeah. Yeah, I think since since we were discussing some um design issues then I I I would like to continue okay, yeah. Thanks. Oh i Okay, I hope wait. Should it just There's just nothing. Oh right, right, right, um Okay. Nothin okay, something is coming up. No signal? Why? Oh. My my computer went blank now. Adjusting. But I don't see anything I don't see anything on my computer now. This is the problem, but Um. Uh now it's okay. No? No. Oh okay. Okay, that's fine, that's good. Okay, let's start from the beginning. So I'm going to speak about technical functions design uh just like some some first issues that came up. Um 'kay, so the method I was um adopting at this point, it's not um for the for the whole um period of the um all the project but it's just at th at this very moment. Um uh my method was um to look at um other um remote controls, uh so mostly just by searching on the web and to see what um functionality they used. And then um after having got this inspiration and having compared what I found on the web um just to think about what the de what the user really needs and what um what the user might desire as additional uh functionalities. And yeah, and then just to um put the main function of the remote control in in words. 
Um so the findings uh were um that the main function of the remote control is is just sending messages to the television set, so this quite straightforward. And uh w some of the main functions would be switching on, switching off, uh then the user would like to switch the channel um for example just m changing to the next channel to to flip through all all of the possible channels, or then mm uh the other possibility would be that um she might just want to choose one particular channel, so we would need the numbers. And and also the volume is very important. Um um I als okay. 'Kay. Um um among the findings I found that m m most of the curr mm presently available remote controls also include other mm functionalities um in their design, like operating a V_C_R_, but they don't seem to be able to deal with D_V_D_ players, but then there are surely there are many other functionali functions that could possibly be added to them, but according to the last minute update um actually um we do not want to have all this complicated functions added to our design. So my personal preferences would be uh to keep the mm the whole remote control small um just like the physical size. And then it must be easy to use, so it must follow some conventions um like whereabouts you find the on off button and maybe the colour tends to be red or something. Um then yeah, the must-have buttons would be on off and then the channel numbers and then um the one that allows us to go to the next or the previous channel, and then volume has to be there. But then um other functionalities um could be just uh there could be a menu button and you could change things on the screen then, um for example brightness and mm similar functions could be just um done through the menu. And yeah, the last question I had about whether we wanted to incorporate n uh more functionalities, the answer was already no because of the last minute update. So at the for the time being that's uh that's all. If you have questions Yeah, and also it's it's um other question is uh because there are so many different And there are so many different things that could possibly be included because besides video and D_V_D_ there are the mm um video C_D_s and whatever, so it might be problematic to to choose between all these possible things. Um well, I think the buttons are still mm kind of the most um easy for the user to use, I mean um what other options would you have? A little screen or something, but this would be really kind of I think a lot of learning for the user and and I mean the user just wants to get um get a result um quickly, not to spend time in like um giving several orders um I dunno. I think I th I would I would think the put the buttons, but if if you have other mm proposals um. Yeah. Yeah. Mm-hmm. Yep. Uh am I going in the right direction? No. Wait. Okay, here it comes. Okay, here you are. Um that's very good, very interesting. Mm-hmm. Yeah. Yeah, you share a television or something that yeah. It was seventy something, yeah, yeah. Yeah this this is not unaffordable, but the problem is whether people need it, whether they do have a T_V_ to use its full Yeah. Common, the students yeah, yeah. The s the stu yeah, and the remote control might not yeah, it might not even function with the old T_V_. Yeah, we're still yeah. Or w maybe we can just kind of uh uh Yeah, but at the same time I think maybe we can we can just decide to to have both of these groups as our target, because actually I mean they're all still re young people. Yeah. Yeah. Yeah. 
Yeah. An Yeah. Yeah. Yeah but uh um Yeah, yeah sure, yeah, yeah. Yeah. Yeah, w well now the v the voice recognition if if it works wonderfully w we could possibly do away with all buttons, but I think this is not really the right moment yet, because people are just so used to buttons and um, yeah it's it's kind of safer, so we we need both, so the voice recognition would be just an extra, it wouldn't really reduce the size of the remote. Yeah but m but on the other hand, remote control isn't as close to you you probably might just just uh speak into it and and the T_V_ would be already further away, so it might not pick up the other things coming from there. Yeah, but then the remote control I think I mean um the idea is kind of it's it's not that it's sitting there on on top of the television, because then you could already yell at the television and you wouldn't you you wouldn't need the remote control, so the remote control is still something you keep n near yourself. Yeah, yeah, yeah. No, but I I I was just defending the the fact why why we want to keep the remote control close to us, a and uh not to yell at it from the distance. Okay. Oh yeah, yeah. Okay, yeah, mm-hmm. The major ones, yeah. Mm-hmm. Mm-hmm. Yeah. Did you find it? It's just yeah, yeah. Oh so so we'll just put them i there, we we yeah, w we won't even okay. Yeah. Yeah. Uh something conceptual, yeah. Hmm. Sorry, but um the next meeting um are we going to have it um right after lunch or shall we prepare our To prepare, okay, yeah, that's good. Okay. Cool. Okay, see you.\\r\\nSpeaker C: Mm. You said uh targ target groups, what does that mean? Uh okay, 'kay. So are Okay. Alright. I can go first, yeah. Right. Um so f from the Right sure. Uh okay. So n uh with uh with regard to the uh working design of this uh uh remote control uh I've identified um a few basic uh components of the remote and uh se uh from the design, functional design perspective um w I c we can now uh know wha what exactly the components are and how how they work together with each other. So this is the method that uh I'll mostly be following in my um in my uh role. Um the identification of the components, uh and uh since since I'm dealing only with the technical aspects, I would need feedback from the marketing person uh and uh from the user interface person. Uh we'll then integrate this into the product design at a technical level and uh basically update and come up with a new design, so it's a cyclical process. Okay, so these were the basic findings from today. The last three bullets have been integrated from uh the last minute uh email. Uh I just quickly jotted them down. Um so basically uh the as I told you the identification of how the remote control works and what are the various parts to it uh and what are the different processes um and how the parts uh communicate with each other. Um okay, so e the mee email said that teletext is now outdated, so we need to do away with that functionality of the remote control. Um also uh the remote control should be used only for television, because incorporating other features um makes it more comp complex. And the reason why teletext is outdated because uh of internet and uh the availability of internet over television. How however, our our remote control would only be dealing uh with the the use for television, in order to keep things simple. Um also the management wants that um our design should be unique uh it so it should incorporate um colour and the slogan uh that our company um has it as its standard. 
Okay, so he he here is a functional overview of the remote control. Um there's basically an energy source at the heart uh which feeds into the chip and the user interface. The user interf interface communicates with the chip, so I'll basic go over to the Okay. So if uh if this is our energy source and this is a cell, uh it communicates uh it feeds energy into the into the chip, which basically finds out h uh how how to do everything. There is a user interface here. So whe when the user presses a button, it feeds into the chip and the chip then generates a response and takes the response to an infrared terminal, um which then so the output of the chip is an infrared bit code, which is then communicated to the remote site, which h has an infrared receiver. Um the there can be uh a bulb here or something to indicate whether the remote is on or communicating. Um so these are the essent so a all the functionality of the remote control, whatever new functions that we need to do, um make the chip more complicated uh and bigger, basically. Okay. Um so i in my personal preferences um I'm hoping that we can ke keep the design as simple and clear as possible. This would uh help us uh to upgrade our technology at a future point of time. And uh also if we can incorporate uh the latest features in our chip design, so that our um uh remote control does not become outdated soon and it's compatible with mot most uh televisions. That's about it. So anything that you would like to know or No, I don't have any idea about what each component costs. Um yeah. Anything else? Yeah. Certainly, yeah. So so tha yeah, we definitely need to operate within our constraints, but um unfortunately I I do not have any data, so uh I just identified the functional components for that. Yeah, okay. Yeah. Mm 'kay. I it'll take some time. Oh, there it is, yeah. It'll come up, it um uh no signal. Yeah yeah, it says something now, adjusting Okay. Oh, that's strange. Okay. And one more time. Mm. Sorry, cou could you go back for a second? Uh switching on off channel, uh volume, okay, that's great. So in the u user interface requirements uh uh uh we we have been able to identify what are the basic buttons that we do want. Um but um so so at this stage, uh how we go about implementing those button we will not identify or I mean in we can completely do away with buttons and uh have some kind of a fancy user interface or something like that. But uh is is there any uh uh any thoughts on that? Right. Yeah, and it'll make the costs yeah. Right. Uh I think the co costs will also play a big role when we come to know about them. So well we can probably wait until t we have more knowledge on that. Uh i if the if the costs allow, we can have like an L_C_D_ display and uh with um because we do want something fancy and fashionable as well. So yeah? Cool. try to press oh, okay, yep. Mm. Right. Mm-hmm. Mm. Right. Mm-hmm. Hmm. Right. Mm. Mm. Mm. Some kind of a ring, some Right. Hmm. Okay, that's great, thanks. Mm. I think one of the very interesting things that came up in um uh Ka Kate Cat Cat's uh presentation was um uh this this issue of uh uh like voice recognition being more popular with uh younger people. So if we need to have a target group um then uh I think as far as the m motto of our company is concerned, if we want to have something sleek and uh you know, good looking uh we are better off targeting a younger audience then um you know, people who are comparatively elderly. Um. Right. Right. 
Bu but but the survey did say that f things like voice recognition are more popular with them, so if you want to put in something stylish, then uh th it'll certainly be more popular with this i ye with the younger people as compared to older people, yeah. Right, and Right. Mm. Right. But uh still, if if you can go back to that slide and uh, how popular was it? Oh, oh, okay. That's alright, if you can just look it up on your computer, wh uh um people between twenty five to thirty five, uh how popular was so it was sti still still quite popular amongst them. So even they are seventy six percent, is that high amount? Alright. Yeah. So you're more likely to b Yeah. Yeah. Mm. Bu but even even in the case of twenty five to thirty five it's quite popular, right? So mm uh are are are Mm. Mm. Um I was having a a general outlook on um m most like sophisticated features, but voice recognition itself I'm not very sure about, because one of the p uh things that Cat pointed out was uh uh how do we go about implementing it? Uh and uh Yeah. But how frequently do we use it anyway and um uh h ho how good is it, you know uh voice recognition softwares are still quite uh Yeah. Right. Right. Okay. O Right. Mm. Right. Yeah. Okay, so it seems like a feasible thing to implement uh for for a limited yeah. Mm. W What uh Mm. What wh uh what I was thinking is that there is this uh separation between what the channels are on T_V_ and how they are numbered on the remote control. If we can do with away with that, our product can be really popular uh in the sense that uh a person can say, I want to watch uh I_T_V_ one instead of saying that I want to go onto channel number forty five. Yeah, so if uh if something like that can be incorporated, some kind of Mm-hmm. Alright. Yeah, that's Right. Mm. Mm yeah and it might become very difficult from a distance for the television to understand what you're saying because of the noise factor for the remote control being cl I mean it'll it'll mm. Yeah. Mm. So uh wh another thing uh that can be used is that uh there can be a beeper button on the T_V_, so you can go and press that button and um and the remote control, wherever it is, it'll beep, so we we can probably come to know where it is. Right, yeah, yeah, yeah. Alright, yeah. Right. Okay. So where exactly is this i Ah, okay. Yeah. Yeah, yeah in that one, right yeah. No. Right. I guess I'll find out. Wha what was it again that I was supposed to look into? Con components, oh.\\r\\nSpeaker D: All hooked up. Okay, so now we are here at the functional design meeting. Um hopefully this meeting I'll be doing a little bit less talking than I did last time 'cause this is when you get to show us what you've been doing individually. The agenda for the meeting, I put it in the sh shared documents folder. I don't know if that meant that you could see it or not. Did anyone? No. Oh well. Um I'll try and do that for the next meeting as well so if you check in there, there's a shared project documents folder. Um and it should be in there. Project documents, yeah. So I'll put it in there. Is it best if I send you an email maybe, to let you know it's there? Yep. I'll do that next time. Um I'll act as secretary for this meeting and just take minutes as we go through, and then I'll send them to you after the meeting. The main the main focus of this meeting is your presentations that you've been preparing during the time, so we'll go through each of you one by one. Um then we need to briefly discuss the new project requirements that were sent to us. 
I just sent at the last minute, I'm sorry about that, but we can see how that affects what you were you were doing. Um and then we need to, by the end of the meeting come to some kind of decision on who our target group's going to be and what the functions of the remote control that's the the main goal is to come up with those two things, target group and functions of the remote control. And we've got forty minutes to do that in. So I would say yeah? As uh who it is that we're going to be trying to sell this thing to, yeah. So we need to yeah, we need to have a fairly defined group that that we want to focus on and then look at the functions um of the dem remote control itself. So with that I think it's best if I hand over to you. Does anyone have a preference for going first? You wanna go first? Okay, so we need to unplug my laptop and plug in yours. I assume we just pull it out? Just before you start, to make it easier, would you three mind emailing me your presentations? Once we you don't have to do it now but when once you go back, just so that I don't have to scribble everything down. Hmm. Mm-hmm. Okay. Do you have any um i idea about costs at this point? Br Okay. 'Cause that's something to consider, I guess, if we're if we're using more advanced technology, it might increase the price. Yeah. That's fine. Are there any more questions, or shall we just skip straight to the next one and then we can discuss all of them together at the end? Yeah, I think that will do. Okay, so do you want to Yes, shall shall we pull this up? I think that has to come out of there. Yeah. Yeah, I thought those last minute things, they're gonna hit you the worst. It ta takes a little Oh, and have you you need to then also press on yours, function F_ eight, so the blue function key at the bottom and F_ eight. Now it's coming, computer no signal. Maybe again? Okay, adjusting. There we go, there we go. Oh, if you press if you press function and that again there's there's usually three modes, one where it's only here, one where it's only there, and one where it's both. Okay, so one more time. Should yeah just wait for a moment, adjusting. Okay. Mm-hmm. Mm-hmm. Mm-hmm. Mm-hmm. Yeah. If I mean that was the the directive that came through from management, but if we had a a decent case for that we really think it's important to include video and D_V_D_, I could get back to them and see. It's w it's just whether it's worth arguing about. Mm-hmm. Yeah. Mm-hmm. Okay. Are there any questions for clarification of Maarika before we go on to the next one? Mm-hmm. Mm. Mm. Mm-hmm. Sure, we can discuss that maybe after the next one. Do you want to yeah. Oh, I'm getting hungry. You set? Uh we need to do the function key thing so that it comes up on here. Hello. Is it plugged in prop it's working? Okay. Excellent. It's um switching between channels, sort of randomly going through. Mm. Ooh, that's a bit difficult to see. If you explain it to us it'll be fine. Yeah. I liked the, I liked the litt ooh come back. No. Okay. Mm-hmm, that's the next one along, yeah? Mm-hmm. Mm-hmm. Mm-hmm. Mm-hmm. The remote control. Mm-hmm. That's alright. Mm. Keys and things like that, yeah. Whistle and it screams at you, yeah. Mm-hmm. That's you, excellent. Um. I'm just gonna tick yes. So, we've got about ten, fifteen minutes to discuss Mm-hmm. Yeah. Mm-hmm. Yeah. Then again I guess the th where it was most popular was the fifteen to twenty five bracket and the I don't know how often they're buying televisions. 
Yeah, but you don't have much money, generally. I would've thought it's it's more that twenty five to thirty five, when people are really moving out and they've got their first job and they want their nice toys and O oh it's on sorry, we unplugged it. Here, let me Yeah. Mm-hmm. Yeah. Yeah, they've got no commitments and usually not a car and all of those things. Kids. Yeah. Yeah, and if we're if we're talking twenty five Euros as a price, that's not unaffordable, even for young people. Yeah. Yeah. But do they But the T_V_s are often kind of someone's old T_V_ that's blah blah and be a bit strange to have a fancy rome remote. Mm. Yeah. Yeah. Yeah. Yeah. Yeah, if we ta if we take fifteen to thirty five, but that then does imply that we should try and incorporate voice recognition. Is that gonna have a an implication for the technical specs? Mm-hmm. Yeah. Yeah. With um but with a T_V_ remote it's gonna be quite limited if we're t saying the main things people want to do is on off channel five, louder, tha that should be relatively simple. Mm. Yeah. Mm-hmm. Yeah, but maybe if you wanna look into that just to just to check. Um, so if we go for the the fifteen to thirty five age group and then of course we're going to get th anyone who's older than thirty five who wants to look young and hip and trendy and has the money, then they'll they'll still go for the same advertising. Yeah, I think we need both. Yeah. Mm. Uh-huh. Uh-huh. So that if that was in the the voice recognition, that would be great. Yeah. Yeah. Watch Sky and yeah. Mm-hmm. But that's definitely a possibility. Yeah. So that you can yell at it, yeah. Yeah. Alright. Mm. Yeah. Yeah. Yeah. Yeah. Mm-hmm. That's but then if you're buying the remote separately, but y you could have something, but i if it was something that you could like stick onto the T_V_ or something, some like a two p if you bought it in a two part pack, so one part attaches to the T_V_. The l Well that's right, but it solves the problem of having different noises. Yeah. Okay, I think we're gonna have to wrap this up um. But if we go away with that that kind of general um specification in mind that we're looking at fifteen to thirty five year olds, we want it to look simple, but still have the buttons so it's easy to use, but only those key buttons, the major buttons and then one sort of menu one, and then voice recognition included as an option um but that obviously needs a little bit more working out as to whether it's really feasible and some of those problems we were mentioning um. What we have to do now is to go back to our little places, complete our questionnaire and some sort of summarisation, which y you'll get immediately by email. Send me your presentations so that I can use them to make the minutes, and then we've got a lunch break and after lunch we go back to our own little stations and have thirty minutes more work. Um I'll put the minutes in that project documents folder, but I'll send you an email when I do it, so that you know. It should be on your desktop, so on the yeah. So I'll put it I'll put them there as soon as I've written them. Yeah, and email them round. Yeah, that would be great. Oh yeah, put them in there. Yeah, then you don't have to email them. No, they're all called something slightly different. Technical requirements and something something, yeah. So, if you put them in there, we'll all be able to see them and refer to them if we need to. Um as to where we're going from here, you're going to look at the components concept. Yeah? 
Whatever that means. Yeah. You'll be looking you'll be looking at the user interface concept, on something conceptual and you're watching trends to see how we go and surely voice recognition'll fall off the map or something that um we'll keep keep our options op hmm? Components, yeah. No, we have we have after lunch we have thirty minutes to ourselves to prepare, so that's fine, w before lunch we just have to complete the questionnaire and some sort of summary. Okay? Right on time. Okay, so you can I guess we'll see you for lunch in a sec?\"}", "### Data Fields\n- dialogue: text of dialogue.\n- summary: human written summary of the dialogue.\n- id: unique file id of an example.", "### Data Splits\n- train: 209\n- val: 42\n- test: 28", "## Dataset Creation", "### Curation Rationale\nRefer Above.", "### Who are the source language producers?\nlinguists", "### Who are the annotators?\nlanguage experts", "## Licensing Information\nnon-commercial licence: cc-by-4.0", "## Contributions\nThanks to Carletta for adding this dataset." ]
[ 79, 8, 4, 23, 134, 39, 138, 5, 6, 8789, 32, 17, 5, 11, 12, 11, 18, 14 ]
[ "passage: TAGS\n#task_categories-summarization #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10<n<1000 #source_datasets-original #language-English #license-cc-by-4.0 #region-us \n# Dataset Card for AMI Corpus## Dataset Description### Links\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Point of Contact: URL### Dataset Summary\nThe AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings. For a gentle introduction to the corpus, see the corpus overview. To access the data, follow the directions given there. Around two-thirds of the data has been elicited using a scenario in which the participants play different roles in a design team, taking a design project from kick-off to completion over the course of a day. The rest consists of naturally occurring meetings in a range of domains. Detailed information can be found in the documentation section.#### Synchronised recording devices:\nclose-talking and far-field microphones, individual and room-view video cameras, projection, a whiteboard, individual pens.#### Annotation:\northographic transcription, annotations for many different phenomena (dialog acts, head movement etc. ).\n \n\nAlthough the AMI Meeting Corpus was created for the uses of a consortium that is developing meeting browsing technology, it is designed to be useful for a wide range of research areas. The downloads on this website include videos that are suitable for most purposes, but higher resolution videos are available for researchers engaged in video processing. \n \n\nAll of the signals and transcription, and some of the annotations, have been released publicly under the Creative Commons Attribution 4.0 International Licence (CC BY 4.0).### Languages\nEnglish## Dataset Structure" ]
d139909f9f053a68e2ef99acabe4f1d0d78c2ee1
For details, see [github:mmdjiji/bert-chinese-idioms](https://github.com/mmdjiji/bert-chinese-idioms). [preprocess.js](preprocess.js) is a Node.JS script that generates the data for training the language model.
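For a quick look at the data itself, here is a minimal loading sketch (it assumes the `datasets` library is installed and that the repository loads with its default settings; depending on the repo layout an explicit config name or `data_files` argument may be needed):

```python
from datasets import load_dataset

# Load the idiom dataset from the Hugging Face Hub by the id given on this card.
ds = load_dataset("mmdjiji/bert-chinese-idioms")
print(ds)              # show the available splits and their sizes
print(ds["train"][0])  # peek at the first example, assuming a "train" split exists
```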
mmdjiji/bert-chinese-idioms
[ "license:gpl-3.0", "region:us" ]
2022-06-28T10:13:43+00:00
{"license": "gpl-3.0"}
2022-06-28T10:41:58+00:00
[]
[]
TAGS #license-gpl-3.0 #region-us
For details, see github:mmdjiji/bert-chinese-idioms. URL is a Node.JS script that generates the data for training the language model.
[]
[ "TAGS\n#license-gpl-3.0 #region-us \n" ]
[ 14 ]
[ "passage: TAGS\n#license-gpl-3.0 #region-us \n" ]
f069243d3560f018e22799d15d67c64e393b7977
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: JonatanGk/roberta-base-bne-finetuned-catalonia-independence-detector * Dataset: catalonia_independence To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-2e072638-8015092
[ "autotrain", "evaluation", "region:us" ]
2022-06-28T12:08:56+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["catalonia_independence"], "eval_info": {"task": "multi_class_classification", "model": "JonatanGk/roberta-base-bne-finetuned-catalonia-independence-detector", "metrics": [], "dataset_name": "catalonia_independence", "dataset_config": "catalan", "dataset_split": "test", "col_mapping": {"text": "TWEET", "target": "LABEL"}}}
2022-06-28T12:09:33+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Text Classification * Model: JonatanGk/roberta-base-bne-finetuned-catalonia-independence-detector * Dataset: catalonia_independence To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: JonatanGk/roberta-base-bne-finetuned-catalonia-independence-detector\n* Dataset: catalonia_independence\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: JonatanGk/roberta-base-bne-finetuned-catalonia-independence-detector\n* Dataset: catalonia_independence\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 100, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: JonatanGk/roberta-base-bne-finetuned-catalonia-independence-detector\n* Dataset: catalonia_independence\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
2525155fd1f996230d5b5776ccef80397b640d3f
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: JonatanGk/roberta-base-ca-finetuned-catalonia-independence-detector * Dataset: catalonia_independence To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-2e072638-8015093
[ "autotrain", "evaluation", "region:us" ]
2022-06-28T12:09:01+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["catalonia_independence"], "eval_info": {"task": "multi_class_classification", "model": "JonatanGk/roberta-base-ca-finetuned-catalonia-independence-detector", "metrics": [], "dataset_name": "catalonia_independence", "dataset_config": "catalan", "dataset_split": "test", "col_mapping": {"text": "TWEET", "target": "LABEL"}}}
2022-06-28T12:09:36+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Text Classification * Model: JonatanGk/roberta-base-ca-finetuned-catalonia-independence-detector * Dataset: catalonia_independence To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: JonatanGk/roberta-base-ca-finetuned-catalonia-independence-detector\n* Dataset: catalonia_independence\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: JonatanGk/roberta-base-ca-finetuned-catalonia-independence-detector\n* Dataset: catalonia_independence\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 99, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: JonatanGk/roberta-base-ca-finetuned-catalonia-independence-detector\n* Dataset: catalonia_independence\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
34616fcfdf08e018d6924aba7a147d0e77a01f10
# Greentext Dataset This is content pulled from various archives to create a "greentext bot" of sorts using GPT-JT. Really, just a dumb joke I made with some friends. ## Biases & Limitations This dataset contains characters such as \n and u2019d that need to be filtered out manually. Needless to say, this dataset contains *many* instances of profanity & biases, as it is trained on data from hell. I don't recommend actually using any of this.
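If you do want to try that manual filtering, here is a rough cleanup sketch (the exact artifact forms are my guess from the two examples above, and `clean_greentext` is a hypothetical helper, not something shipped with this repo):

```python
def clean_greentext(text: str) -> str:
    """Best-effort repair of the escape artifacts mentioned above."""
    # Literal backslash-n sequences back into real newlines.
    text = text.replace("\\n", "\n")
    # "u2019" is almost certainly a stranded \u2019 escape (right single quote),
    # e.g. "Iu2019d" -> "I'd"; legitimate occurrences of that substring are unlikely.
    text = text.replace("u2019", "\u2019")
    return text

print(clean_greentext("be me\\nthink Iu2019d never need this"))
```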
DarwinAnim8or/greentext
[ "task_categories:text2text-generation", "annotations_creators:no-annotation", "language_creators:machine-generated", "multilinguality:monolingual", "language:en", "license:unknown", "grug", "internet", "greentext", "region:us" ]
2022-06-28T13:44:54+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text2text-generation"], "task_ids": [], "pretty_name": "Greentext Dataset\n\nThis is content pulled from various archives to create a \"greentext bot\" or sorts using GPT-JT-8Bit. ", "tags": ["grug", "internet", "greentext"]}
2023-01-24T18:32:57+00:00
[]
[ "en" ]
TAGS #task_categories-text2text-generation #annotations_creators-no-annotation #language_creators-machine-generated #multilinguality-monolingual #language-English #license-unknown #grug #internet #greentext #region-us
# Greentext Dataset This is content pulled from various archives to create a "greentext bot" of sorts using GPT-JT. Really, just a dumb joke I made with some friends. ## Biases & Limitations This dataset contains characters such as \n and u2019d that need to be filtered out manually. Needless to say, this dataset contains *many* instances of profanity & biases, as it is trained on data from hell. I don't recommend actually using any of this.
[ "# Greentext Dataset\n\nThis is content pulled from various archives to create a \"greentext bot\" or sorts using GPT-JT. \nReally, just a dumb joke I made with some friends.", "## Biases & Limitations\nThis dataset contains charaters such as \\n and u2019d that need to be filtered out manually.\nNeedless to say, this dataset contains *many* instances of profanity & biases, as it is trained on data from hell. \nI don't recommend actually using any of this." ]
[ "TAGS\n#task_categories-text2text-generation #annotations_creators-no-annotation #language_creators-machine-generated #multilinguality-monolingual #language-English #license-unknown #grug #internet #greentext #region-us \n", "# Greentext Dataset\n\nThis is content pulled from various archives to create a \"greentext bot\" or sorts using GPT-JT. \nReally, just a dumb joke I made with some friends.", "## Biases & Limitations\nThis dataset contains charaters such as \\n and u2019d that need to be filtered out manually.\nNeedless to say, this dataset contains *many* instances of profanity & biases, as it is trained on data from hell. \nI don't recommend actually using any of this." ]
[ 70, 44, 80 ]
[ "passage: TAGS\n#task_categories-text2text-generation #annotations_creators-no-annotation #language_creators-machine-generated #multilinguality-monolingual #language-English #license-unknown #grug #internet #greentext #region-us \n# Greentext Dataset\n\nThis is content pulled from various archives to create a \"greentext bot\" or sorts using GPT-JT. \nReally, just a dumb joke I made with some friends.## Biases & Limitations\nThis dataset contains charaters such as \\n and u2019d that need to be filtered out manually.\nNeedless to say, this dataset contains *many* instances of profanity & biases, as it is trained on data from hell. \nI don't recommend actually using any of this." ]
45bf3946df5ea6c44a6470fd97c24d6931f66672
# NEMO-Corpus - The Hebrew Named Entities and Morphology Corpus ## Config and Usage Config: * flat_token - flatten tags * nested_token - nested tags * flat_morph - flatten tags with morphologically presegmentized tokens * nested_morph - nested tags with morphologically presegmentized tokens Note: It seems that a couple of samples for the flat_token and nested_token are mistakenly presegmented, and as a result, these samples have white space in the token. ```python from datasets import load_dataset # the main corpus ds = load_dataset('imvladikon/nemo_corpus', "flat_token") for sample in ds["train"]: print(sample) # the nested corpus ds = load_dataset('imvladikon/nemo_corpus', "nested_morph") ``` Getting classes and encoding/decoding can be done through these functions (a short decoding sketch appears at the end of this card): ```python idx2label = ds["train"].features["ner_tags"].feature.int2str label2idx = ds["train"].features["ner_tags"].feature.str2int ``` or just use the raw_tags field. ## Fields available fields (flat): * "id" * "sentence" * "tokens" * "raw_tags" * "ner_tags" Example of a single record for `flat`: ```json {'id': '0', 'tokens': ['"', 'תהיה', 'נקמה', 'ו', 'בגדול', '.'], 'sentence': '" תהיה נקמה ו בגדול .', 'raw_tags': ['O', 'O', 'O', 'O', 'O', 'O'], 'ner_tags': [24, 24, 24, 24, 24, 24]} ``` Example of a single record for `nested`: ```json {'id': '0', 'tokens': ['"', 'תהיה', 'נקמה', 'ו', 'בגדול', '.'], 'ner_tags': [24, 24, 24, 24, 24, 24], 'ner_tags_2': [24, 24, 24, 24, 24, 24], 'ner_tags_3': [24, 24, 24, 24, 24, 24], 'ner_tags_4': [24, 24, 24, 24, 24, 24]} ``` ## Dataset Description This is the README.md of the [original repository](https://github.com/OnlpLab/NEMO-Corpus). Named Entity (NER) annotations of the Hebrew Treebank (Haaretz newspaper) corpus, including: morpheme and token level NER labels, nested mentions, and more. We publish the NEMO corpus in the TACL paper [*"Neural Modeling for Named Entities and Morphology (NEMO<sup>2</sup>)"*](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00404/107206/Neural-Modeling-for-Named-Entities-and-Morphology) [1], where we use it in extensive experiments and analyses, showing the importance of morphological boundaries for neural modeling of NER in morphologically rich languages. Code for these models and experiments can be found in the [NEMO code repo](https://github.com/OnlpLab/NEMO). ## Main features: 1. Morpheme, token-single and token-multi sequence labels. Morpheme labels provide exact boundaries, token-multi provides partial sub-word morphological information but no exact boundaries, and token-single provides only token-level information. 1. All annotations are in `BIOSE` format (`B`=Begin, `I`=Inside, `O`=Outside, `S`=Singleton, `E`=End). 1. Widely-used OntoNotes entity category set: `GPE` (geo-political entity), `PER` (person), `LOC` (location), `ORG` (organization), `FAC` (facility), `EVE` (event), `WOA` (work-of-art), `ANG` (language), `DUC` (product). 1. NEMO includes NER annotations for the two major versions of the Hebrew Treebank, UD (Universal Dependency) and SPMRL. These can be aligned to the other morphosyntactic information layers of the treebank using [bclm](https://github.com/OnlpLab/bclm). 1. We provide nested mentions. Only the first, widest, layer is used in the NEMO<sup>2</sup> paper. We invite you to take on this challenge! 1. Guidelines used for annotation are provided [here](./guidelines/). 1. The corpus was annotated by two native Hebrew speakers with an academic education, and curated by the project manager. 
We provide the original annotations made by the annotators as well to promote work on [learning with disagreements](https://sites.google.com/view/semeval2021-task12/home). 1. Annotation was performed using [WebAnno](https://webanno.github.io/webanno/) (version 3.4.5). ## Legend for Files and Folder Structure 1. The two main [data](./data/) folders are [ud](./data/ud/) and [spmrl](./data/spmrl/), corresponding to the relevant Hebrew Treebank corpus version. 1. Both contain a `gold` folder ([spmrl/gold](./data/spmrl/gold/), [ud/gold](./data/ud/gold/)) of gold curated annotations. 1. Each `gold` folder contains files of the three input-output variants (morph, token-multi, token-single), for each of the treebank splits (train, dev, test). 1. Each `gold` folder also contains a `nested` subfolder ([spmrl/nested](./data/spmrl/gold/nested/), [ud/nested](./data/ud/gold/nested/)), which contains all layers of nested mentions (the first layer is the layer used in the non-nested files, and in the NEMO<sup>2</sup> paper [1]). 1. The `ud` folder also contains an [ab_annotators](./data/ud/ab_annotators/) folder. This folder contains the original annotations made by each annotator (named `a`, `b`), including first-layer and nested annotations. 1. **UPDATE 2021-09-06** The `ud` folder now contains a [pilot_annotations](./data/ud/pilot_annotations/) folder. This folder contains the original annotations made by each annotator in our two-phase pilot (phase I - sentences 1-200 of dev; phase II - sentences 201-400 of dev). ## Basic Corpus Statistics | | train | dev | test | |------------------------------| --:| --:| --:| | Sentences | 4,937 | 500 | 706 | | Tokens | 93,504 | 8,531 | 12,619 | | Morphemes | 127,031 | 11,301 | 16,828 | | All mentions | 6,282 | 499 | 932 | | Type: Person (PER) | 2,128 | 193 | 267 | | Type: Organization (ORG) | 2,043 | 119 | 408 | | Type: Geo-Political (GPE) | 1,377 | 121 | 195 | | Type: Location (LOC) | 331 | 28 | 41 | | Type: Facility (FAC) | 163 | 12 | 11 | | Type: Work-of-Art (WOA) | 114 | 9 | 6 | | Type: Event (EVE) | 57 | 12 | 0 | | Type: Product (DUC) | 36 | 2 | 3 | | Type: Language (ANG) | 33 | 3 | 1 | ## Aligned Treebank Versions The NEMO corpus matches the treebank version of [bclm v.1.0.0](https://github.com/OnlpLab/bclm/releases/tag/v1.0.0-alpha). This version is based on the [HTB UD v2.2](https://github.com/UniversalDependencies/UD_Hebrew-HTB/releases/tag/r2.2) and the [latest SPMRL HTB version](https://github.com/OnlpLab/HebrewResources/tree/102674bb030f5836e1ab827feb63954ad7a6f8fe/HebrewTreebank/hebtb). The changes include (but might not be limited to) the following: 1. Flagged and dropped duplicate and leaking sentences (between train and test). In addition to the sentences already removed in the bclm v1.0.0 HTB version, the following duplicate sentences were dropped as well (SPMRL sentence IDs): 5438, 5444, 5445, 5446, 5448, 5449, 5450, 5451, 5453, 5459 (in the bclm dataframes, these are marked in the `duplicate_sent_id` column). 
To read the treebank (UD/SPMRL) in a way that matches the NEMO corpus, you can use the following: ```python import bclm dropped = [5438, 5444, 5445, 5446, 5448, 5449, 5450, 5451, 5453, 5459] spdf = bclm.read_dataframe('spmrl') # load SPMRL treebank dataframe global_dropped = [spdf[spdf.sent_id==d].global_sent_id.iat[0] for d in dropped] uddf = bclm.read_dataframe('ud') # load UD treebank dataframe uddf = uddf[(~uddf.global_sent_id.isin(global_dropped))] # remove extra duplicates spdf = spdf[(~spdf.sent_id.isin(dropped))] # remove extra duplicates # The resulting dataframes contain gold morph NER labels in the `biose_layer0`, `biose_layer1`... columns. ``` 2. The UD treebank contains many more duplicates. In this version: all sentences exist in both UD and SPMRL versions, and all sentences and tokens are aligned between UD and SPMRL. 2. Fixed numbers that were originally reversed. 2. Fixed mismatches between tokens and morphemes. 2. Added Binyan feature. 2. No individual morphemes or tokens were added or removed, only complete sentences. ## Evaluation An evaluation script is provided in the [NEMO code repo](https://github.com/OnlpLab/NEMO#evaluation) along with evaluation instructions. ## Citations ##### [1] If you use the NEMO corpus in your research, please cite the NEMO<sup>2</sup> paper: ```bibtex @article{10.1162/tacl_a_00404, author = {Bareket, Dan and Tsarfaty, Reut}, title = "{Neural Modeling for Named Entities and Morphology (NEMO2)}", journal = {Transactions of the Association for Computational Linguistics}, volume = {9}, pages = {909-928}, year = {2021}, month = {09}, abstract = "{Named Entity Recognition (NER) is a fundamental NLP task, commonly formulated as classification over a sequence of tokens. Morphologically rich languages (MRLs) pose a challenge to this basic formulation, as the boundaries of named entities do not necessarily coincide with token boundaries, rather, they respect morphological boundaries. To address NER in MRLs we then need to answer two fundamental questions, namely, what are the basic units to be labeled, and how can these units be detected and classified in realistic settings (i.e., where no gold morphology is available). We empirically investigate these questions on a novel NER benchmark, with parallel token- level and morpheme-level NER annotations, which we develop for Modern Hebrew, a morphologically rich-and-ambiguous language. 
Our results show that explicitly modeling morphological boundaries leads to improved NER performance, and that a novel hybrid architecture, in which NER precedes and prunes morphological decomposition, greatly outperforms the standard pipeline, where morphological decomposition strictly precedes NER, setting a new performance bar for both Hebrew NER and Hebrew morphological decomposition tasks.}", issn = {2307-387X}, doi = {10.1162/tacl_a_00404}, url = {https://doi.org/10.1162/tacl\_a\_00404}, eprint = {https://direct.mit.edu/tacl/article-pdf/doi/10.1162/tacl\_a\_00404/1962472/tacl\_a\_00404.pdf}, } ``` ##### [2] Please cite the Hebrew Treebank as well, described in the following paper: ```bibtex @article{sima2001building, title={Building a tree-bank of modern Hebrew text}, author={Sima’an, Khalil and Itai, Alon and Winter, Yoad and Altman, Alon and Nativ, Noa}, journal={Traitement Automatique des Langues}, volume={42}, number={2}, pages={247--380}, year={2001}, publisher={Citeseer} } ``` ##### [3] The UD version of the Hebrew Treebank is described in: ```bibtex @inproceedings{sade-etal-2018-hebrew, title = "The {H}ebrew {U}niversal {D}ependency Treebank: Past Present and Future", author = "Sade, Shoval and Seker, Amit and Tsarfaty, Reut", booktitle = "Proceedings of the Second Workshop on Universal Dependencies ({UDW} 2018)", month = nov, year = "2018", address = "Brussels, Belgium", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/W18-6016", doi = "10.18653/v1/W18-6016", pages = "133--143", abstract = "The Hebrew treebank (HTB), consisting of 6221 morpho-syntactically annotated newspaper sentences, has been the only resource for training and validating statistical parsers and taggers for Hebrew, for almost two decades now. During these decades, the HTB has gone through a trajectory of automatic and semi-automatic conversions, until arriving at its UDv2 form. In this work we manually validate the UDv2 version of the HTB, and, according to our findings, we apply scheme changes that bring the UD HTB to the same theoretical grounds as the rest of UD. Our experimental parsing results with UDv2New confirm that improving the coherence and internal consistency of the UD HTB indeed leads to improved parsing performance. At the same time, our analysis demonstrates that there is more to be done at the point of intersection of UD with other linguistic processing layers, in particular, at the points where UD interfaces external morphological and lexical resources.", } ```
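As a quick usage note for the label-decoding helpers shown in the Config and Usage section above, here is a minimal sketch (it assumes the `flat_token` config loads exactly as shown there):

```python
from datasets import load_dataset

ds = load_dataset('imvladikon/nemo_corpus', "flat_token")
idx2label = ds["train"].features["ner_tags"].feature.int2str

sample = ds["train"][0]
# Pair each token with its decoded BIOSE tag; the result should agree with `raw_tags`.
print(list(zip(sample["tokens"], [idx2label(i) for i in sample["ner_tags"]])))
```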
imvladikon/nemo_corpus
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|other-reuters-corpus", "language:he", "region:us" ]
2022-06-28T15:51:45+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["he"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|other-reuters-corpus"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "train-eval-index": [{"config": "nemo_corpus", "task": "token-classification", "task_id": "entity_extraction", "splits": {"train_split": "train", "eval_split": "validation", "test_split": "test"}, "col_mapping": {"tokens": "tokens", "ner_tags": "tags"}, "metrics": [{"type": "seqeval", "name": "seqeval"}]}]}
2023-11-24T10:36:57+00:00
[]
[ "he" ]
TAGS #task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other-reuters-corpus #language-Hebrew #region-us
NEMO-Corpus - The Hebrew Named Entities and Morphology Corpus ============================================================= Config and Usage ---------------- Config: * flat\_token - flatten tags * nested\_token - nested tags * flat\_morph - flatten tags with morphologically presegmentized tokens * nested\_morph - nested tags with morphologically presegmentized tokens Note: It seems that a couple of samples for the flat\_token and nested\_token are mistakenly presegmented, and as a result, these samples have white space in the token. Getting classes and encoding/decoding can be done through these functions: or just use the raw\_tags field. Fields ------ available fields (flat): * "id" * "sentence" * "tokens" * "raw\_tags" * "ner\_tags" Example of a single record for 'flat': Example of a single record for 'nested': Dataset Description ------------------- This is the URL of the original repository. Named Entity (NER) annotations of the Hebrew Treebank (Haaretz newspaper) corpus, including: morpheme and token level NER labels, nested mentions, and more. We publish the NEMO corpus in the TACL paper *"Neural Modeling for Named Entities and Morphology (NEMO2)"* [1], where we use it in extensive experiments and analyses, showing the importance of morphological boundaries for neural modeling of NER in morphologically rich languages. Code for these models and experiments can be found in the NEMO code repo. Main features: -------------- 1. Morpheme, token-single and token-multi sequence labels. Morpheme labels provide exact boundaries, token-multi provides partial sub-word morphological information but no exact boundaries, and token-single provides only token-level information. 2. All annotations are in 'BIOSE' format ('B'=Begin, 'I'=Inside, 'O'=Outside, 'S'=Singleton, 'E'=End). 3. Widely-used OntoNotes entity category set: 'GPE' (geo-political entity), 'PER' (person), 'LOC' (location), 'ORG' (organization), 'FAC' (facility), 'EVE' (event), 'WOA' (work-of-art), 'ANG' (language), 'DUC' (product). 4. NEMO includes NER annotations for the two major versions of the Hebrew Treebank, UD (Universal Dependency) and SPMRL. These can be aligned to the other morphosyntactic information layers of the treebank using bclm. 5. We provide nested mentions. Only the first, widest, layer is used in the NEMO2 paper. We invite you to take on this challenge! 6. Guidelines used for annotation are provided here. 7. The corpus was annotated by two native Hebrew speakers with an academic education, and curated by the project manager. We provide the original annotations made by the annotators as well to promote work on learning with disagreements. 8. Annotation was performed using WebAnno (version 3.4.5) Legend for Files and Folder Structure ------------------------------------- 1. The two main data folders are ud and spmrl, corresponding to the relevant Hebrew Treebank corpus version. 2. Both contain a 'gold' folder (spmrl/gold, ud/gold) of gold curated annotations. 3. Each 'gold' folder contains files of the three input-output variants (morph, token-multi, token-single), for each of the treebank splits (train, dev, test). 4. Each 'gold' folder also contains a 'nested' subfolder (spmrl/nested, ud/nested), which contains all layers of nested mentions (the first layer is the layer used in the non-nested files, and in the NEMO2 paper [1]). 5. The 'ud' folder also contains an ab\_annotators folder. This folder contains the original annotations made by each annotator (named 'a', 'b'), including first-layer and nested annotations. 6. 
**UPDATE 2021-09-06** The 'ud' folder now contains a pilot\_annotations folder. This folder contains the original annotations made by each annotator in our two-phase pilot (phase I - sentences 1-200 of dev; phase II - sentences 201-400 of dev). Basic Corpus Statistics ----------------------- Aligned Treebank Versions -------------------------- The NEMO corpus matches the treebank version of bclm v.1.0.0. This version is based on the HTB UD v2.2 and the latest SPMRL HTB version. The changes include (but might not be limited to) the following: 1. Flagged and dropped duplicate and leaking sentences (between train and test). In addition to the sentences already removed in the bclm v1.0.0 HTB version, the following duplicate sentences were dropped as well (SPMRL sentence IDs): 5438, 5444, 5445, 5446, 5448, 5449, 5450, 5451, 5453, 5459 (in the bclm dataframes, these are marked in the 'duplicate\_sent\_id' column). To read the treebank (UD/SPMRL) in a way that matches the NEMO corpus, you can use the following: 2. The UD treebank contains many more duplicates. In this version: all sentences exist in both UD and SPMRL versions, and all sentences and tokens are aligned between UD and SPMRL. 3. Fixed numbers that were originally reversed. 4. Fixed mismatches between tokens and morphemes. 5. Added Binyan feature. 6. No individual morphemes or tokens were added or removed, only complete sentences. Evaluation ---------- An evaluation script is provided in the NEMO code repo along with evaluation instructions. Citations --------- ##### [1] If you use the NEMO corpus in your research, please cite the NEMO2 paper: ##### [2] Please cite the Hebrew Treebank as well, described in the following paper: ##### [3] The UD version of the Hebrew Treebank is described in:
[ "##### [1]\n\n\nIf you use the NEMO corpus in your research, please cite the NEMO2 paper:", "##### [2]\n\n\nPlease cite the Hebrew Treebank as well, described the following paper:", "##### [3]\n\n\nThe UD version of the Hebrew Treebank is described in:" ]
[ "TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other-reuters-corpus #language-Hebrew #region-us \n", "##### [1]\n\n\nIf you use the NEMO corpus in your research, please cite the NEMO2 paper:", "##### [2]\n\n\nPlease cite the Hebrew Treebank as well, described the following paper:", "##### [3]\n\n\nThe UD version of the Hebrew Treebank is described in:" ]
[ 98, 22, 18, 16 ]
[ "passage: TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other-reuters-corpus #language-Hebrew #region-us \n##### [1]\n\n\nIf you use the NEMO corpus in your research, please cite the NEMO2 paper:##### [2]\n\n\nPlease cite the Hebrew Treebank as well, described the following paper:##### [3]\n\n\nThe UD version of the Hebrew Treebank is described in:" ]
8b4000d7a1e7779bd0c7291f785a2160f95c03fb
# Dataset Card for ontonotes_english ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [CoNLL-2012 Shared Task](https://conll.cemantix.org/2012/data.html), [Author's page](https://cemantix.org/data/ontonotes.html) - **Repository:** - **Paper:** [Towards Robust Linguistic Analysis using OntoNotes](https://aclanthology.org/W13-3516/) - **Leaderboard:** [Papers With Code](https://paperswithcode.com/sota/named-entity-recognition-ner-on-ontonotes-v5) - **Point of Contact:** ### Dataset Summary This is a preprocessed version of what I assume is OntoNotes v5.0. Instead of storing sentences in files, the files are unpacked so that each row is now a single sentence. Also, fields were renamed in order to match [conll2003](https://huggingface.co/datasets/conll2003). The data comes from a private repository, which in turn got it from another public repository whose location is unknown :) Since data from all repositories had no license (the creator of the private repository told me so), there should be no licensing issues. But bear in mind that I give no guarantee this is the real OntoNotes; it may differ as a result. ### Supported Tasks and Leaderboards - [Named Entity Recognition on Ontonotes v5 (English)](https://paperswithcode.com/sota/named-entity-recognition-ner-on-ontonotes-v5) - [Coreference Resolution on OntoNotes](https://paperswithcode.com/sota/coreference-resolution-on-ontonotes) - [Semantic Role Labeling on OntoNotes](https://paperswithcode.com/sota/semantic-role-labeling-on-ontonotes) ### Languages English ## Dataset Structure ### Data Instances ``` { 'tokens': ['Well', ',', 'the', 'Hundred', 'Regiments', 'Offensive', 'was', 'divided', 'into', 'three', 'phases', '.'], 'ner_tags': [0, 0, 29, 30, 30, 30, 0, 0, 0, 27, 0, 0] } ``` ### Data Fields - **`tokens`** (*`List[str]`*) : **`words`** in the original dataset - **`ner_tags`** (*`List[ClassLabel]`*) : **`named_entities`** in the original dataset. The BIO tags for named entities in the sentence. 
- tag set : `datasets.ClassLabel(num_classes=37, names=["O", "B-PERSON", "I-PERSON", "B-NORP", "I-NORP", "B-FAC", "I-FAC", "B-ORG", "I-ORG", "B-GPE", "I-GPE", "B-LOC", "I-LOC", "B-PRODUCT", "I-PRODUCT", "B-DATE", "I-DATE", "B-TIME", "I-TIME", "B-PERCENT", "I-PERCENT", "B-MONEY", "I-MONEY", "B-QUANTITY", "I-QUANTITY", "B-ORDINAL", "I-ORDINAL", "B-CARDINAL", "I-CARDINAL", "B-EVENT", "I-EVENT", "B-WORK_OF_ART", "I-WORK_OF_ART", "B-LAW", "I-LAW", "B-LANGUAGE", "I-LANGUAGE",])` ### Data Splits _train_, _validation_, and _test_ ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information No license ### Citation Information ``` @inproceedings{pradhan-etal-2013-towards, title = "Towards Robust Linguistic Analysis using {O}nto{N}otes", author = {Pradhan, Sameer and Moschitti, Alessandro and Xue, Nianwen and Ng, Hwee Tou and Bj{\"o}rkelund, Anders and Uryupina, Olga and Zhang, Yuchen and Zhong, Zhi}, booktitle = "Proceedings of the Seventeenth Conference on Computational Natural Language Learning", month = aug, year = "2013", address = "Sofia, Bulgaria", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/W13-3516", pages = "143--152", } ``` ### Contributions Thanks to the author of the private repository, who uploaded this dataset.
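Finally, a minimal loading-and-decoding sketch (this assumes the repository loads with the `datasets` library as published; the field names and label set are the ones documented above):

```python
from datasets import load_dataset

ds = load_dataset("SpeedOfMagic/ontonotes_english")
label_names = ds["train"].features["ner_tags"].feature.names  # the 37 BIO tags listed above

ex = ds["train"][0]
# Pair each token with its human-readable NER tag.
print([(tok, label_names[tag]) for tok, tag in zip(ex["tokens"], ex["ner_tags"])])
```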
SpeedOfMagic/ontonotes_english
[ "task_categories:token-classification", "task_ids:named-entity-recognition", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|other", "language:en", "license:unknown", "region:us" ]
2022-06-28T16:34:30+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|other"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "ontonotes_english"}
2022-07-01T15:06:06+00:00
[]
[ "en" ]
TAGS #task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other #language-English #license-unknown #region-us
# Dataset Card for ontonotes_english ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: CoNLL-2012 Shared Task, Author's page - Repository: - Paper: Towards Robust Linguistic Analysis using OntoNotes - Leaderboard: Papers With Code - Point of Contact: ### Dataset Summary This is a preprocessed version of what I assume is OntoNotes v5.0. Instead of storing sentences in files, the files are unpacked so that each row is now a single sentence. Also, fields were renamed in order to match conll2003. The data comes from a private repository, which in turn got it from another public repository whose location is unknown :) Since data from all repositories had no license (the creator of the private repository told me so), there should be no licensing issues. But bear in mind that I give no guarantee this is the real OntoNotes; it may differ as a result. ### Supported Tasks and Leaderboards - Named Entity Recognition on Ontonotes v5 (English) - Coreference Resolution on OntoNotes - Semantic Role Labeling on OntoNotes ### Languages English ## Dataset Structure ### Data Instances ### Data Fields - 'tokens' (*'List[str]'*) : 'words' in the original dataset - 'ner_tags' (*'List[ClassLabel]'*) : 'named_entities' in the original dataset. The BIO tags for named entities in the sentence. - tag set : 'datasets.ClassLabel(num_classes=37, names=["O", "B-PERSON", "I-PERSON", "B-NORP", "I-NORP", "B-FAC", "I-FAC", "B-ORG", "I-ORG", "B-GPE", "I-GPE", "B-LOC", "I-LOC", "B-PRODUCT", "I-PRODUCT", "B-DATE", "I-DATE", "B-TIME", "I-TIME", "B-PERCENT", "I-PERCENT", "B-MONEY", "I-MONEY", "B-QUANTITY", "I-QUANTITY", "B-ORDINAL", "I-ORDINAL", "B-CARDINAL", "I-CARDINAL", "B-EVENT", "I-EVENT", "B-WORK_OF_ART", "I-WORK_OF_ART", "B-LAW", "I-LAW", "B-LANGUAGE", "I-LANGUAGE",])' ### Data Splits _train_, _validation_, and _test_ ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information No license ### Contributions Thanks to the author of the private repository, who uploaded this dataset.
[ "# Dataset Card for ontonotes_english", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: CoNLL-2012 Shared Task, Author's page\n- Repository:\n- Paper: Towards Robust Linguistic Analysis using OntoNotes\n- Leaderboard: Papers With Code\n- Point of Contact:", "### Dataset Summary\n\nThis is preprocessed version of what I assume is OntoNotes v5.0.\n\nInstead of having sentences stored in files, files are unpacked and sentences are the rows now. Also, fields were renamed in order to match conll2003.\n\nThe source of data is from private repository, which in turn got data from another public repository, location of which is unknown :)\n\nSince data from all repositories had no license (creator of the private repository told me so), there should be no licensing issues. But bear in mind, I don't give any guarantees that this is real OntoNotes, and may differ as a result.", "### Supported Tasks and Leaderboards\n\n- Named Entity Recognition on Ontonotes v5 (English)\n- Coreference Resolution on OntoNotes\n- Semantic Role Labeling on OntoNotes", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n- 'tokens' (*'List[str]'*) : 'words' in original dataset\n- 'ner_tags' (*'List[ClassLabel]'*) : 'named_entities' in original dataset. The BIO tags for named entities in the sentence. \n - tag set : 'datasets.ClassLabel(num_classes=37, names=[\"O\", \"B-PERSON\", \"I-PERSON\", \"B-NORP\", \"I-NORP\", \"B-FAC\", \"I-FAC\", \"B-ORG\", \"I-ORG\", \"B-GPE\", \"I-GPE\", \"B-LOC\", \"I-LOC\", \"B-PRODUCT\", \"I-PRODUCT\", \"B-DATE\", \"I-DATE\", \"B-TIME\", \"I-TIME\", \"B-PERCENT\", \"I-PERCENT\", \"B-MONEY\", \"I-MONEY\", \"B-QUANTITY\", \"I-QUANTITY\", \"B-ORDINAL\", \"I-ORDINAL\", \"B-CARDINAL\", \"I-CARDINAL\", \"B-EVENT\", \"I-EVENT\", \"B-WORK_OF_ART\", \"I-WORK_OF_ART\", \"B-LAW\", \"I-LAW\", \"B-LANGUAGE\", \"I-LANGUAGE\",])'", "### Data Splits\n\n_train_, _validation_, and _test_", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nNo license", "### Contributions\n\nThanks to the author of private repository, that uploaded this dataset." ]
[ "TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other #language-English #license-unknown #region-us \n", "# Dataset Card for ontonotes_english", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: CoNLL-2012 Shared Task, Author's page\n- Repository:\n- Paper: Towards Robust Linguistic Analysis using OntoNotes\n- Leaderboard: Papers With Code\n- Point of Contact:", "### Dataset Summary\n\nThis is preprocessed version of what I assume is OntoNotes v5.0.\n\nInstead of having sentences stored in files, files are unpacked and sentences are the rows now. Also, fields were renamed in order to match conll2003.\n\nThe source of data is from private repository, which in turn got data from another public repository, location of which is unknown :)\n\nSince data from all repositories had no license (creator of the private repository told me so), there should be no licensing issues. But bear in mind, I don't give any guarantees that this is real OntoNotes, and may differ as a result.", "### Supported Tasks and Leaderboards\n\n- Named Entity Recognition on Ontonotes v5 (English)\n- Coreference Resolution on OntoNotes\n- Semantic Role Labeling on OntoNotes", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n- 'tokens' (*'List[str]'*) : 'words' in original dataset\n- 'ner_tags' (*'List[ClassLabel]'*) : 'named_entities' in original dataset. The BIO tags for named entities in the sentence. \n - tag set : 'datasets.ClassLabel(num_classes=37, names=[\"O\", \"B-PERSON\", \"I-PERSON\", \"B-NORP\", \"I-NORP\", \"B-FAC\", \"I-FAC\", \"B-ORG\", \"I-ORG\", \"B-GPE\", \"I-GPE\", \"B-LOC\", \"I-LOC\", \"B-PRODUCT\", \"I-PRODUCT\", \"B-DATE\", \"I-DATE\", \"B-TIME\", \"I-TIME\", \"B-PERCENT\", \"I-PERCENT\", \"B-MONEY\", \"I-MONEY\", \"B-QUANTITY\", \"I-QUANTITY\", \"B-ORDINAL\", \"I-ORDINAL\", \"B-CARDINAL\", \"I-CARDINAL\", \"B-EVENT\", \"I-EVENT\", \"B-WORK_OF_ART\", \"I-WORK_OF_ART\", \"B-LAW\", \"I-LAW\", \"B-LANGUAGE\", \"I-LANGUAGE\",])'", "### Data Splits\n\n_train_, _validation_, and _test_", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nNo license", "### Contributions\n\nThanks to the author of private repository, that uploaded this dataset." ]
[ 94, 11, 125, 56, 156, 49, 5, 6, 6, 325, 20, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 8, 22 ]
[ "passage: TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other #language-English #license-unknown #region-us \n# Dataset Card for ontonotes_english## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: CoNLL-2012 Shared Task, Author's page\n- Repository:\n- Paper: Towards Robust Linguistic Analysis using OntoNotes\n- Leaderboard: Papers With Code\n- Point of Contact:### Dataset Summary\n\nThis is preprocessed version of what I assume is OntoNotes v5.0.\n\nInstead of having sentences stored in files, files are unpacked and sentences are the rows now. Also, fields were renamed in order to match conll2003.\n\nThe source of data is from private repository, which in turn got data from another public repository, location of which is unknown :)\n\nSince data from all repositories had no license (creator of the private repository told me so), there should be no licensing issues. But bear in mind, I don't give any guarantees that this is real OntoNotes, and may differ as a result.### Supported Tasks and Leaderboards\n\n- Named Entity Recognition on Ontonotes v5 (English)\n- Coreference Resolution on OntoNotes\n- Semantic Role Labeling on OntoNotes### Languages\n\nEnglish## Dataset Structure" ]
29322bc987e6481fca61f75d3414f6b977807b04
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/bigbird-pegasus-large-pubmed * Dataset: scientific_papers To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Blaise_g](https://huggingface.co/Blaise_g) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-25118781-8365116
[ "autotrain", "evaluation", "region:us" ]
2022-06-28T16:53:14+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["scientific_papers"], "eval_info": {"task": "summarization", "model": "google/bigbird-pegasus-large-pubmed", "metrics": ["bertscore", "meteor"], "dataset_name": "scientific_papers", "dataset_config": "pubmed", "dataset_split": "test", "col_mapping": {"text": "article", "target": "abstract"}}}
2022-06-28T20:19:34+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: google/bigbird-pegasus-large-pubmed * Dataset: scientific_papers To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @Blaise_g for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/bigbird-pegasus-large-pubmed\n* Dataset: scientific_papers\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Blaise_g for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/bigbird-pegasus-large-pubmed\n* Dataset: scientific_papers\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Blaise_g for evaluating this model." ]
[ 13, 82, 18 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/bigbird-pegasus-large-pubmed\n* Dataset: scientific_papers\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @Blaise_g for evaluating this model." ]
2b38a740c66da19f7030b20580e05a709969d3c5
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/bigbird-pegasus-large-arxiv * Dataset: scientific_papers To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Blaise_g](https://huggingface.co/Blaise_g) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-25118781-8365117
[ "autotrain", "evaluation", "region:us" ]
2022-06-28T16:53:17+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["scientific_papers"], "eval_info": {"task": "summarization", "model": "google/bigbird-pegasus-large-arxiv", "metrics": ["bertscore", "meteor"], "dataset_name": "scientific_papers", "dataset_config": "pubmed", "dataset_split": "test", "col_mapping": {"text": "article", "target": "abstract"}}}
2022-06-28T20:17:57+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: google/bigbird-pegasus-large-arxiv * Dataset: scientific_papers To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @Blaise_g for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/bigbird-pegasus-large-arxiv\n* Dataset: scientific_papers\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Blaise_g for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/bigbird-pegasus-large-arxiv\n* Dataset: scientific_papers\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Blaise_g for evaluating this model." ]
[ 13, 83, 18 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/bigbird-pegasus-large-arxiv\n* Dataset: scientific_papers\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @Blaise_g for evaluating this model." ]
991a3bbd972f620321ef1ba66609fa052aab761f
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/bigbird-pegasus-large-pubmed * Dataset: scientific_papers To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Blaise_g](https://huggingface.co/Blaise_g) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-c967fc98-8385124
[ "autotrain", "evaluation", "region:us" ]
2022-06-28T16:56:32+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["scientific_papers"], "eval_info": {"task": "summarization", "model": "google/bigbird-pegasus-large-pubmed", "metrics": ["bertscore", "meteor"], "dataset_name": "scientific_papers", "dataset_config": "pubmed", "dataset_split": "test", "col_mapping": {"text": "article", "target": "abstract"}}}
2022-06-28T20:22:31+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: google/bigbird-pegasus-large-pubmed * Dataset: scientific_papers To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @Blaise_g for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/bigbird-pegasus-large-pubmed\n* Dataset: scientific_papers\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Blaise_g for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/bigbird-pegasus-large-pubmed\n* Dataset: scientific_papers\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Blaise_g for evaluating this model." ]
[ 13, 82, 18 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/bigbird-pegasus-large-pubmed\n* Dataset: scientific_papers\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @Blaise_g for evaluating this model." ]
18a2dffc39dd01bd15cd950a95981915e0772efe
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/bigbird-pegasus-large-pubmed * Dataset: scientific_papers To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-c76b0e96-8395128
[ "autotrain", "evaluation", "region:us" ]
2022-06-28T17:15:22+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["scientific_papers"], "eval_info": {"task": "summarization", "model": "google/bigbird-pegasus-large-pubmed", "metrics": ["bertscore", "meteor"], "dataset_name": "scientific_papers", "dataset_config": "pubmed", "dataset_split": "test", "col_mapping": {"text": "article", "target": "abstract"}}}
2022-06-28T20:41:56+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: google/bigbird-pegasus-large-pubmed * Dataset: scientific_papers To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @Blaise-g for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/bigbird-pegasus-large-pubmed\n* Dataset: scientific_papers\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Blaise-g for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/bigbird-pegasus-large-pubmed\n* Dataset: scientific_papers\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Blaise-g for evaluating this model." ]
[ 13, 82, 18 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/bigbird-pegasus-large-pubmed\n* Dataset: scientific_papers\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @Blaise-g for evaluating this model." ]
23906970078ab096c46b7cddcb6ba20ac530675a
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Summarization
* Model: google/bigbird-pegasus-large-pubmed
* Dataset: scientific_papers

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@Blaise_g](https://huggingface.co/Blaise_g) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-36bd0b51-8375120
[ "autotrain", "evaluation", "region:us" ]
2022-06-28T17:38:03+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["scientific_papers"], "eval_info": {"task": "summarization", "model": "google/bigbird-pegasus-large-pubmed", "metrics": ["bertscore", "meteor"], "dataset_name": "scientific_papers", "dataset_config": "pubmed", "dataset_split": "test", "col_mapping": {"text": "article", "target": "abstract"}}}
2022-06-28T21:01:55+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: google/bigbird-pegasus-large-pubmed * Dataset: scientific_papers To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @Blaise_g for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/bigbird-pegasus-large-pubmed\n* Dataset: scientific_papers\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Blaise_g for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/bigbird-pegasus-large-pubmed\n* Dataset: scientific_papers\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Blaise_g for evaluating this model." ]
[ 13, 82, 18 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/bigbird-pegasus-large-pubmed\n* Dataset: scientific_papers\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @Blaise_g for evaluating this model." ]
19e1cd1bb14a0bdc47a90f0b7fc82577da378131
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Summarization
* Model: google/bigbird-pegasus-large-arxiv
* Dataset: scientific_papers

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@Blaise_g](https://huggingface.co/Blaise_g) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-36bd0b51-8375121
[ "autotrain", "evaluation", "region:us" ]
2022-06-28T17:38:42+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["scientific_papers"], "eval_info": {"task": "summarization", "model": "google/bigbird-pegasus-large-arxiv", "metrics": ["bertscore", "meteor"], "dataset_name": "scientific_papers", "dataset_config": "pubmed", "dataset_split": "test", "col_mapping": {"text": "article", "target": "abstract"}}}
2022-06-28T21:06:32+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: google/bigbird-pegasus-large-arxiv * Dataset: scientific_papers To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @Blaise_g for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/bigbird-pegasus-large-arxiv\n* Dataset: scientific_papers\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Blaise_g for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/bigbird-pegasus-large-arxiv\n* Dataset: scientific_papers\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Blaise_g for evaluating this model." ]
[ 13, 83, 18 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/bigbird-pegasus-large-arxiv\n* Dataset: scientific_papers\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @Blaise_g for evaluating this model." ]
a5da1dfaf0e9e1e459e57c058a07d6d74389513f
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Summarization
* Model: google/bigbird-pegasus-large-arxiv
* Dataset: scientific_papers

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-c76b0e96-8395129
[ "autotrain", "evaluation", "region:us" ]
2022-06-28T18:00:28+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["scientific_papers"], "eval_info": {"task": "summarization", "model": "google/bigbird-pegasus-large-arxiv", "metrics": ["bertscore", "meteor"], "dataset_name": "scientific_papers", "dataset_config": "pubmed", "dataset_split": "test", "col_mapping": {"text": "article", "target": "abstract"}}}
2022-06-28T21:24:00+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: google/bigbird-pegasus-large-arxiv * Dataset: scientific_papers To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @Blaise-g for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/bigbird-pegasus-large-arxiv\n* Dataset: scientific_papers\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Blaise-g for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/bigbird-pegasus-large-arxiv\n* Dataset: scientific_papers\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Blaise-g for evaluating this model." ]
[ 13, 83, 18 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/bigbird-pegasus-large-arxiv\n* Dataset: scientific_papers\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @Blaise-g for evaluating this model." ]
08216d26c67dbb3181ee26e11441dd66da80d59b
A novel dataset for benchmarking the citation worthiness detection task in the American Legal Corpus. For more details about the dataset, please refer to the original paper.

### Data Fields

- **File Name**: the case file to which the sentence belongs.
- **Sentence Number**: The sentence number as present in the document.
- **Sentence**: The naturally occurring sentence in the text (after preprocessing/removing the citation span).
- **Label**: Integer value of ‘0’ or ‘1’. ‘0’ represents that the sentence is not citation worthy, whereas ‘1’ represents that the sentence is citation worthy.

### Data Splits

|Split| #datapoints |
|--|--|
| Train-Small | 800,000 |
| Validation-Small | 100,000 |
| Test-Small | 100,000 |
| Train-Medium | 8,000,000 |
| Validation-Medium | 1,000,000 |
| Test-Medium | 1,000,000 |
| Train-Large | 142,588,927 |
| Validation-Large | 17,934,940 |
| Test-Large | 17,935,336 |

### Small Dataset

```python
from datasets import load_dataset

# get the small dataset
dataset = load_dataset("Vidhaan/LegalCitationWorthiness", "small")
```

### Medium Dataset

```python
from datasets import load_dataset

# get the medium dataset
dataset = load_dataset("Vidhaan/LegalCitationWorthiness", "medium")
```

### Large Dataset

```python
from datasets import load_dataset

# get the large dataset
dataset = load_dataset("Vidhaan/LegalCitationWorthiness", "large")
```

## Citation Information

## Contributions

Thanks to [@PritishWadhwa](https://github.com/PritishWadhwa), [@gitongithub](https://github.com/gitongithub), [@khatrimann](https://github.com/khatrimann), [@reshma](https://github.com/Reshma-Sheik), [@dhumketu](https://github.com/dhumketu) for adding this dataset.
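As a next step after loading, it can help to see the label field in action. The following sketch (not from the original card) filters the citation-worthy sentences; the column names `Label` and `Sentence` are taken from the field list above and are assumptions about the released schema.

```python
# Hedged sketch: column names follow the card's field list and may differ
# in the actual release.
from datasets import load_dataset

dataset = load_dataset("Vidhaan/LegalCitationWorthiness", "small")

# Keep only sentences labeled as citation worthy (Label == 1).
worthy = dataset["train"].filter(lambda ex: ex["Label"] == 1)

print(f"{len(worthy)} of {len(dataset['train'])} training sentences are citation worthy")
print(worthy[0]["Sentence"])
```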
Vidhaan/LegalCitationWorthiness
[ "region:us" ]
2022-06-28T18:01:07+00:00
{}
2023-10-23T17:19:30+00:00
[]
[]
TAGS #region-us
A novel dataset for benchmarking citation worthiness detection task in the American Legal Corpus. For more details about the dataset please refer to the original paper. ### Data Fields * File Name: the case file to which the sentence belongs. * Sentence Number: The sentence number as present in the document. * Sentence: The naturally occurring sentence in the text (after preprocessing/removing citation span.) * Label: Integer value of ‘0’ or ‘1’. ‘0’ represents that the sentence is not citation worthy whereas ‘1’ represents that the sentence is citation worthy. ### Data Splits ### Small Dataset ### Medium Dataset ### Large Dataset Contributions ------------- Thanks to @PritishWadhwa, @gitongithub, @khatrimann, @reshma, @dhumketu for adding this dataset
[ "### Data Fields\n\n\n* File Name: the case file to which the sentence belongs.\n* Sentence Number: The sentence number as present in the document.\n* Sentence: The naturally occurring sentence in the text (after preprocessing/removing citation span.)\n* Label: Integer value of ‘0’ or ‘1’. ‘0’ represents that the sentence is not citation worthy whereas ‘1’ represents that the sentence is citation worthy.", "### Data Splits", "### Small Dataset", "### Medium Dataset", "### Large Dataset\n\n\nContributions\n-------------\n\n\nThanks to @PritishWadhwa, @gitongithub, @khatrimann, @reshma, @dhumketu for adding this dataset" ]
[ "TAGS\n#region-us \n", "### Data Fields\n\n\n* File Name: the case file to which the sentence belongs.\n* Sentence Number: The sentence number as present in the document.\n* Sentence: The naturally occurring sentence in the text (after preprocessing/removing citation span.)\n* Label: Integer value of ‘0’ or ‘1’. ‘0’ represents that the sentence is not citation worthy whereas ‘1’ represents that the sentence is citation worthy.", "### Data Splits", "### Small Dataset", "### Medium Dataset", "### Large Dataset\n\n\nContributions\n-------------\n\n\nThanks to @PritishWadhwa, @gitongithub, @khatrimann, @reshma, @dhumketu for adding this dataset" ]
[ 6, 105, 5, 5, 5, 44 ]
[ "passage: TAGS\n#region-us \n### Data Fields\n\n\n* File Name: the case file to which the sentence belongs.\n* Sentence Number: The sentence number as present in the document.\n* Sentence: The naturally occurring sentence in the text (after preprocessing/removing citation span.)\n* Label: Integer value of ‘0’ or ‘1’. ‘0’ represents that the sentence is not citation worthy whereas ‘1’ represents that the sentence is citation worthy.### Data Splits### Small Dataset### Medium Dataset### Large Dataset\n\n\nContributions\n-------------\n\n\nThanks to @PritishWadhwa, @gitongithub, @khatrimann, @reshma, @dhumketu for adding this dataset" ]
f6edac68ac34018903e004a6627bc6e7ae01a24f
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Summarization
* Model: flax-community/t5-base-cnn-dm
* Dataset: cnn_dailymail

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@gneubig](https://huggingface.co/gneubig) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-9d6be317-8445136
[ "autotrain", "evaluation", "region:us" ]
2022-06-28T19:16:46+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "flax-community/t5-base-cnn-dm", "metrics": ["bertscore", "comet"], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}}
2022-06-28T19:25:52+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: flax-community/t5-base-cnn-dm * Dataset: cnn_dailymail To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @gneubig for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: flax-community/t5-base-cnn-dm\n* Dataset: cnn_dailymail\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @gneubig for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: flax-community/t5-base-cnn-dm\n* Dataset: cnn_dailymail\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @gneubig for evaluating this model." ]
[ 13, 85, 16 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: flax-community/t5-base-cnn-dm\n* Dataset: cnn_dailymail\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @gneubig for evaluating this model." ]
11d8b38cbd90b0ae5bde304ae2d4ba5ace10af05
# Dataset Card for the Rosetta Code Dataset

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

> Rosetta Code is a programming chrestomathy site. The idea is to present solutions to the same task in as many different languages as possible, to demonstrate how languages are similar and different, and to aid a person with a grounding in one approach to a problem in learning another. Rosetta Code currently has 1,203 tasks, 389 draft tasks, and is aware of 883 languages, though we do not (and cannot) have solutions to every task in every language.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

```
['ALGOL 68', 'Arturo', 'AWK', 'F#', 'Factor', 'Go', 'J', 'jq', 'Julia', 'Lua', 'Mathematica/Wolfram Language', 'Perl', 'Phix', 'Picat', 'Python', 'Quackery', 'Raku', 'Ring', 'Sidef', 'Vlang', 'Wren', 'XPL0', '11l', '68000 Assembly', '8th', 'AArch64 Assembly', 'ABAP', 'ACL2', 'Action!', 'ActionScript', 'Ada', 'Aime', 'ALGOL W', 'Amazing Hopper', 'AntLang', 'Apex', 'APL', 'AppleScript', 'ARM Assembly', 'ATS', 'AutoHotkey', 'AutoIt', 'Avail', 'Babel', 'bash', 'BASIC', 'BASIC256', 'BQN', 'Bracmat', 'Burlesque', 'C', 'C#', 'C++', 'Ceylon', 'Clojure', 'COBOL', 'CoffeeScript', 'Common Lisp', 'Component Pascal', 'Crystal', 'D', 'Delphi', 'Dyalect', 'E', 'EasyLang', 'EchoLisp', 'ECL', 'Efene', 'EGL', 'Ela', 'Elena', 'Elixir', 'Elm', 'Emacs Lisp', 'Erlang', 'ERRE', 'Euphoria', 'Fantom', 'FBSL', 'Forth', 'Fortran', 'Free Pascal', 'FreeBASIC', 'Frink', 'FunL', 'Futhark', 'FutureBasic', 'Gambas', 'GAP', 'Genie', 'GLSL', 'Gosu', 'Groovy', 'Haskell', 'HicEst', 'Hy', 'i', 'Icon and Unicon', 'IDL', 'Idris', 'Inform 7', 'Ioke', 'Java', 'JavaScript', 'K', 'Klingphix', 'Klong', 'Kotlin', 'LabVIEW', 'Lambdatalk', 'Lang5', 'langur', 'Lasso', 'LFE', 'Liberty BASIC', 'LIL', 'Limbo', 'Lingo', 'Little', 'Logo', 'M2000 Interpreter', 'Maple', 'Mathcad', 'Mathematica / Wolfram Language', 'MATLAB / Octave', 'Maxima', 'Mercury', 'min', 'MiniScript', 'Nanoquery', 'Neko', 'Nemerle', 'NetRexx', 'NewLISP', 'Nial', 'Nim', 'Oberon-2', 'Objeck', 'Objective-C', 'OCaml', 'Oforth', 'Onyx', 'ooRexx', 'Order', 'OxygenBasic', 'Oz', 'PARI/GP', 'Pascal', 'Phixmonti', 'PHP', 'PicoLisp', 'Pike', 'PL/I', 'Pony', 'PostScript', 'PowerShell', 'Processing', 'Prolog', 'PureBasic', 'Q', 'QBasic', 'QB64', 'R', 'Racket', 'RapidQ', 'REBOL', 'Red', 'ReScript', 'Retro', 'REXX', 'RLaB', 'Ruby', 'Rust', 'S-lang', 
'SASL', 'Scala', 'Scheme', 'Seed7', 'SenseTalk', 'SETL', 'Simula', '360 Assembly', '6502 Assembly', 'Slate', 'Smalltalk', 'Ol', 'SNOBOL4', 'Standard ML', 'Stata', 'Swift', 'Tailspin', 'Tcl', 'TI-89 BASIC', 'Trith', 'UNIX Shell', 'Ursa', 'Vala', 'VBA', 'VBScript', 'Visual Basic .NET', 'Wart', 'BaCon', 'Bash', 'Yabasic', 'Yacas', 'Batch File', 'Yorick', 'Z80 Assembly', 'BBC BASIC', 'Brat', 'zkl', 'zonnon', 'Zsh', 'ZX Spectrum Basic', 'Clipper/XBase++', 'ColdFusion', 'Dart', 'DataWeave', 'Dragon', 'FurryScript', 'Fōrmulæ', 'Harbour', 'hexiscript', 'Hoon', 'Janet', '0815', 'Jsish', 'Latitude', 'LiveCode', 'Aikido', 'AmigaE', 'MiniZinc', 'Asymptote', 'NGS', 'bc', 'Befunge', 'Plorth', 'Potion', 'Chef', 'Clipper', 'Relation', 'Robotic', 'dc', 'DCL', 'DWScript', 'Shen', 'SPL', 'SQL', 'Eiffel', 'Symsyn', 'Emojicode', 'TI-83 BASIC', 'Transd', 'Excel', 'Visual Basic', 'FALSE', 'WDTE', 'Fermat', 'XLISP', 'Zig', 'friendly interactive shell', 'Zoea', 'Zoea Visual', 'GEORGE', 'Haxe', 'HolyC', 'LSE64', 'M4', 'MAXScript', 'Metafont', 'МК-61/52', 'ML/I', 'Modula-2', 'Modula-3', 'MUMPS', 'NSIS', 'Openscad', 'Panda', 'PHL', 'Piet', 'Plain English', 'Pop11', 'ProDOS', '8051 Assembly', 'Python 3.x Long Form', 'Raven', 'ALGOL 60', 'Run BASIC', 'Sass/SCSS', 'App Inventor', 'smart BASIC', 'SNUSP', 'Arendelle', 'SSEM', 'Argile', 'Toka', 'TUSCRIPT', '4DOS Batch', '8080 Assembly', 'Vedit macro language', '8086 Assembly', 'Axe', 'Elisa', 'Verilog', 'Vim Script', 'x86 Assembly', 'Euler Math Toolbox', 'Acurity Architect', 'XSLT', 'BML', 'Agena', 'Boo', 'Brainf***', 'LLVM', 'FOCAL', 'Frege', 'ALGOL-M', 'ChucK', 'Arbre', 'Clean', 'Hare', 'MATLAB', 'Astro', 'Applesoft BASIC', 'OOC', 'Bc', 'Computer/zero Assembly', 'SAS', 'Axiom', 'B', 'Dao', 'Caché ObjectScript', 'CLU', 'Scilab', 'DBL', 'Commodore BASIC', 'Diego', 'Dc', 'BCPL', 'Alore', 'Blade', 'Déjà Vu', 'Octave', 'Cowgol', 'BlitzMax', 'Falcon', 'BlooP', 'SequenceL', 'Sinclair ZX81 BASIC', 'GW-BASIC', 'Lobster', 'C1R', 'Explore', 'Clarion', 'Locomotive Basic', 'GUISS', 'Clio', 'TXR', 'Ursala', 'CLIPS', 'Microsoft Small Basic', 'Golfscript', 'Beads', 'Coco', 'Little Man Computer', 'Chapel', 'Comal', 'Curry', 'GML', 'NewLisp', 'Coq', 'Gastona', 'uBasic/4tH', 'Pyret', 'Dhall', 'Plain TeX', 'Halon', 'Wortel', 'FormulaOne', 'Dafny', 'Ksh', 'Eero', 'Fan', 'Draco', 'DUP', 'Io', 'Metapost', 'Logtalk', 'Dylan', 'TI-83_BASIC', 'Sather', 'Rascal', 'SIMPOL', 'IS-BASIC', 'KonsolScript', 'Pari/Gp', 'Genyris', 'EDSAC order code', 'Egel', 'Joy', 'lang5', 'XProc', 'XQuery', 'POV-Ray', 'Kitten', 'Lisaac', 'LOLCODE', 'SVG', 'MANOOL', 'LSL', 'Moonscript', 'Fhidwfe', 'Inspired by Rascal', 'Fish', 'MIPS Assembly', 'Monte', 'FUZE BASIC', 'NS-HUBASIC', 'Qi', 'GDScript', 'Glee', 'SuperCollider', 'Verbexx', 'Huginn', 'I', 'Informix 4GL', 'Isabelle', 'KQL', 'lambdatalk', 'RPG', 'Lhogho', 'Lily', 'xTalk', 'Scratch', 'Self', 'MAD', 'RATFOR', 'OpenEdge/Progress', 'Xtend', 'Suneido', 'Mirah', 'mIRC Scripting Language', 'ContextFree', 'Tern', 'MMIX', 'AmigaBASIC', 'AurelBasic', 'TorqueScript', 'MontiLang', 'MOO', 'MoonScript', 'Unicon', 'fermat', 'q', 'Myrddin', 'உயிர்/Uyir', 'MySQL', 'newLISP', 'VHDL', 'Oberon', 'Wee Basic', 'OpenEdge ABL/Progress 4GL', 'X86 Assembly', 'XBS', 'KAP', 'Perl5i', 'Peloton', 'PL/M', 'PL/SQL', 'Pointless', 'Polyglot:PL/I and PL/M', 'ToffeeScript', 'TMG', 'TPP', 'Pure', 'Pure Data', 'Xidel', 'S-BASIC', 'Salmon', 'SheerPower 4GL', 'Sparkling', 'Spin', 'SQL PL', 'Transact-SQL', 'True BASIC', 'TSE SAL', 'Tiny BASIC', 'TypeScript', 'Uniface', 'Unison', 'UTFool', 'VAX Assembly', 
'VTL-2', 'Wrapl', 'XBasic', 'Xojo', 'XSLT 1.0', 'XSLT 2.0', 'MACRO-10', 'ANSI Standard BASIC', 'UnixPipes', 'REALbasic', 'Golo', 'DM', 'X86-64 Assembly', 'GlovePIE', 'PowerBASIC', 'LotusScript', 'TIScript', 'Kite', 'V', 'Powershell', 'Vorpal', 'Never', 'Set lang', '80386 Assembly', 'Furor', 'Input conversion with Error Handling', 'Guile', 'ASIC', 'Autolisp', 'Agda', 'Swift Playground', 'Nascom BASIC', 'NetLogo', 'CFEngine', 'OASYS Assembler', 'Fennel', 'Object Pascal', 'Shale', 'GFA Basic', 'LDPL', 'Ezhil', 'SMEQL', 'tr', 'WinBatch', 'XPath 2.0', 'Quite BASIC', 'Gema', '6800 Assembly', 'Applescript', 'beeswax', 'gnuplot', 'ECMAScript', 'Snobol4', 'Blast', 'C/C++', 'Whitespace', 'Blue', 'C / C++', 'Apache Derby', 'Lychen', 'Oracle', 'Alternative version', 'PHP+SQLite', 'PILOT', 'PostgreSQL', 'PowerShell+SQLite', 'PureBasic+SQLite', 'Python+SQLite', 'SQLite', 'Tcl+SQLite', 'Transact-SQL (MSSQL)', 'Visual FoxPro', 'SmileBASIC', 'Datalog', 'SystemVerilog', 'Smart BASIC', 'Snobol', 'Terraform', 'ML', 'SQL/PostgreSQL', '4D', 'ArnoldC', 'ANSI BASIC', 'Delphi/Pascal', 'ooREXX', 'Dylan.NET', 'CMake', 'Lucid', 'XProfan', 'sed', 'Gnuplot', 'RPN (HP-15c)', 'Sed', 'JudoScript', 'ScriptBasic', 'Unix shell', 'Niue', 'Powerbuilder', 'C Shell', 'Zoomscript', 'MelonBasic', 'ScratchScript', 'SimpleCode', 'OASYS', 'HTML', 'tbas', 'LaTeX', 'Lilypond', 'MBS', 'B4X', 'Progress', 'SPARK / Ada', 'Arc', 'Icon', 'AutoHotkey_L', 'LSE', 'N/t/roff', 'Fexl', 'Ra', 'Koka', 'Maclisp', 'Mond', 'Nix', 'ZED', 'Inform 6', 'Visual Objects', 'Cind', 'm4', 'g-fu', 'pascal', 'Jinja', 'Mathprog', 'Rhope', 'Delphi and Pascal', 'Epoxy', 'SPARK', 'B4J', 'DIBOL-11', 'JavaFX Script', 'Pixilang', 'BASH (feat. sed & tr)', 'zig', 'Web 68', 'Shiny', 'Egison', 'OS X sha256sum', 'AsciiDots', 'FileMaker', 'Unlambda', 'eC', 'GLBasic', 'JOVIAL', 'haskell', 'Atari BASIC', 'ANTLR', 'Cubescript', 'OoRexx', 'WebAssembly', 'Woma', 'Intercal', 'Malbolge', 'LiveScript', 'Fancy', 'Detailed Description of Programming Task', 'Lean', 'GeneXus', 'CafeOBJ', 'TechBASIC', 'blz', 'MIRC Scripting Language', 'Oxygene', 'zsh', 'Make', 'Whenever', 'Sage', 'L++', 'Tosh', 'LC3 Assembly', 'SETL4', 'Pari/GP', 'OxygenBasic x86 Assembler', 'Pharo', 'Binary Lambda Calculus', 'Bob', 'bootBASIC', 'Turing', 'Ultimate++', 'Gabuzomeu', 'HQ9+', 'INTERCAL', 'Lisp', 'NASM', 'SPWN', 'Turbo Pascal', 'Nickle', 'SPAD', 'Mozart/Oz', 'Batch file', 'SAC', 'C and C++', 'vbscript', 'OPL', 'Wollok', 'Pascal / Delphi / Free Pascal', 'GNU make', 'Recursive', 'C3', 'Picolisp', 'Note 1', 'Note 2', 'Visual Prolog', 'ivy', 'k', 'clojure', 'Unix Shell', 'Basic09', 'S-Basic', 'FreePascal', 'Wolframalpha', 'c_sharp', 'LiveCode Builder', 'Heron', 'SPSS', 'LibreOffice Basic', 'PDP-11 Assembly', 'Solution with recursion', 'Lua/Torch', 'tsql', 'Transact SQL', 'X++', 'Xanadu', 'GDL', 'C_sharp', 'TutorialD', 'Glagol', 'Basic', 'Brace', 'Cixl', 'ELLA', 'Lox', 'Node.js', 'Generic', 'Hope', 'Snap!', 'TSQL', 'MathCortex', 'Mathmap', 'TI-83 BASIC, TI-89 BASIC', 'ZPL', 'LuaTeX', 'AmbientTalk', 'Alternate version to handle 64 and 128 bit integers.', 'Crack', 'Corescript', 'Fortress', 'GB BASIC', 'IWBASIC', 'RPL', 'DMS', 'dodo0', 'MIXAL', 'Occam', 'Morfa', 'Snabel', 'ObjectIcon', 'Panoramic', 'PeopleCode', 'Monicelli', 'gecho', 'Hack', 'JSON', 'Swym', 'ReasonML', 'make', 'TOML', 'WEB', 'SkookumScript', 'Batch', 'TransFORTH', 'Assembly', 'Iterative', 'LC-3', 'Quick Basic/QBASIC/PDS 7.1/VB-DOS', 'Turbo-Basic XL', 'GNU APL', 'OOCalc', 'QUACKASM', 'VB-DOS', 'Typescript', 'x86-64 Assembly', 'FORTRAN', 'Furryscript', 
'Gridscript', 'Necromantus', 'HyperTalk', 'Biferno', 'AspectJ', 'SuperTalk', 'Rockstar', 'NMAKE.EXE', 'Opa', 'Algae', 'Anyways', 'Apricot', 'AutoLISP', 'Battlestar', 'Bird', 'Luck', 'Brlcad', 'C++/CLI', 'C2', 'Casio BASIC', 'Cat', 'Cduce', 'Clay', 'Cobra', 'Comefrom0x10', 'Creative Basic', 'Integer BASIC', 'DDNC', 'DeviousYarn', 'DIV Games Studio', 'Wisp', 'AMPL', 'Pare', 'PepsiScript', 'Installing Processing', 'Writing your first program', 'batari Basic', 'Jack', 'elastiC', 'TI-83 Hex Assembly', 'Extended BrainF***', '1C', 'PASM', 'Pict', 'ferite', 'Bori', 'RASEL', 'Echolisp', 'XPath', 'MLite', 'HPPPL', 'Gentee', 'JSE', 'Just Basic', 'Global Script', 'Nyquist', 'HLA', 'Teradata Stored Procedure', 'HTML5', 'Portugol', 'UBASIC', 'NOWUT', 'Inko', 'Jacquard Loom', 'JCL', 'Supernova', 'Small Basic', 'Kabap', 'Kaya', 'Kdf9 Usercode', 'Keg', 'KSI', 'Gecho', 'Gri', 'VBA Excel', 'Luna', 'MACRO-11', 'MINIL', 'Maude', 'MDL', 'Mosaic', 'Purity', 'MUF', 'MyDef', 'MyrtleScript', 'Mythryl', 'Neat', 'ThinBASIC', 'Nit', 'NLP++', 'Odin', 'OpenLisp', 'PDP-1 Assembly', 'Peylang', 'Pikachu', 'NESL', 'PIR', 'Plan', 'Programming Language', 'PROMAL', 'PSQL', 'Quill', 'xEec', 'RED', 'Risc-V', 'RTL/2', 'Sing', 'Sisal', 'SoneKing Assembly', 'SPARC Assembly', 'Swahili', 'Teco', 'Terra', 'TestML', 'Viua VM assembly', 'Whiley', 'Wolfram Language', 'X10', 'Quack', 'K4', 'XL', 'MyHDL', 'JAMES II/Rule-based Cellular Automata', 'APEX', 'QuickBASIC 4.5', 'BrightScript (for Roku)', 'Coconut', 'CSS', 'MapBasic', 'Gleam', 'AdvPL', 'Iptscrae', 'Kamailio Script', 'KL1', 'MEL', 'NATURAL', 'NewtonScript', 'PDP-8 Assembly', 'FRISC Assembly', 'Amstrad CPC Locomotive BASIC', 'Ruby with RSpec', 'php', 'Small', 'Lush', 'Squirrel', 'PL/pgSQL', 'XMIDAS', 'Rebol', 'embedded C for AVR MCU', 'FPr', 'Softbridge BASIC', 'StreamIt', 'jsish', 'JScript.NET', 'MS-DOS', 'Beeswax', 'eSQL', 'QL SuperBASIC', 'Rapira', 'Jq', 'scheme', 'oberon-2', '{{header|Vlang}', 'XUL', 'Soar', 'Befunge 93', 'Bash Shell', 'JacaScript', 'Xfractint', 'JoCaml', 'JotaCode', 'Atari Basic', 'Stretch 1', 'CFScript', 'Stretch 2', 'RPGIV', 'Shell', 'Felix', 'Flex', 'kotlin', 'Deluge', 'ksh', 'OCTAVE', 'vbScript', 'Javascript/NodeJS', 'Coffeescript', 'MS SmallBasic', 'Setl4', 'Overview', '1. Grid structure functions', '2. Calendar data functions', '3. Output configuration', 'WYLBUR', 'Mathematica/ Wolfram Language', 'Commodore Basic', 'Wolfram Language/Mathematica', 'Korn Shell', 'PARIGP', 'Metal', 'VBA (Visual Basic for Application)', 'Lolcode', 'mLite', 'z/Arch Assembler', "G'MIC", 'C# and Visual Basic .NET', 'Run Basic', 'FP', 'XEmacs Lisp', 'Mathematica//Wolfram Language', 'RPL/2', 'Ya', 'JavaScript + HTML', 'JavaScript + SVG', 'Quick BASIC', 'MatLab', 'Pascal and Object Pascal', 'Apache Ant', 'rust', 'VBA/Visual Basic', 'Go!', 'Lambda Prolog', 'Monkey']
```

## Dataset Structure

### Data Instances

First row:

```
{'task_url': 'http://rosettacode.org/wiki/Ascending_primes', 'task_name': 'Ascending primes', 'task_description': "Generate and show all primes with strictly ascending decimal digits.\n\nAside: Try solving without peeking at existing solutions. I had a weird idea for generating\na prime sieve faster, which needless to say didn't pan out. 
The solution may be p(r)etty trivial\nbut generating them quickly is at least mildly interesting.\nTip: filtering all 7,027,260 primes below 123,456,789 probably won't kill you, but there is\nat least one significantly better and much faster way, needing a mere 511 odd/prime tests.\n\n\n\nSee also\n OEIS:A052015 - Primes with distinct digits in ascending order\n\n\nRelated\n\nPrimes with digits in nondecreasing order (infinite series allowing duplicate digits, whereas this isn't and doesn't)\nPandigital prime (whereas this is the smallest, with gaps in the used digits being permitted)\n\n", 'language_url': '#ALGOL_68', 'language_name': 'ALGOL 68'}
```

Code:

```
BEGIN # find all primes with strictly increasing digits #
  PR read "primes.incl.a68" PR # include prime utilities #
  PR read "rows.incl.a68" PR   # include array utilities #
  [ 1 : 512 ]INT primes;       # there will be at most 512 (2^9) primes #
  INT p count := 0;            # number of primes found so far #
  FOR d1 FROM 0 TO 1 DO
    INT n1 = d1;
    FOR d2 FROM 0 TO 1 DO
      INT n2 = IF d2 = 1 THEN ( n1 * 10 ) + 2 ELSE n1 FI;
      FOR d3 FROM 0 TO 1 DO
        INT n3 = IF d3 = 1 THEN ( n2 * 10 ) + 3 ELSE n2 FI;
        FOR d4 FROM 0 TO 1 DO
          INT n4 = IF d4 = 1 THEN ( n3 * 10 ) + 4 ELSE n3 FI;
          FOR d5 FROM 0 TO 1 DO
            INT n5 = IF d5 = 1 THEN ( n4 * 10 ) + 5 ELSE n4 FI;
            FOR d6 FROM 0 TO 1 DO
              INT n6 = IF d6 = 1 THEN ( n5 * 10 ) + 6 ELSE n5 FI;
              FOR d7 FROM 0 TO 1 DO
                INT n7 = IF d7 = 1 THEN ( n6 * 10 ) + 7 ELSE n6 FI;
                FOR d8 FROM 0 TO 1 DO
                  INT n8 = IF d8 = 1 THEN ( n7 * 10 ) + 8 ELSE n7 FI;
                  FOR d9 FROM 0 TO 1 DO
                    INT n9 = IF d9 = 1 THEN ( n8 * 10 ) + 9 ELSE n8 FI;
                    IF n9 > 0 THEN
                      IF is probably prime( n9 ) THEN
                        # have a prime with strictly ascending digits #
                        primes[ p count +:= 1 ] := n9
                      FI
                    FI
                  OD
                OD
              OD
            OD
          OD
        OD
      OD
    OD
  OD;
  QUICKSORT primes FROMELEMENT 1 TOELEMENT p count; # sort the primes #
  FOR i TO p count DO # display the primes #
    print( ( " ", whole( primes[ i ], -8 ) ) );
    IF i MOD 10 = 0 THEN print( ( newline ) ) FI
  OD
END
```

### Data Fields

```
Dataset({
    features: ['task_url', 'task_name', 'task_description', 'language_url', 'language_name', 'code'],
    num_rows: 79013
})
```

### Data Splits

The dataset only contains one split, namely the "train" split.

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

To cite the Rosetta Code website you can use the following bibtex entry:

```bibtex
@misc{rosetta-code,
    author = "Rosetta Code",
    title = "Rosetta Code --- Rosetta Code{,} ",
    year = "2022",
    url = "https://rosettacode.org/w/index.php?title=Rosetta_Code&oldid=322370",
    note = "[Online; accessed 8-December-2022]"
}
```

### Contributions

Thanks to [@christopher](https://twitter.com/christopher) for adding this dataset.
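Two hedged sketches follow, neither taken from the dataset itself. First, the task description's hint about "a mere 511 odd/prime tests" corresponds to enumerating the 2^9 - 1 = 511 non-empty subsets of the digits 1 through 9, which is also what the nested FOR loops in the ALGOL 68 solution above do; a compact Python rendering of the same idea (assuming `sympy` for primality testing) looks like this:

```python
# Hedged sketch of the digit-subset approach hinted at in the task description;
# sympy.isprime is an assumption, not part of the dataset.
from itertools import combinations
from sympy import isprime

ascending_primes = sorted(
    n
    for r in range(1, 10)                       # subset sizes 1..9
    for digits in combinations("123456789", r)  # digits come out already ascending
    if isprime(n := int("".join(digits)))       # 2**9 - 1 = 511 candidates in total
)
print(len(ascending_primes), ascending_primes[:10])
```

Second, since the dataset ships a single flat split, slicing it by language is a one-line filter; a minimal sketch, assuming the repository id recorded below loads directly with `datasets`:

```python
from datasets import load_dataset

ds = load_dataset("cakiki/rosetta-code", split="train")

# Select all solutions written in one language, e.g. Python.
python_rows = ds.filter(lambda ex: ex["language_name"] == "Python")
print(python_rows[0]["task_name"])
print(python_rows[0]["code"][:200])
```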
cakiki/rosetta-code
[ "language:code", "license:gfdl", "region:us" ]
2022-06-28T19:41:33+00:00
{"language": "code", "license": "gfdl"}
2023-09-24T09:17:35+00:00
[]
[ "code" ]
TAGS #language-code #license-gfdl #region-us
# Dataset Card for the Rosetta Code Dataset ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary > Rosetta Code is a programming chrestomathy site. The idea is to present solutions to the same task in as many different languages as possible, to demonstrate how languages are similar and different, and to aid a person with a grounding in one approach to a problem in learning another. Rosetta Code currently has 1,203 tasks, 389 draft tasks, and is aware of 883 languages, though we do not (and cannot) have solutions to every task in every language. ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances First row: Code: ### Data Fields ### Data Splits The dataset only contains one split, namely the "train" split. ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information To cite the Rosetta Code webiste you can use the following bibtex entry: ### Contributions Thanks to @christopher for adding this dataset.
[ "# Dataset Card for the Rosetta Code Dataset", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\n> Rosetta Code is a programming chrestomathy site. The idea is to present solutions to the same task in as many different languages as possible, to demonstrate how languages are similar and different, and to aid a person with a grounding in one approach to a problem in learning another. Rosetta Code currently has 1,203 tasks, 389 draft tasks, and is aware of 883 languages, though we do not (and cannot) have solutions to every task in every language.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances\n\nFirst row:\n\nCode:", "### Data Fields", "### Data Splits\n\nThe dataset only contains one split, namely the \"train\" split.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\n\n\n\n\nTo cite the Rosetta Code webiste you can use the following bibtex entry:", "### Contributions\n\nThanks to @christopher for adding this dataset." ]
[ "TAGS\n#language-code #license-gfdl #region-us \n", "# Dataset Card for the Rosetta Code Dataset", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\n> Rosetta Code is a programming chrestomathy site. The idea is to present solutions to the same task in as many different languages as possible, to demonstrate how languages are similar and different, and to aid a person with a grounding in one approach to a problem in learning another. Rosetta Code currently has 1,203 tasks, 389 draft tasks, and is aware of 883 languages, though we do not (and cannot) have solutions to every task in every language.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances\n\nFirst row:\n\nCode:", "### Data Fields", "### Data Splits\n\nThe dataset only contains one split, namely the \"train\" split.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\n\n\n\n\nTo cite the Rosetta Code webiste you can use the following bibtex entry:", "### Contributions\n\nThanks to @christopher for adding this dataset." ]
[ 17, 11, 125, 24, 111, 10, 4, 6, 12, 5, 23, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 24, 18 ]
[ "passage: TAGS\n#language-code #license-gfdl #region-us \n# Dataset Card for the Rosetta Code Dataset## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:### Dataset Summary\n\n> Rosetta Code is a programming chrestomathy site. The idea is to present solutions to the same task in as many different languages as possible, to demonstrate how languages are similar and different, and to aid a person with a grounding in one approach to a problem in learning another. Rosetta Code currently has 1,203 tasks, 389 draft tasks, and is aware of 883 languages, though we do not (and cannot) have solutions to every task in every language.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances\n\nFirst row:\n\nCode:### Data Fields### Data Splits\n\nThe dataset only contains one split, namely the \"train\" split.## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information\n\n\n\n\n\nTo cite the Rosetta Code webiste you can use the following bibtex entry:### Contributions\n\nThanks to @christopher for adding this dataset." ]
1cf33ab60b1855c636eed32ca381dbac55116571
# Dataset Card for OpenQuestionType

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://shuyangcao.github.io/projects/ontology_open_ended_question/](https://shuyangcao.github.io/projects/ontology_open_ended_question/)
- **Repository:** [https://github.com/ShuyangCao/open-ended_question_ontology](https://github.com/ShuyangCao/open-ended_question_ontology)
- **Paper:** [https://aclanthology.org/2021.acl-long.502/](https://aclanthology.org/2021.acl-long.502/)
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]

### Dataset Summary

Question types annotated on open-ended questions.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

English

## Dataset Structure

### Data Instances

An example looks as follows.

```
{
  "id": "123",
  "question": "A test question?",
  "annotator1": ["verification", None],
  "annotator2": ["concept", None],
  "resolve_type": "verification"
}
```

### Data Fields

- `id`: a `string` feature.
- `question`: a `string` feature.
- `annotator1`: a sequence feature containing two elements. The first one is the most confident label by the first annotator and the second one is the second-most confident label by the first annotator.
- `annotator2`: a sequence feature containing two elements. The first one is the most confident label by the second annotator and the second one is the second-most confident label by the second annotator.
- `resolve_type`: a `string` feature which is the final label after resolving disagreement.

### Data Splits

- train: 3716
- valid: 580
- test: 660

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

Yahoo Answers and Reddit users.

### Personal and Sensitive Information

None.
## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

CC BY 4.0

### Citation Information

```
@inproceedings{cao-wang-2021-controllable,
    title = "Controllable Open-ended Question Generation with A New Question Type Ontology",
    author = "Cao, Shuyang and Wang, Lu",
    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.acl-long.502",
    doi = "10.18653/v1/2021.acl-long.502",
    pages = "6424--6439",
    abstract = "We investigate the less-explored task of generating open-ended questions that are typically answered by multiple sentences. We first define a new question type ontology which differentiates the nuanced nature of questions better than widely used question words. A new dataset with 4,959 questions is labeled based on the new ontology. We then propose a novel question type-aware question generation framework, augmented by a semantic graph representation, to jointly predict question focuses and produce the question. Based on this framework, we further use both exemplars and automatically generated templates to improve controllability and diversity. Experiments on two newly collected large-scale datasets show that our model improves question quality over competitive comparisons based on automatic metrics. Human judges also rate our model outputs highly in answerability, coverage of scope, and overall quality. Finally, our model variants with templates can produce questions with enhanced controllability and diversity.",
}
```
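A minimal sketch (not part of the original card) of inspecting the resolved label distribution; it assumes the repository id recorded below loads directly with `datasets` and that the splits are exposed as `train`/`validation`/`test`:

```python
# Hedged sketch: loadability and split naming are assumptions based on the card.
from collections import Counter
from datasets import load_dataset

ds = load_dataset("launch/open_question_type")

print(Counter(ds["train"]["resolve_type"]))  # distribution over the ontology's types
print(ds["train"][0])                        # id, question, annotator labels, resolve_type
```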
launch/open_question_type
[ "task_categories:text-classification", "annotations_creators:expert-generated", "multilinguality:monolingual", "language:en", "license:cc-by-4.0", "region:us" ]
2022-06-28T19:55:58+00:00
{"annotations_creators": ["expert-generated"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "OpenQuestionType"}
2022-11-09T01:58:10+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #annotations_creators-expert-generated #multilinguality-monolingual #language-English #license-cc-by-4.0 #region-us
# Dataset Card for OpenQuestionType ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: URL - Leaderboard: - Point of Contact: ### Dataset Summary Question types annotated on open-ended questions. ### Supported Tasks and Leaderboards ### Languages English ## Dataset Structure ### Data Instances An example looks as follows. ### Data Fields - 'id': a 'string' feature. - 'question': a 'string' feature. - 'annotator1': a sequence feature containing two elements. The first one is the most confident label by the first annotator and the second one is the second-most confident label by the first annotator. - 'annotator2': a sequence feature containing two elements. The first one is the most confident label by the second annotator and the second one is the second-most confident label by the second annotator. - 'resolve_type': a 'string' feature which is the final label after resolving disagreement. ### Data Splits - train: 3716 - valid: 580 - test: 660 ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? Yahoo Answer and Reddit users. ### Personal and Sensitive Information None. ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information CC BY 4.0
[ "# Dataset Card for OpenQuestionType", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nQuestion types annotated on open-ended questions.", "### Supported Tasks and Leaderboards", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nAn example looks as follows.", "### Data Fields\n\n- 'id': a 'string' feature.\n- 'question': a 'string' feature.\n- 'annotator1': a sequence feature containing two elements. The first one is the most confident label by the first annotator and the second one is the second-most confident label by the first annotator.\n- 'annotator2': a sequence feature containing two elements. The first one is the most confident label by the second annotator and the second one is the second-most confident label by the second annotator.\n- 'resolve_type': a 'string' feature which is the final label after resolving disagreement.", "### Data Splits\n\n- train: 3716\n- valid: 580\n- test: 660", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?\n\nYahoo Answer and Reddit users.", "### Personal and Sensitive Information\n\nNone.", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nCC BY 4.0" ]
[ "TAGS\n#task_categories-text-classification #annotations_creators-expert-generated #multilinguality-monolingual #language-English #license-cc-by-4.0 #region-us \n", "# Dataset Card for OpenQuestionType", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nQuestion types annotated on open-ended questions.", "### Supported Tasks and Leaderboards", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nAn example looks as follows.", "### Data Fields\n\n- 'id': a 'string' feature.\n- 'question': a 'string' feature.\n- 'annotator1': a sequence feature containing two elements. The first one is the most confident label by the first annotator and the second one is the second-most confident label by the first annotator.\n- 'annotator2': a sequence feature containing two elements. The first one is the most confident label by the second annotator and the second one is the second-most confident label by the second annotator.\n- 'resolve_type': a 'string' feature which is the final label after resolving disagreement.", "### Data Splits\n\n- train: 3716\n- valid: 580\n- test: 660", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?\n\nYahoo Answer and Reddit users.", "### Personal and Sensitive Information\n\nNone.", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nCC BY 4.0" ]
[ 51, 9, 125, 27, 18, 10, 5, 6, 13, 154, 20, 5, 7, 4, 10, 16, 11, 8, 7, 8, 7, 5, 6, 9 ]
[ "passage: TAGS\n#task_categories-text-classification #annotations_creators-expert-generated #multilinguality-monolingual #language-English #license-cc-by-4.0 #region-us \n# Dataset Card for OpenQuestionType## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nQuestion types annotated on open-ended questions.### Supported Tasks and Leaderboards### Languages\n\nEnglish## Dataset Structure### Data Instances\n\nAn example looks as follows.### Data Fields\n\n- 'id': a 'string' feature.\n- 'question': a 'string' feature.\n- 'annotator1': a sequence feature containing two elements. The first one is the most confident label by the first annotator and the second one is the second-most confident label by the first annotator.\n- 'annotator2': a sequence feature containing two elements. The first one is the most confident label by the second annotator and the second one is the second-most confident label by the second annotator.\n- 'resolve_type': a 'string' feature which is the final label after resolving disagreement.### Data Splits\n\n- train: 3716\n- valid: 580\n- test: 660## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?\n\nYahoo Answer and Reddit users.### Personal and Sensitive Information\n\nNone.## Considerations for Using the Data### Social Impact of Dataset" ]