sha | text | id | tags | created_at | metadata | last_modified | arxiv | languages | tags_str | text_str | text_lists | processed_texts
---|---|---|---|---|---|---|---|---|---|---|---|---|
4d25d534c42edc1a204468be223ab359828c8e29 | # AutoTrain Dataset for project: fine_tune_table_tm2
## Dataset Description
This dataset has been automatically processed by AutoTrain for project fine_tune_table_tm2.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "List all PO headers with a valid vendor record in database",
"target": "select * from RETAILBUYER_POHEADER P inner join RETAILBUYER_VENDOR V\non P.VENDOR_ID = V.VENDOR_ID"
},
{
"text": "List all details of PO headers which have a vendor in vendor table",
"target": "select * from RETAILBUYER_POHEADER P inner join RETAILBUYER_VENDOR V\non P.VENDOR_ID = V.VENDOR_ID"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
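The fields map directly onto a text-to-SQL fine-tuning setup. A minimal loading sketch (assuming the standard `datasets` API; the repository id is taken from this card's metadata):
```python
from datasets import load_dataset

# Load the AutoTrain-processed text-to-SQL pairs (splits: "train" and "valid")
dataset = load_dataset("Aman6917/autotrain-data-fine_tune_table_tm2")

# Each record pairs a natural-language request ("text") with a SQL query ("target")
sample = dataset["train"][0]
print(sample["text"])
print(sample["target"])
```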
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 32 |
| valid | 17 |
| Aman6917/autotrain-data-fine_tune_table_tm2 | [
"task_categories:summarization",
"region:us"
] | 2023-01-02T11:48:55+00:00 | {"task_categories": ["summarization"]} | 2023-01-03T12:38:25+00:00 | [] | [] | TAGS
#task_categories-summarization #region-us
| AutoTrain Dataset for project: fine\_tune\_table\_tm2
=====================================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project fine\_tune\_table\_tm2.
### Languages
The BCP-47 code for the dataset's language is unk.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#task_categories-summarization #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
78ed19793e07692925fba02292dc6d58b2d46404 |
This dataset consists of three CSV files, namely: 'cs.csv', 'ds.csv', and 'p.csv'.
Each CSV file includes the data for the questions asked on a Stack Exchange (SE) question-answering community, from the creation of the community until May 2021.
- 'cs.csv' --> [Computer Science SE](https://cs.stackexchange.com/)
- 'ds.csv' --> [Data Science SE](https://datascience.stackexchange.com/)
- 'p.csv' --> [Political Science SE](https://politics.stackexchange.com/)
Each CSV file has the following columns:
- `id`: the question id
- `title`: the title of the question
- `body`: the body or text of the question
- `tags`: the list of tags assigned to the question
- `label`: a label indicating whether the question is resolved or not (0: not resolved; 1: resolved)
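A minimal sketch for inspecting one of the CSV files with pandas (assuming `pandas` and `huggingface_hub` are installed; the in-repo file path `cs.csv` is an assumption based on the file names listed above):
```python
import pandas as pd
from huggingface_hub import hf_hub_download

# Download one CSV file from the dataset repository (in-repo path assumed to be "cs.csv")
csv_path = hf_hub_download(
    repo_id="habedi/stack-exchange-dataset",
    filename="cs.csv",
    repo_type="dataset",
)

# Load the questions and check the documented columns and the label balance
df = pd.read_csv(csv_path)
print(df.columns.tolist())          # expected: id, title, body, tags, label
print(df["label"].value_counts())   # 0: not resolved, 1: resolved
```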
The dataset was used in the following studies:
- [A deep learning-based approach for identifying unresolved questions on Stack Exchange Q&A communities through graph-based communication modelling](https://doi.org/10.1007/s41060-023-00454-0)
- [Survival analysis for user disengagement prediction: question-and-answering communities’ case](https://doi.org/10.1007/s13278-022-00914-8) | habedi/stack-exchange-dataset | [
"task_categories:text-classification",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:cc",
"region:us"
] | 2023-01-02T12:13:24+00:00 | {"language": ["en"], "license": "cc", "size_categories": ["10K<n<100K"], "task_categories": ["text-classification", "question-answering"], "pretty_name": "Stack Exchange -- Question Dataset"} | 2023-11-29T06:48:06+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #task_categories-question-answering #size_categories-10K<n<100K #language-English #license-cc #region-us
|
This dataset consists of three CSV files, namely: 'URL', 'URL', and 'p.csv'.
Each CSV file includes the data for the questions asked on a Stack Exchange (SE) question-answering community, from the creation of the community until May 2021.
- 'URL' --> Computer Science SE
- 'URL' --> Data Science SE
- 'p.csv' --> Political Science SE
Each CSV file has the following columns:
- 'id': the question id
- 'title': the title of the question
- 'body': the body or text of the question
- 'tags': the list of tags assigned to the question
- 'label': a label indicating whether the question is resolved or not (0: not resolved; 1: resolved)
The dataset was used in the following studies:
- A deep learning-based approach for identifying unresolved questions on Stack Exchange Q&A communities through graph-based communication modelling
- Survival analysis for user disengagement prediction: question-and-answering communities’ case | [] | [
"TAGS\n#task_categories-text-classification #task_categories-question-answering #size_categories-10K<n<100K #language-English #license-cc #region-us \n"
] |
475c56aafb9b1b0e3c5b197ee0990d0511861542 | # Dataset Card for "code-review-instruct-critique-revision"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Dahoas/code-review-instruct-critique-revision | [
"region:us"
] | 2023-01-02T12:21:35+00:00 | {"dataset_info": {"features": [{"name": "body", "dtype": "string"}, {"name": "answer", "struct": [{"name": "body", "dtype": "string"}, {"name": "comments", "list": [{"name": "ContentLicense", "dtype": "string"}, {"name": "CreationDate", "dtype": "string"}, {"name": "Id", "dtype": "string"}, {"name": "Score", "dtype": "string"}, {"name": "body", "dtype": "string"}]}, {"name": "meta_data", "struct": [{"name": "CommentCount", "dtype": "string"}, {"name": "ContentLicense", "dtype": "string"}, {"name": "CreationDate", "dtype": "string"}, {"name": "Id", "dtype": "string"}, {"name": "ParentId", "dtype": "string"}, {"name": "Score", "dtype": "string"}]}]}, {"name": "comments", "list": [{"name": "ContentLicense", "dtype": "string"}, {"name": "CreationDate", "dtype": "string"}, {"name": "Id", "dtype": "string"}, {"name": "Score", "dtype": "string"}, {"name": "body", "dtype": "string"}]}, {"name": "meta_data", "struct": [{"name": "AcceptedAnswerId", "dtype": "string"}, {"name": "CommentCount", "dtype": "string"}, {"name": "ContentLicense", "dtype": "string"}, {"name": "CreationDate", "dtype": "string"}, {"name": "Id", "dtype": "string"}, {"name": "Score", "dtype": "string"}, {"name": "Tags", "sequence": "string"}, {"name": "Title", "dtype": "string"}]}, {"name": "question_id", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 322516541, "num_examples": 32800}], "download_size": 127604867, "dataset_size": 322516541}} | 2023-01-08T15:02:44+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "code-review-instruct-critique-revision"
More Information needed | [
"# Dataset Card for \"code-review-instruct-critique-revision\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"code-review-instruct-critique-revision\"\n\nMore Information needed"
] |
12399f1eb09fbf1305d580eb9bbebdc5e6d0cb01 |
# Dataset Card for echr_rational
### Dataset Summary
[Deconfounding Legal Judgment Prediction for European Court of Human
Rights Cases Towards Better Alignment with Experts](https://arxiv.org/pdf/2210.13836.pdf)
This work demonstrates that Legal Judgement Prediction systems without expert-informed adjustments can be vulnerable to shallow, distracting surface signals that arise from corpus construction, case distribution, and confounding factors. To mitigate this, we use domain expertise to strategically identify statistically predictive but legally irrelevant information. We adopt adversarial training to prevent the system from relying on it. We evaluate our deconfounded models by employing interpretability techniques and comparing to expert annotations. Quantitative experiments and qualitative analysis show that our deconfounded model consistently aligns better with expert rationales than baselines trained for prediction only. We further contribute a set of reference expert annotations to the validation and testing partitions of an existing benchmark dataset of European Court of Human Rights cases
### Languages
English
# Citation Information
@article{santosh2022deconfounding,
title={Deconfounding Legal Judgment Prediction for European Court of Human Rights Cases Towards Better Alignment with Experts},
author={Santosh, TYS and Xu, Shanshan and Ichim, Oana and Grabmair, Matthias},
journal={arXiv preprint arXiv:2210.13836},
year={2022}
}
| TUMLegalTech/echr_rational | [
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"language:en",
"license:afl-3.0",
"arxiv:2210.13836",
"region:us"
] | 2023-01-02T13:13:23+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": "afl-3.0", "multilinguality": ["monolingual"], "size_categories": [50]} | 2023-01-06T14:29:05+00:00 | [
"2210.13836"
] | [
"en"
] | TAGS
#annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #language-English #license-afl-3.0 #arxiv-2210.13836 #region-us
|
# Dataset Card for echr_rational
### Dataset Summary
Deconfounding Legal Judgment Prediction for European Court of Human
Rights Cases Towards Better Alignment with Experts
This work demonstrates that Legal Judgement Prediction systems without expert-informed adjustments can be vulnerable to shallow, distracting surface signals that arise from corpus construction, case distribution, and confounding factors. To mitigate this, we use domain expertise to strategically identify statistically predictive but legally irrelevant information. We adopt adversarial training to prevent the system from relying on it. We evaluate our deconfounded models by employing interpretability techniques and comparing to expert annotations. Quantitative experiments and qualitative analysis show that our deconfounded model consistently aligns better with expert rationales than baselines trained for prediction only. We further contribute a set of reference expert annotations to the validation and testing partitions of an existing benchmark dataset of European Court of Human Rights cases
### Languages
English
@article{santosh2022deconfounding,
title={Deconfounding Legal Judgment Prediction for European Court of Human Rights Cases Towards Better Alignment with Experts},
author={Santosh, TYS and Xu, Shanshan and Ichim, Oana and Grabmair, Matthias},
journal={arXiv preprint arXiv:2210.13836},
year={2022}
}
| [
"# Dataset Card for echr_rational",
"### Dataset Summary\nDeconfounding Legal Judgment Prediction for European Court of Human\nRights Cases Towards Better Alignment with Experts\n\nThis work demonstrates that Legal Judgement Prediction systems without expert-informed adjustments can be vulnerable to shallow, distracting surface signals that arise from corpus construction, case distribution, and confounding factors. To mitigate this, we use domain expertise to strategically identify statistically predictive but legally irrelevant information. We adopt adversarial training to prevent the system from relying on it. We evaluate our deconfounded models by employing interpretability techniques and comparing to expert annotations. Quantitative experiments and qualitative analysis show that our deconfounded model consistently aligns better with expert rationales than baselines trained for prediction only. We further contribute a set of reference expert annotations to the validation and testing partitions of an existing benchmark dataset of European Court of Human Rights cases",
"### Languages\nEnglish\n\n\n\n\n\n @article{santosh2022deconfounding,\n title={Deconfounding Legal Judgment Prediction for European Court of Human Rights Cases Towards Better Alignment with Experts},\n author={Santosh, TYS and Xu, Shanshan and Ichim, Oana and Grabmair, Matthias},\n journal={arXiv preprint arXiv:2210.13836},\n year={2022}\n }"
] | [
"TAGS\n#annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #language-English #license-afl-3.0 #arxiv-2210.13836 #region-us \n",
"# Dataset Card for echr_rational",
"### Dataset Summary\nDeconfounding Legal Judgment Prediction for European Court of Human\nRights Cases Towards Better Alignment with Experts\n\nThis work demonstrates that Legal Judgement Prediction systems without expert-informed adjustments can be vulnerable to shallow, distracting surface signals that arise from corpus construction, case distribution, and confounding factors. To mitigate this, we use domain expertise to strategically identify statistically predictive but legally irrelevant information. We adopt adversarial training to prevent the system from relying on it. We evaluate our deconfounded models by employing interpretability techniques and comparing to expert annotations. Quantitative experiments and qualitative analysis show that our deconfounded model consistently aligns better with expert rationales than baselines trained for prediction only. We further contribute a set of reference expert annotations to the validation and testing partitions of an existing benchmark dataset of European Court of Human Rights cases",
"### Languages\nEnglish\n\n\n\n\n\n @article{santosh2022deconfounding,\n title={Deconfounding Legal Judgment Prediction for European Court of Human Rights Cases Towards Better Alignment with Experts},\n author={Santosh, TYS and Xu, Shanshan and Ichim, Oana and Grabmair, Matthias},\n journal={arXiv preprint arXiv:2210.13836},\n year={2022}\n }"
] |
cf4d879b7ffe35b240659a5b541484c3ec0da6ba | Dataset with Prolog code / query pairs and execution results. | alex43219/prolog-dataset-full | [
"task_categories:other",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:code",
"region:us"
] | 2023-01-02T13:30:20+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["crowdsourced"], "language": ["code"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["other"], "task_ids": [], "pretty_name": "Prolog dataset", "tags": []} | 2023-01-02T16:43:04+00:00 | [] | [
"code"
] | TAGS
#task_categories-other #annotations_creators-machine-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #language-code #region-us
| Dataset with Prolog code / query pairs and execution results. | [] | [
"TAGS\n#task_categories-other #annotations_creators-machine-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #language-code #region-us \n"
] |
d05d1b5797d4e1d020ab972021f0df669e374e92 |

### Description
Extended dataset inferred by the named entity recognition model [en_ner_prompting](https://huggingface.co/teo-sanchez/en_ner_prompting). This model has been trained on hand-annotated prompts from [poloclub/diffusiondb](https://huggingface.co/datasets/poloclub/diffusiondb).
This dataset was therefore inferred by this model and may contain mistakes, especially in certain categories (cf. the model card).
The entities comprise 7 main categories and 11 subcategories for a total of 16 categories, extracted from a topic analysis made with [BERTopic](https://maartengr.github.io/BERTopic/index.html).
The topic analysis can be explored in [the following visualization](https://teo-sanchez.github.io/projects/prompting_map.html).
```
├── medium/
│ ├── photography
│ ├── painting
│ ├── rendering
│ └── illustration
├── influence/
│ ├── artist
│ ├── genre
│ ├── artwork
│ └── repository
├── light
├── color
├── composition
├── detail
└── context/
├── era
├── weather
└── emotion
```
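A minimal loading sketch (assuming the standard `datasets` API; the record schema is not documented on this card, so the first record is printed as-is to reveal it):
```python
from datasets import load_dataset

# Load the NER annotations inferred over DiffusionDB prompts
dataset = load_dataset("teo-sanchez/diffusiondb_ner")

# Inspect the available splits and the first record to discover the schema
print(dataset)
first_split = list(dataset.keys())[0]
print(dataset[first_split][0])
```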
### Label Scheme
<details>
<summary>View label scheme (16 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `color`, `composition`, `context/emotion`, `context/era`, `context/weather`, `detail`, `influence/artist`, `influence/artwork`, `influence/genre`, `influence/repository`, `light`, `medium/illustration`, `medium/painting`, `medium/photography`, `medium/rendering`, `subject` |
</details> | teo-sanchez/diffusiondb_ner | [
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100M<n<1G",
"source_datasets:poloclub/diffusiondb",
"language:en",
"license:cc-by-3.0",
"stable diffusion",
"prompt engineering",
"prompts",
"research paper",
"region:us"
] | 2023-01-02T13:35:10+00:00 | {"language_creators": ["found"], "language": ["en"], "license": ["cc-by-3.0"], "multilinguality": ["monolingual"], "size_categories": ["100M<n<1G"], "source_datasets": ["poloclub/diffusiondb"], "pretty_name": "NER-DiffusionDB", "layout": "default", "title": "Name Entity Recognition of DiffusionDB", "nav_order": 1, "has_children": false, "tags": ["stable diffusion", "prompt engineering", "prompts", "research paper"]} | 2023-01-02T14:24:35+00:00 | [] | [
"en"
] | TAGS
#language_creators-found #multilinguality-monolingual #size_categories-100M<n<1G #source_datasets-poloclub/diffusiondb #language-English #license-cc-by-3.0 #stable diffusion #prompt engineering #prompts #research paper #region-us
| .
The entities comprise 7 main categories and 11 subcategories for a total of 16 categories, extracted from a topic analysis made with BERTopic.
The topic analysis can be explored in the following visualization.
### Label Scheme
View label scheme (16 labels for 1 components)
| [
"### Description\n\n\nExtended dataset infered by the name entity recognition model en\\_ner\\_prompting. This model has been trained on hand-annotated prompts from poloclub/diffusiondb.\nThis dataset is hence infered by this model and can comprise mistakes, especially on certain categories (cf. model card).\n\n\nThe entities comprise 7 main categories and 11 subcategories for a total of 16 categories, extracted from a topic analysis made with BERTopic.\nThe topic analysis can be explored the following visualization.",
"### Label Scheme\n\n\n\nView label scheme (16 labels for 1 components)"
] | [
"TAGS\n#language_creators-found #multilinguality-monolingual #size_categories-100M<n<1G #source_datasets-poloclub/diffusiondb #language-English #license-cc-by-3.0 #stable diffusion #prompt engineering #prompts #research paper #region-us \n",
"### Description\n\n\nExtended dataset infered by the name entity recognition model en\\_ner\\_prompting. This model has been trained on hand-annotated prompts from poloclub/diffusiondb.\nThis dataset is hence infered by this model and can comprise mistakes, especially on certain categories (cf. model card).\n\n\nThe entities comprise 7 main categories and 11 subcategories for a total of 16 categories, extracted from a topic analysis made with BERTopic.\nThe topic analysis can be explored the following visualization.",
"### Label Scheme\n\n\n\nView label scheme (16 labels for 1 components)"
] |
ddb0790ab02248267a37192dcbc741258601d758 |
This dataset was introduced in https://openreview.net/pdf?id=uDlkiCI5N7Y
The original source is here: https://drive.google.com/drive/folders/1VDnwRhmguvhKUCZ0_nv54RMGgqfYHGfz
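A minimal loading sketch (assuming the standard `datasets` API; per the metadata on this card, the dataset ships a single test split with document images and 12 class labels):
```python
from datasets import load_dataset

# Load the out-of-distribution document-image test set
dataset = load_dataset("jordyvl/RVL-CDIP-N", split="test")

# Each record holds a document image and one of 12 document-type labels
example = dataset[0]
print(dataset.features["label"].int2str(example["label"]))  # e.g. "invoice" or "letter"
```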
Many thanks to Stefan Larson! | jordyvl/RVL-CDIP-N | [
"license:cc-by-3.0",
"region:us"
] | 2023-01-02T14:13:33+00:00 | {"license": "cc-by-3.0", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "budget", "1": "email", "2": "form", "3": "handwritten", "4": "invoice", "5": "letter", "6": "memo", "7": "news_article", "8": "questionnaire", "9": "resume", "10": "scientific_publication", "11": "specification"}}}}], "splits": [{"name": "test", "num_bytes": 2272995060.864, "num_examples": 1002}], "download_size": 544832160, "dataset_size": 2272995060.864}} | 2023-01-02T14:25:47+00:00 | [] | [] | TAGS
#license-cc-by-3.0 #region-us
|
This dataset was created in URL
The original source is here: URL
Many thanks to Stefan Larson! | [] | [
"TAGS\n#license-cc-by-3.0 #region-us \n"
] |
d5876b14c70bd456709f78705de9bca920c87dcf | Dataset with Prolog code / query pairs and execution results. | alex43219/prolog-dataset-small-balanced | [
"task_categories:other",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:code",
"region:us"
] | 2023-01-02T14:16:52+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["crowdsourced"], "language": ["code"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["other"], "task_ids": [], "pretty_name": "Prolog dataset", "tags": []} | 2023-01-02T16:42:10+00:00 | [] | [
"code"
] | TAGS
#task_categories-other #annotations_creators-machine-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #language-code #region-us
| Dataset with Prolog code / query pairs and execution results. | [] | [
"TAGS\n#task_categories-other #annotations_creators-machine-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #language-code #region-us \n"
] |
5da7e3c8b920a586b8c36eecba4aaa0152a59a52 |
# Dataset Card for [financial-reports-sec]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Configurations](#dataset-configurations)
- [Usage](#usage)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Summary Statistics](#dataset-summary-statistics)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [References](#references)
- [Citation Information](#citation-information)
## Dataset Description
- **Point of Contact: Aman Khan**
### Dataset Summary
The dataset contains the annual reports of US public firms filing with the SEC EDGAR system from 1993 to 2020. Each annual report (**10K filing**) is broken into 20 sections. Each section is split into individual sentences. Sentiment labels are provided on a **per filing basis** from the market reaction around the filing date for 3 different time windows _[t-1, t+1]_, _[t-1, t+5]_ and _[t-1, t+30]_. Additional metadata for each filing is included in the dataset.
### Dataset Configurations
**Four** configurations are available:
- _**large_lite**_:
- Contains only the basic features needed. Extra metadata is omitted.
- Features List:
- **cik**
- **sentence**
- **section**
- **labels**
- **filingDate**
- **docID**
- **sentenceID**
- **sentenceCount**
- _**large_full**_:
- All features are included.
- Features List (excluding those already in the lite version above):
- **name**
- **tickers**
- **exchanges**
- **entityType**
- **sic**
- **stateOfIncorporation**
- **tickerCount**
- **acceptanceDateTime**
- **form**
- **reportDate**
- **returns**
- _**small_lite**_:
- Same as _**large_lite**_ version except that only (200,000/20,000/20,000) sentences are loaded for (train/test/validation) splits.
- _**small_full**_:
- Same as _**large_full**_ version except that only (200,000/20,000/20,000) sentences are loaded for (train/test/validation) splits.
### Usage
```python
import datasets
# Load the lite configuration of the dataset
raw_dataset = datasets.load_dataset("JanosAudran/financial-reports-sec", "large_lite")
# Load a specific split
raw_dataset = datasets.load_dataset("JanosAudran/financial-reports-sec", "small_full", split="train")
```
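Given the size of the large configurations, streaming can be preferable to a full download; a minimal sketch assuming the standard `datasets` streaming API:
```python
import datasets
from itertools import islice

# Stream the large configuration without downloading everything up front
streamed = datasets.load_dataset(
    "JanosAudran/financial-reports-sec", "large_lite", split="train", streaming=True
)

# Take a quick look at a handful of sentences
for record in islice(streamed, 3):
    print(record["section"], record["sentence"][:80])
```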
### Supported Tasks
The tasks the dataset can be used directly for includes:
- _Masked Language Modelling_
- A model like BERT can be fine-tuned on this corpus of financial text.
- _Sentiment Analysis_
- For each annual report a label ["positive", "negative"] is provided based on the market reaction around the filing date (refer to [Annotations](#annotations)).
- _Next Sentence Prediction/Sentence Order Prediction_
- Sentences extracted from the filings are in their original order and as such the dataset can be adapted very easily for either of these tasks.
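For the sentiment-analysis task listed above, a minimal adaptation sketch (assuming the `small_lite` configuration and the 5-day label; the mapping below is illustrative, not part of the dataset):
```python
import datasets

# Load the small configuration and keep only what a sentiment classifier needs:
# the sentence text and the filing-level label for the 5-day window
raw = datasets.load_dataset("JanosAudran/financial-reports-sec", "small_lite", split="train")

def to_classification(example):
    return {"text": example["sentence"], "label": example["labels"]["5d"]}

clf_dataset = raw.map(to_classification, remove_columns=raw.column_names)
print(clf_dataset[0])  # {'text': ..., 'label': 0 or 1}
```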
### Languages
All sentences are in English.
## Dataset Structure
### Data Instances
Refer to dataset preview.
### Data Fields
**Feature Name**
- Description
- Data type
- Example/Structure
**cik**
- 10 digit identifier used by SEC for a firm.
- _string_
- '0000001750'
**sentence**
- A single sentence from the 10-K filing.
- _string_
- 'The finance agreement is secured by a first priority security interest in all insurance policies, all unearned premium, return premiums, dividend payments and loss payments thereof.'
**section**
- The section of the 10-K filing in which the sentence is located.
- _ClassLabel_
- ```python
ClassLabel(names=['section_1', 'section_10', 'section_11', 'section_12', 'section_13', 'section_14', 'section_15', 'section_1A', 'section_1B', 'section_2','section_3', 'section_4', 'section_5', 'section_6', 'section_7', 'section_7A','section_8', 'section_9', 'section_9A', 'section_9B'], id=None)
```
**labels**
- The sentiment label for the entire filing (_**positive**_ or _**negative**_), based on different time windows.
- _Dict of ClassLables_
- ```python
{
'1d': ClassLabel(names=['positive', 'negative'], id=None),
'5d': ClassLabel(names=['positive', 'negative'], id=None),
'30d': ClassLabel(names=['positive', 'negative'], id=None)
}
```
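A short sketch of mapping the stored integer back to its class name (assuming `raw_dataset` was loaded as in the Usage section above):
```python
# Assuming raw_dataset was loaded as shown in the Usage section above
label_feature = raw_dataset["train"].features["labels"]["5d"]
example = raw_dataset["train"][0]
print(label_feature.int2str(example["labels"]["5d"]))  # "positive" or "negative"
```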
**filingDate**
- The date the 10-K report was filed with the SEC.
- _string_
- '2021-03-10'
**docID**
- Unique ID for identifying the exact 10-K filing. Unique across all configs and splits. Can be used to identify the document from which the sentence came.
- _string_
- '0000001750_10-K_2020'
**sentenceID**
- Unique ID for identifying the exact sentence. Unique across all configs and splits.
- _string_
- '0000001750_10-K_2020_section_1_100'
**sentenceCount**
- Integer identifying the running sequence for the sentence. Unique **only** for a given config and split.
- _int_
- 123
**name**
- The name of the filing entity
- _string_
- 'Investar Holding Corp'
**tickers**
- List of ticker symbols for the filing entity.
- _List of strings_
- ['ISTR']
**exchanges**
- List of exchanges for the filing entity.
- _List of strings_
- ['Nasdaq']
**entityType**
- The type of entity as identified in the 10-K filing.
- _string_
- 'operating'
**sic**
- Four digit SIC code for the filing entity.
- _string_
- '6022'
**stateOfIncorporation**
- Two character code for the state of incorporation for the filing entity.
- _string_
- 'LA'
**tickerCount**
- _**Internal use**_. Count of ticker symbols. Always 1.
- _int_
- 1
**acceptanceDateTime**
- The full timestamp of when the filing was accepted into the SEC EDGAR system.
- _string_
- '2021-03-10T14:26:11.000Z'
**form**
- The type of filing. Always 10-K in the dataset.
- _string_
- '10-K'
**reportDate**
- The last date in the fiscal year for which the entity is filing the report.
- _string_
- '2020-12-31'
**returns**
- _**Internal use**_. The prices and timestamps used to calculate the sentiment labels.
- _Dict_
- ```python
{'1d': {
'closePriceEndDate': 21.45746421813965,
'closePriceStartDate': 20.64960479736328,
'endDate': '2021-03-11T00:00:00-05:00',
'startDate': '2021-03-09T00:00:00-05:00',
'ret': 0.03912226855754852
},
'5d': {
'closePriceEndDate': 21.743167877197266,
'closePriceStartDate': 20.64960479736328,
'endDate': '2021-03-15T00:00:00-04:00',
'startDate': '2021-03-09T00:00:00-05:00',
'ret': 0.052958063781261444
},
'30d': {
'closePriceEndDate': 20.63919448852539,
'closePriceStartDate': 20.64960479736328,
'endDate': '2021-04-09T00:00:00-04:00',
'startDate': '2021-03-09T00:00:00-05:00',
'ret': -0.0005041408003307879}}
```
### Data Splits
| Config | train | validation | test |
| ---------- | ---------: | ---------: | --------: |
| large_full | 67,316,227 | 1,585,561 | 2,965,174 |
| large_lite | 67,316,227 | 1,585,561 | 2,965,174 |
| small_full | 200,000 | 20,000 | 20,000 |
| small_lite | 200,000 | 20,000 | 20,000 |
### Dataset Summary Statistics
| Variable | count | mean | std | min | 1% | 25% | 50% | 75% | 99% | max |
| :-------------------------------- | ---------: | ----: | -----: | -----: | -----: | -----: | ----: | ----: | ----: | --------: |
| Unique Firm Count | 4,677 | | | | | | | | | |
| Filings Count | 55,349 | | | | | | | | | |
| Sentence Count | 71,866,962 | | | | | | | | | |
| Filings per Firm | 4,677 | 12 | 9 | 1 | 1 | 4 | 11 | 19 | 27 | 28 |
| Return per Filing - 1d | 55,349 | 0.008 | 0.394 | -0.973 | -0.253 | -0.023 | 0 | 0.02 | 0.367 | 77.977 |
| Return per Filing - 5d | 55,349 | 0.013 | 0.584 | -0.99 | -0.333 | -0.034 | 0 | 0.031 | 0.5 | 100 |
| Return per Filing - 30d | 55,349 | 0.191 | 22.924 | -0.999 | -0.548 | -0.068 | 0.001 | 0.074 | 1 | 5,002.748 |
| Sentences per Filing | 55,349 | 1,299 | 654 | 0 | 110 | 839 | 1,268 | 1,681 | 3,135 | 8,286 |
| Sentences by Section - section_1 | 55,349 | 221 | 183 | 0 | 0 | 97 | 180 | 293 | 852 | 2,724 |
| Sentences by Section - section_10 | 55,349 | 24 | 40 | 0 | 0 | 4 | 6 | 20 | 173 | 1,594 |
| Sentences by Section - section_11 | 55,349 | 16 | 47 | 0 | 0 | 3 | 3 | 4 | 243 | 808 |
| Sentences by Section - section_12 | 55,349 | 9 | 14 | 0 | 0 | 3 | 4 | 8 | 56 | 1,287 |
| Sentences by Section - section_13 | 55,349 | 8 | 20 | 0 | 0 | 3 | 3 | 4 | 79 | 837 |
| Sentences by Section - section_14 | 55,349 | 22 | 93 | 0 | 0 | 3 | 3 | 8 | 413 | 3,536 |
| Sentences by Section - section_15 | 55,349 | 177 | 267 | 0 | 0 | 9 | 26 | 315 | 1104 | 4,140 |
| Sentences by Section - section_1A | 55,349 | 197 | 204 | 0 | 0 | 3 | 158 | 292 | 885 | 2,106 |
| Sentences by Section - section_1B | 55,349 | 4 | 31 | 0 | 0 | 1 | 3 | 3 | 13 | 2,414 |
| Sentences by Section - section_2 | 55,349 | 16 | 45 | 0 | 0 | 6 | 8 | 13 | 169 | 1,903 |
| Sentences by Section - section_3 | 55,349 | 14 | 36 | 0 | 0 | 4 | 5 | 12 | 121 | 2,326 |
| Sentences by Section - section_4 | 55,349 | 7 | 17 | 0 | 0 | 3 | 3 | 4 | 66 | 991 |
| Sentences by Section - section_5 | 55,349 | 20 | 41 | 0 | 0 | 10 | 15 | 21 | 87 | 3,816 |
| Sentences by Section - section_6 | 55,349 | 8 | 29 | 0 | 0 | 3 | 4 | 7 | 43 | 2,156 |
| Sentences by Section - section_7 | 55,349 | 265 | 198 | 0 | 0 | 121 | 246 | 373 | 856 | 4,539 |
| Sentences by Section - section_7A | 55,349 | 18 | 52 | 0 | 0 | 3 | 9 | 21 | 102 | 3,596 |
| Sentences by Section - section_8 | 55,349 | 257 | 296 | 0 | 0 | 3 | 182 | 454 | 1105 | 4,431 |
| Sentences by Section - section_9 | 55,349 | 5 | 33 | 0 | 0 | 3 | 3 | 4 | 18 | 2,330 |
| Sentences by Section - section_9A | 55,349 | 17 | 16 | 0 | 0 | 8 | 15 | 23 | 50 | 794 |
| Sentences by Section - section_9B | 55,349 | 4 | 18 | 0 | 0 | 2 | 3 | 4 | 23 | 813 |
| Word count per Sentence | 71,866,962 | 28 | 22 | 1 | 2 | 16 | 24 | 34 | 98 | 8,675 |
## Dataset Creation
### Curation Rationale
To create this dataset, multiple sources of information had to be cleaned, processed, and merged. Starting from the raw filings:
- Useful metadata about the filing and firm was added.
- Time windows around the filing date were carefully created.
- Stock price data was then added for the windows.
- Ambiguous/duplicate records were removed.
### Source Data
#### Initial Data Collection and Normalization
Initial data was collected and processed by the authors of the research paper [**EDGAR-CORPUS: Billions of Tokens Make The World Go Round**](#references). Market price and returns data was collected from Yahoo Finance. Additional metadata was collected from SEC.
#### Who are the source language producers?
US public firms filing with the SEC.
### Annotations
#### Annotation process
Labels for sentiment classification are based on buy-and-hold returns over a fixed time window around the filing date with the SEC, i.e. when the data becomes public. Returns are chosen for this purpose as they reflect the combined market intelligence in parsing the new information in the filings. For each filing date **t**, the stock price at **t-1** and **t+W** is used to calculate returns. If the returns are positive, a label of **positive** is assigned; otherwise a label of **negative** is assigned. Three different windows are used to assign the labels:
- **1d**: _[t-1, t+1]_
- **5d**: _[t-1, t+5]_
- **30d**: _[t-1, t+30]_
The windows are based on calendar days and are adjusted for weekends and holidays. The rationale behind using 3 windows is as follows:
- A very short window may not give enough time for all the information contained in the filing to be reflected in the stock price.
- A very long window may capture other events that drive stock price for the firm.
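A worked sketch of the labelling rule, using the `returns` example shown under Data Fields (the simple-return formula and the zero threshold are inferred from that example, not stated explicitly):
```python
# Reproduce the label assignment from the stored prices for one example filing
returns = {
    "1d":  {"closePriceStartDate": 20.6496, "closePriceEndDate": 21.4575},
    "5d":  {"closePriceStartDate": 20.6496, "closePriceEndDate": 21.7432},
    "30d": {"closePriceStartDate": 20.6496, "closePriceEndDate": 20.6392},
}

for window, prices in returns.items():
    ret = prices["closePriceEndDate"] / prices["closePriceStartDate"] - 1
    label = "positive" if ret > 0 else "negative"
    print(window, round(ret, 4), label)
# Expected output: 1d and 5d are positive, 30d is negative, matching the example filing
```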
#### Who are the annotators?
Financial market participants.
### Personal and Sensitive Information
The dataset contains public filings data from SEC. Market returns data was collected from Yahoo Finance.
## Considerations for Using the Data
### Social Impact of Dataset
Low to none.
### Discussion of Biases
The dataset covers financial information about public companies, and as such the tone and style of the text are in line with financial literature.
### Other Known Limitations
NA
## Additional Information
### Dataset Curators
**Aman Khan**
### Licensing Information
This dataset is provided under Apache 2.0.
### References
- Lefteris Loukas, Manos Fergadiotis, Ion Androutsopoulos, & Prodromos Malakasiotis. (2021). EDGAR-CORPUS [Data set]. Zenodo. https://doi.org/10.5281/zenodo.5589195
### Citation Information
Please use the following to cite this dataset:
```
@ONLINE{financial-reports-sec,
author = "Aman Khan",
title = "Financial Reports SEC",
url = "https://huggingface.co/datasets/JanosAudran/financial-reports-sec"
}
```
| JanosAudran/financial-reports-sec | [
"task_categories:fill-mask",
"task_categories:text-classification",
"task_ids:masked-language-modeling",
"task_ids:multi-class-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:extended|other",
"language:en",
"license:apache-2.0",
"'finance",
"financial",
"10-K",
"10K",
"10k",
"10-k",
"annual",
"reports",
"sec",
"edgar",
"sentiment",
"firm",
"public",
"us'",
"region:us"
] | 2023-01-02T15:21:14+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["10M<n<100M"], "source_datasets": ["extended|other"], "task_categories": ["fill-mask", "text-classification"], "task_ids": ["masked-language-modeling", "multi-class-classification", "sentiment-classification"], "pretty_name": "US public firm Annual Reports (10-K)", "tags": ["'finance", "financial", "10-K", "10K", "10k", "10-k", "annual", "reports", "sec", "edgar", "sentiment", "firm", "public", "us'"], "dataset_info": [{"config_name": "large_lite", "features": [{"name": "cik", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "section", "dtype": {"class_label": {"names": {"0": "section_1", "1": "section_10", "2": "section_11", "3": "section_12", "4": "section_13", "5": "section_14", "6": "section_15", "7": "section_1A", "8": "section_1B", "9": "section_2", "10": "section_3", "11": "section_4", "12": "section_5", "13": "section_6", "14": "section_7", "15": "section_7A", "16": "section_8", "17": "section_9", "18": "section_9A", "19": "section_9B"}}}}, {"name": "labels", "struct": [{"name": "1d", "dtype": {"class_label": {"names": {"0": "positive", "1": "negative"}}}}, {"name": "5d", "dtype": {"class_label": {"names": {"0": "positive", "1": "negative"}}}}, {"name": "30d", "dtype": {"class_label": {"names": {"0": "positive", "1": "negative"}}}}]}, {"name": "filingDate", "dtype": "string"}, {"name": "docID", "dtype": "string"}, {"name": "sentenceID", "dtype": "string"}, {"name": "sentenceCount", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 16424576472, "num_examples": 67316227}, {"name": "validation", "num_bytes": 423527281, "num_examples": 1585561}, {"name": "test", "num_bytes": 773116540, "num_examples": 2965174}], "download_size": 13362319126, "dataset_size": 17621220293}, {"config_name": "large_full", "features": [{"name": "cik", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "section", "dtype": {"class_label": {"names": {"0": "section_1", "1": "section_10", "2": "section_11", "3": "section_12", "4": "section_13", "5": "section_14", "6": "section_15", "7": "section_1A", "8": "section_1B", "9": "section_2", "10": "section_3", "11": "section_4", "12": "section_5", "13": "section_6", "14": "section_7", "15": "section_7A", "16": "section_8", "17": "section_9", "18": "section_9A", "19": "section_9B"}}}}, {"name": "labels", "struct": [{"name": "1d", "dtype": {"class_label": {"names": {"0": "positive", "1": "negative"}}}}, {"name": "5d", "dtype": {"class_label": {"names": {"0": "positive", "1": "negative"}}}}, {"name": "30d", "dtype": {"class_label": {"names": {"0": "positive", "1": "negative"}}}}]}, {"name": "filingDate", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "docID", "dtype": "string"}, {"name": "sentenceID", "dtype": "string"}, {"name": "sentenceCount", "dtype": "int64"}, {"name": "tickers", "list": "string"}, {"name": "exchanges", "list": "string"}, {"name": "entityType", "dtype": "string"}, {"name": "sic", "dtype": "string"}, {"name": "stateOfIncorporation", "dtype": "string"}, {"name": "tickerCount", "dtype": "int32"}, {"name": "acceptanceDateTime", "dtype": "string"}, {"name": "form", "dtype": "string"}, {"name": "reportDate", "dtype": "string"}, {"name": "returns", "struct": [{"name": "1d", "struct": [{"name": "closePriceEndDate", "dtype": "float32"}, {"name": "closePriceStartDate", 
"dtype": "float32"}, {"name": "endDate", "dtype": "string"}, {"name": "startDate", "dtype": "string"}, {"name": "ret", "dtype": "float32"}]}, {"name": "5d", "struct": [{"name": "closePriceEndDate", "dtype": "float32"}, {"name": "closePriceStartDate", "dtype": "float32"}, {"name": "endDate", "dtype": "string"}, {"name": "startDate", "dtype": "string"}, {"name": "ret", "dtype": "float32"}]}, {"name": "30d", "struct": [{"name": "closePriceEndDate", "dtype": "float32"}, {"name": "closePriceStartDate", "dtype": "float32"}, {"name": "endDate", "dtype": "string"}, {"name": "startDate", "dtype": "string"}, {"name": "ret", "dtype": "float32"}]}]}], "splits": [{"name": "train", "num_bytes": 39306095718, "num_examples": 67316227}, {"name": "validation", "num_bytes": 964030458, "num_examples": 1585561}, {"name": "test", "num_bytes": 1785383996, "num_examples": 2965174}], "download_size": 13362319126, "dataset_size": 42055510172}, {"config_name": "small_full", "features": [{"name": "cik", "dtype": "string"}, {"name": "sentence", "dtype": "string"}, {"name": "section", "dtype": {"class_label": {"names": {"0": "section_1", "1": "section_1A", "2": "section_1B", "3": "section_2", "4": "section_3", "5": "section_4", "6": "section_5", "7": "section_6", "8": "section_7", "9": "section_7A", "10": "section_8", "11": "section_9", "12": "section_9A", "13": "section_9B", "14": "section_10", "15": "section_11", "16": "section_12", "17": "section_13", "18": "section_14", "19": "section_15"}}}}, {"name": "labels", "struct": [{"name": "1d", "dtype": {"class_label": {"names": {"0": "positive", "1": "negative"}}}}, {"name": "5d", "dtype": {"class_label": {"names": {"0": "positive", "1": "negative"}}}}, {"name": "30d", "dtype": {"class_label": {"names": {"0": "positive", "1": "negative"}}}}]}, {"name": "filingDate", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "docID", "dtype": "string"}, {"name": "sentenceID", "dtype": "string"}, {"name": "sentenceCount", "dtype": "int64"}, {"name": "tickers", "list": "string"}, {"name": "exchanges", "list": "string"}, {"name": "entityType", "dtype": "string"}, {"name": "sic", "dtype": "string"}, {"name": "stateOfIncorporation", "dtype": "string"}, {"name": "tickerCount", "dtype": "int32"}, {"name": "acceptanceDateTime", "dtype": "string"}, {"name": "form", "dtype": "string"}, {"name": "reportDate", "dtype": "string"}, {"name": "returns", "struct": [{"name": "1d", "struct": [{"name": "closePriceEndDate", "dtype": "float32"}, {"name": "closePriceStartDate", "dtype": "float32"}, {"name": "endDate", "dtype": "string"}, {"name": "startDate", "dtype": "string"}, {"name": "ret", "dtype": "float32"}]}, {"name": "5d", "struct": [{"name": "closePriceEndDate", "dtype": "float32"}, {"name": "closePriceStartDate", "dtype": "float32"}, {"name": "endDate", "dtype": "string"}, {"name": "startDate", "dtype": "string"}, {"name": "ret", "dtype": "float32"}]}, {"name": "30d", "struct": [{"name": "closePriceEndDate", "dtype": "float32"}, {"name": "closePriceStartDate", "dtype": "float32"}, {"name": "endDate", "dtype": "string"}, {"name": "startDate", "dtype": "string"}, {"name": "ret", "dtype": "float32"}]}]}], "splits": [{"name": "train", "num_bytes": 128731540, "num_examples": 200000}, {"name": "validation", "num_bytes": 13411689, "num_examples": 20000}, {"name": "test", "num_bytes": 13188331, "num_examples": 20000}], "download_size": 42764380, "dataset_size": 155331560}, {"config_name": "small_lite", "features": [{"name": "cik", "dtype": "string"}, {"name": "sentence", "dtype": 
"string"}, {"name": "section", "dtype": {"class_label": {"names": {"0": "section_1", "1": "section_1A", "2": "section_1B", "3": "section_2", "4": "section_3", "5": "section_4", "6": "section_5", "7": "section_6", "8": "section_7", "9": "section_7A", "10": "section_8", "11": "section_9", "12": "section_9A", "13": "section_9B", "14": "section_10", "15": "section_11", "16": "section_12", "17": "section_13", "18": "section_14", "19": "section_15"}}}}, {"name": "labels", "struct": [{"name": "1d", "dtype": {"class_label": {"names": {"0": "positive", "1": "negative"}}}}, {"name": "5d", "dtype": {"class_label": {"names": {"0": "positive", "1": "negative"}}}}, {"name": "30d", "dtype": {"class_label": {"names": {"0": "positive", "1": "negative"}}}}]}, {"name": "filingDate", "dtype": "string"}, {"name": "docID", "dtype": "string"}, {"name": "sentenceID", "dtype": "string"}, {"name": "sentenceCount", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 60681688, "num_examples": 200000}, {"name": "validation", "num_bytes": 6677389, "num_examples": 20000}, {"name": "test", "num_bytes": 6351730, "num_examples": 20000}], "download_size": 42764380, "dataset_size": 73710807}]} | 2023-01-06T17:44:08+00:00 | [] | [
"en"
] | TAGS
#task_categories-fill-mask #task_categories-text-classification #task_ids-masked-language-modeling #task_ids-multi-class-classification #task_ids-sentiment-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10M<n<100M #source_datasets-extended|other #language-English #license-apache-2.0 #'finance #financial #10-K #10K #10k #10-k #annual #reports #sec #edgar #sentiment #firm #public #us' #region-us
| Dataset Card for [financial-reports-sec]
========================================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
+ Dataset Configurations
+ Usage
+ Supported Tasks
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
+ Dataset Summary Statistics
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ References
+ Citation Information
Dataset Description
-------------------
* Point of Contact: Aman Khan
### Dataset Summary
The dataset contains the annual reports of US public firms filing with the SEC EDGAR system from 1993 to 2020. Each annual report (10K filing) is broken into 20 sections. Each section is split into individual sentences. Sentiment labels are provided on a per filing basis from the market reaction around the filing date for 3 different time windows *[t-1, t+1]*, *[t-1, t+5]* and *[t-1, t+30]*. Additional metadata for each filing is included in the dataset.
### Dataset Configurations
Four configurations are available:
* *large\_lite*:
+ Contains only the basic features needed. Extra metadata is omitted.
+ Features List:
- cik
- sentence
- section
- labels
- filingDate
- docID
- sentenceID
- sentenceCount
* *large\_full*:
+ All features are included.
+ Features List (excluding those already in the lite version above):
- name
- tickers
- exchanges
- entityType
- sic
- stateOfIncorporation
- tickerCount
- acceptanceDateTime
- form
- reportDate
- returns
* *small\_lite*:
+ Same as *large\_lite* version except that only (200,000/20,000/20,000) sentences are loaded for (train/test/validation) splits.
* *small\_full*:
+ Same as *large\_full* version except that only (200,000/20,000/20,000) sentences are loaded for (train/test/validation) splits.
### Usage
### Supported Tasks
The tasks the dataset can be used directly for includes:
* *Masked Language Modelling*
+ A model like BERT can be fine-tuned on this corpus of financial text.
* *Sentiment Analysis*
+ For each annual report a label ["positive", "negative"] is provided based on the market reaction around the filing date (refer to Annotations).
* *Next Sentence Prediction/Sentence Order Prediction*
+ Sentences extracted from the filings are in their original order and as such the dataset can be adapted very easily for either of these tasks.
### Languages
All sentences are in English.
Dataset Structure
-----------------
### Data Instances
Refer to dataset preview.
### Data Fields
Feature Name
* Description
* Data type
* Example/Structure
cik
* 10 digit identifier used by SEC for a firm.
* *string*
* '0000001750'
sentence
* A single sentence from the 10-K filing.
* *string*
* 'The finance agreement is secured by a first priority security interest in all insurance policies, all unearned premium, return premiums, dividend payments and loss payments thereof.'
section
* The section of the 10-K filing in which the sentence is located.
* *ClassLabel*
*
labels
* The sentiment label for the entire filing (*positive* or *negative*), based on different time windows.
* *Dict of ClassLables*
*
filingDate
* The date the 10-K report was filed with the SEC.
* *string*
* '2021-03-10'
docID
* Unique ID for identifying the exact 10-K filing. Unique across all configs and splits. Can be used to identify the document from which the sentence came.
* *string*
* '0000001750\_10-K\_2020'
sentenceID
* Unique ID for identifying the exact sentence. Unique across all configs and splits.
* *string*
* '0000001750\_10-K\_2020\_section\_1\_100'
sentenceCount
* Integer identifying the running sequence for the sentence. Unique only for a given config and split.
* *int*
* 123
name
* The name of the filing entity
* *string*
* 'Investar Holding Corp'
tickers
* List of ticker symbols for the filing entity.
* *List of strings*
* ['ISTR']
exchanges
* List of exchanges for the filing entity.
* *List of strings*
* ['Nasdaq']
entityType
* The type of entity as identified in the 10-K filing.
* *string*
* 'operating'
sic
* Four digit SIC code for the filing entity.
* *string*
* '6022'
stateOfIncorporation
* Two character code for the state of incorporation for the filing entity.
* *string*
* 'LA'
tickerCount
* *Internal use*. Count of ticker symbols. Always 1.
* *int*
* 1
acceptanceDateTime
* The full timestamp of when the filing was accepted into the SEC EDGAR system.
* *string*
* '2021-03-10T14:26:11.000Z'
form
* The type of filing. Always 10-K in the dataset.
* *string*
* '10-K'
reportDate
* The last date in the fiscal year for which the entity is filing the report.
* *string*
* '2020-12-31'
returns
* *Internal use*. The prices and timestamps used to calculate the sentiment labels.
* *Dict*
*
### Data Splits
### Dataset Summary Statistics
Dataset Creation
----------------
### Curation Rationale
To create this dataset, multiple sources of information had to be cleaned, processed, and merged. Starting from the raw filings:
* Useful metadata about the filing and firm was added.
* Time windows around the filing date were carefully created.
* Stock price data was then added for the windows.
* Ambiguous/duplicate records were removed.
### Source Data
#### Initial Data Collection and Normalization
Initial data was collected and processed by the authors of the research paper EDGAR-CORPUS: Billions of Tokens Make The World Go Round. Market price and returns data was collected from Yahoo Finance. Additional metadata was collected from SEC.
#### Who are the source language producers?
US public firms filing with the SEC.
### Annotations
#### Annotation process
Labels for sentiment classification are based on buy-and-hold returns over a fixed time window around the filing date with the SEC, i.e. when the data becomes public. Returns are chosen for this purpose as they reflect the combined market intelligence in parsing the new information in the filings. For each filing date t, the stock price at t-1 and t+W is used to calculate returns. If the returns are positive, a label of positive is assigned; otherwise a label of negative is assigned. Three different windows are used to assign the labels:
* 1d: *[t-1, t+1]*
* 5d: *[t-1, t+5]*
* 30d: *[t-1, t+30]*
The windows are based on calendar days and are adjusted for weekends and holidays. The rationale behind using 3 windows is as follows:
* A very short window may not give enough time for all the information contained in the filing to be reflected in the stock price.
* A very long window may capture other events that drive stock price for the firm.
#### Who are the annotators?
Financial market participants.
### Personal and Sensitive Information
The dataset contains public filings data from SEC. Market returns data was collected from Yahoo Finance.
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
Low to none.
### Discussion of Biases
The dataset covers financial information about public companies, and as such the tone and style of the text are in line with financial literature.
### Other Known Limitations
NA
Additional Information
----------------------
### Dataset Curators
Aman Khan
### Licensing Information
This dataset is provided under Apache 2.0.
### References
* Lefteris Loukas, Manos Fergadiotis, Ion Androutsopoulos, & Prodromos Malakasiotis. (2021). EDGAR-CORPUS [Data set]. Zenodo. URL
Please use the following to cite this dataset:
| [
"### Dataset Summary\n\n\nThe dataset contains the annual report of US public firms filing with the SEC EDGAR system from 1993-2020. Each annual report (10K filing) is broken into 20 sections. Each section is split into individual sentences. Sentiment labels are provided on a per filing basis from the market reaction around the filing date for 3 different time windows *[t-1, t+1]*, *[t-1, t+5]* and *[t-1, t+30]*. Additional metadata for each filing is included in the dataset.",
"### Dataset Configurations\n\n\nFour configurations are available:\n\n\n* *large\\_lite*:\n\t+ Contains only the basic features needed. Extra metadata is ommitted.\n\t+ Features List:\n\t\t- cik\n\t\t- sentence\n\t\t- section\n\t\t- labels\n\t\t- filingDate\n\t\t- docID\n\t\t- sentenceID\n\t\t- sentenceCount\n* *large\\_full*:\n\t+ All features are included.\n\t+ Features List (excluding those already in the lite verison above):\n\t\t- name\n\t\t- tickers\n\t\t- exchanges\n\t\t- entityType\n\t\t- sic\n\t\t- stateOfIncorporation\n\t\t- tickerCount\n\t\t- acceptanceDateTime\n\t\t- form\n\t\t- reportDate\n\t\t- returns\n* *small\\_lite*:\n\t+ Same as *large\\_lite* version except that only (200,000/20,000/20,000) sentences are loaded for (train/test/validation) splits.\n* *small\\_full*:\n\t+ Same as *large\\_full* version except that only (200,000/20,000/20,000) sentences are loaded for (train/test/validation) splits.",
"### Usage",
"### Supported Tasks\n\n\nThe tasks the dataset can be used directly for includes:\n\n\n* *Masked Language Modelling*\n\t+ A model like BERT can be fine-tuned on this corpus of financial text.\n* *Sentiment Analysis*\n\t+ For each annual report a label [\"positive\", \"negative\"] is provided based on the market reaction around the filing date (refer to Annotations).\n* *Next Sentence Prediction/Sentence Order Prediction*\n\t+ Sentences extracted from the filings are in their original order and as such the dataset can be adapted very easily for either of these tasks.",
"### Languages\n\n\nAll sentences are in English.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nRefer to dataset preview.",
"### Data Fields\n\n\nFeature Name\n\n\n* Description\n* Data type\n* Example/Structure\n\n\ncik\n\n\n* 10 digit identifier used by SEC for a firm.\n* *string*\n* '0000001750'\n\n\nsentence\n\n\n* A single sentence from the 10-K filing.\n* *string*\n* 'The finance agreement is secured by a first priority security interest in all insurance policies, all unearned premium, return premiums, dividend payments and loss payments thereof.'\n\n\nsection\n\n\n* The section of the 10-K filing the sentence is located.\n* *ClassLabel*\n* \n\n\nlabels\n\n\n* The sentiment label for the entire filing (*positve* or *negative*) based on different time windows.\n* *Dict of ClassLables*\n* \n\n\nfilingDate\n\n\n* The date the 10-K report was filed with the SEC.\n* *string*\n* '2021-03-10'\n\n\ndocID\n\n\n* Unique ID for identifying the exact 10-K filing. Unique across all configs and splits. Can be used to identify the document from which the sentence came from.\n* *string*\n* '0000001750\\_10-K\\_2020'\n\n\nsentenceID\n\n\n* Unique ID for identifying the exact sentence. Unique across all configs and splits.\n* *string*\n* '0000001750\\_10-K\\_2020\\_section\\_1\\_100'\n\n\nsentenceCount\n\n\n* Integer identiying the running sequence for the sentence. Unique only for a given config and split.\n* *string*\n* 123\n\n\nname\n\n\n* The name of the filing entity\n* *string*\n* 'Investar Holding Corp'\n\n\ntickers\n\n\n* List of ticker symbols for the filing entity.\n* *List of strings*\n* ['ISTR']\n\n\nexchanges\n\n\n* List of exchanges for the filing entity.\n* *List of strings*\n* ['Nasdaq']\n\n\nentityType\n\n\n* The type of entity as identified in the 10-K filing.\n* *string*\n* 'operating'\n\n\nsic\n\n\n* Four digit SIC code for the filing entity.\n* *string*\n* '6022'\n\n\nstateOfIncorporation\n\n\n* Two character code for the state of incorporation for the filing entity.\n* *string*\n* 'LA'\n\n\ntickerCount\n\n\n* *Internal use*. Count of ticker symbols. Always 1.\n* *int*\n* 1\n\n\nacceptanceDateTime\n\n\n* The full timestamp of when the filing was accepted into the SEC EDGAR system.\n* *string*\n* '2021-03-10T14:26:11.000Z'\n\n\nform\n\n\n* The type of filing. Always 10-K in the dataset.\n* *string*\n* '10-K'\n\n\nreportDate\n\n\n* The last date in the fiscal year for which the entity is filing the report.\n* *string*\n* '2020-12-31'\n\n\nreturns\n\n\n* *Internal use*. The prices and timestamps used to calculate the sentiment labels.\n* *Dict*\n*",
"### Data Splits",
"### Dataset Summary Statistics\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nTo create this dataset multiple sources of information have to be cleaned and processed for data merging. Starting from the raw filings:\n\n\n* Useful metadata about the filing and firm was added.\n* Time windows around the filing date were carefully created.\n* Stock price data was then added for the windows.\n* Ambiguous/duplicate records were removed.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nInitial data was collected and processed by the authors of the research paper EDGAR-CORPUS: Billions of Tokens Make The World Go Round. Market price and returns data was collected from Yahoo Finance. Additional metadata was collected from SEC.",
"#### Who are the source language producers?\n\n\nUS public firms filing with the SEC.",
"### Annotations",
"#### Annotation process\n\n\nLabels for sentiment classification are based on buy-and-hold returns over a fixed time window around the filing date with the SEC i.e. when the data becomes public. Returns are chosen for this process as it reflects the combined market intelligence at parsing the new information in the filings. For each filing date t the stock price at t-1 and t+W is used to calculate returns. If, the returns are positive a label of positive is assigned else a label of negative is assigned. Three different windows are used to assign the labels:\n\n\n* 1d: *[t-1, t+1]*\n* 5d: *[t-1, t+5]*\n* 30d: *[t-1, t+30]*\n\n\nThe windows are based on calendar days and are adjusted for weekends and holidays. The rationale behind using 3 windows is as follows:\n\n\n* A very short window may not give enough time for all the information contained in the filing to be reflected in the stock price.\n* A very long window may capture other events that drive stock price for the firm.",
"#### Who are the annotators?\n\n\nFinancial market participants.",
"### Personal and Sensitive Information\n\n\nThe dataset contains public filings data from SEC. Market returns data was collected from Yahoo Finance.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nLow to none.",
"### Discussion of Biases\n\n\nThe dataset is about financial information of public companies and as such the tone and style of text is in line with financial literature.",
"### Other Known Limitations\n\n\nNA\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nAman Khan",
"### Licensing Information\n\n\nThis dataset is provided under Apache 2.0.",
"### References\n\n\n* Lefteris Loukas, Manos Fergadiotis, Ion Androutsopoulos, & Prodromos Malakasiotis. (2021). EDGAR-CORPUS [Data set]. Zenodo. URL\n\n\nPlease use the following to cite this dataset:"
] | [
"TAGS\n#task_categories-fill-mask #task_categories-text-classification #task_ids-masked-language-modeling #task_ids-multi-class-classification #task_ids-sentiment-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10M<n<100M #source_datasets-extended|other #language-English #license-apache-2.0 #'finance #financial #10-K #10K #10k #10-k #annual #reports #sec #edgar #sentiment #firm #public #us' #region-us \n",
"### Dataset Summary\n\n\nThe dataset contains the annual report of US public firms filing with the SEC EDGAR system from 1993-2020. Each annual report (10K filing) is broken into 20 sections. Each section is split into individual sentences. Sentiment labels are provided on a per filing basis from the market reaction around the filing date for 3 different time windows *[t-1, t+1]*, *[t-1, t+5]* and *[t-1, t+30]*. Additional metadata for each filing is included in the dataset.",
"### Dataset Configurations\n\n\nFour configurations are available:\n\n\n* *large\\_lite*:\n\t+ Contains only the basic features needed. Extra metadata is ommitted.\n\t+ Features List:\n\t\t- cik\n\t\t- sentence\n\t\t- section\n\t\t- labels\n\t\t- filingDate\n\t\t- docID\n\t\t- sentenceID\n\t\t- sentenceCount\n* *large\\_full*:\n\t+ All features are included.\n\t+ Features List (excluding those already in the lite verison above):\n\t\t- name\n\t\t- tickers\n\t\t- exchanges\n\t\t- entityType\n\t\t- sic\n\t\t- stateOfIncorporation\n\t\t- tickerCount\n\t\t- acceptanceDateTime\n\t\t- form\n\t\t- reportDate\n\t\t- returns\n* *small\\_lite*:\n\t+ Same as *large\\_lite* version except that only (200,000/20,000/20,000) sentences are loaded for (train/test/validation) splits.\n* *small\\_full*:\n\t+ Same as *large\\_full* version except that only (200,000/20,000/20,000) sentences are loaded for (train/test/validation) splits.",
"### Usage",
"### Supported Tasks\n\n\nThe tasks the dataset can be used directly for includes:\n\n\n* *Masked Language Modelling*\n\t+ A model like BERT can be fine-tuned on this corpus of financial text.\n* *Sentiment Analysis*\n\t+ For each annual report a label [\"positive\", \"negative\"] is provided based on the market reaction around the filing date (refer to Annotations).\n* *Next Sentence Prediction/Sentence Order Prediction*\n\t+ Sentences extracted from the filings are in their original order and as such the dataset can be adapted very easily for either of these tasks.",
"### Languages\n\n\nAll sentences are in English.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nRefer to dataset preview.",
"### Data Fields\n\n\nFeature Name\n\n\n* Description\n* Data type\n* Example/Structure\n\n\ncik\n\n\n* 10 digit identifier used by SEC for a firm.\n* *string*\n* '0000001750'\n\n\nsentence\n\n\n* A single sentence from the 10-K filing.\n* *string*\n* 'The finance agreement is secured by a first priority security interest in all insurance policies, all unearned premium, return premiums, dividend payments and loss payments thereof.'\n\n\nsection\n\n\n* The section of the 10-K filing the sentence is located.\n* *ClassLabel*\n* \n\n\nlabels\n\n\n* The sentiment label for the entire filing (*positve* or *negative*) based on different time windows.\n* *Dict of ClassLables*\n* \n\n\nfilingDate\n\n\n* The date the 10-K report was filed with the SEC.\n* *string*\n* '2021-03-10'\n\n\ndocID\n\n\n* Unique ID for identifying the exact 10-K filing. Unique across all configs and splits. Can be used to identify the document from which the sentence came from.\n* *string*\n* '0000001750\\_10-K\\_2020'\n\n\nsentenceID\n\n\n* Unique ID for identifying the exact sentence. Unique across all configs and splits.\n* *string*\n* '0000001750\\_10-K\\_2020\\_section\\_1\\_100'\n\n\nsentenceCount\n\n\n* Integer identiying the running sequence for the sentence. Unique only for a given config and split.\n* *string*\n* 123\n\n\nname\n\n\n* The name of the filing entity\n* *string*\n* 'Investar Holding Corp'\n\n\ntickers\n\n\n* List of ticker symbols for the filing entity.\n* *List of strings*\n* ['ISTR']\n\n\nexchanges\n\n\n* List of exchanges for the filing entity.\n* *List of strings*\n* ['Nasdaq']\n\n\nentityType\n\n\n* The type of entity as identified in the 10-K filing.\n* *string*\n* 'operating'\n\n\nsic\n\n\n* Four digit SIC code for the filing entity.\n* *string*\n* '6022'\n\n\nstateOfIncorporation\n\n\n* Two character code for the state of incorporation for the filing entity.\n* *string*\n* 'LA'\n\n\ntickerCount\n\n\n* *Internal use*. Count of ticker symbols. Always 1.\n* *int*\n* 1\n\n\nacceptanceDateTime\n\n\n* The full timestamp of when the filing was accepted into the SEC EDGAR system.\n* *string*\n* '2021-03-10T14:26:11.000Z'\n\n\nform\n\n\n* The type of filing. Always 10-K in the dataset.\n* *string*\n* '10-K'\n\n\nreportDate\n\n\n* The last date in the fiscal year for which the entity is filing the report.\n* *string*\n* '2020-12-31'\n\n\nreturns\n\n\n* *Internal use*. The prices and timestamps used to calculate the sentiment labels.\n* *Dict*\n*",
"### Data Splits",
"### Dataset Summary Statistics\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nTo create this dataset multiple sources of information have to be cleaned and processed for data merging. Starting from the raw filings:\n\n\n* Useful metadata about the filing and firm was added.\n* Time windows around the filing date were carefully created.\n* Stock price data was then added for the windows.\n* Ambiguous/duplicate records were removed.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nInitial data was collected and processed by the authors of the research paper EDGAR-CORPUS: Billions of Tokens Make The World Go Round. Market price and returns data was collected from Yahoo Finance. Additional metadata was collected from SEC.",
"#### Who are the source language producers?\n\n\nUS public firms filing with the SEC.",
"### Annotations",
"#### Annotation process\n\n\nLabels for sentiment classification are based on buy-and-hold returns over a fixed time window around the filing date with the SEC i.e. when the data becomes public. Returns are chosen for this process as it reflects the combined market intelligence at parsing the new information in the filings. For each filing date t the stock price at t-1 and t+W is used to calculate returns. If, the returns are positive a label of positive is assigned else a label of negative is assigned. Three different windows are used to assign the labels:\n\n\n* 1d: *[t-1, t+1]*\n* 5d: *[t-1, t+5]*\n* 30d: *[t-1, t+30]*\n\n\nThe windows are based on calendar days and are adjusted for weekends and holidays. The rationale behind using 3 windows is as follows:\n\n\n* A very short window may not give enough time for all the information contained in the filing to be reflected in the stock price.\n* A very long window may capture other events that drive stock price for the firm.",
"#### Who are the annotators?\n\n\nFinancial market participants.",
"### Personal and Sensitive Information\n\n\nThe dataset contains public filings data from SEC. Market returns data was collected from Yahoo Finance.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nLow to none.",
"### Discussion of Biases\n\n\nThe dataset is about financial information of public companies and as such the tone and style of text is in line with financial literature.",
"### Other Known Limitations\n\n\nNA\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nAman Khan",
"### Licensing Information\n\n\nThis dataset is provided under Apache 2.0.",
"### References\n\n\n* Lefteris Loukas, Manos Fergadiotis, Ion Androutsopoulos, & Prodromos Malakasiotis. (2021). EDGAR-CORPUS [Data set]. Zenodo. URL\n\n\nPlease use the following to cite this dataset:"
] |
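As a hedged supplement to the card above (whose Usage section is empty), the sketch below shows how one of the named configurations might be loaded with the `datasets` library, and it restates the buy-and-hold labelling rule from the Annotation process section. The repository ID is a placeholder, since the Hub ID is not visible in this excerpt, and the helper function is an illustration rather than the authors' exact pipeline.

```python
from datasets import load_dataset

# Placeholder repository ID -- substitute the real Hub ID of this dataset.
REPO_ID = "<org>/<financial-10k-sentences>"

# "large_lite" is the lightweight configuration described above; the small
# configs load only 200,000/20,000/20,000 sentences per split.
train = load_dataset(REPO_ID, "large_lite", split="train")
print(train[0]["sentence"], train[0]["labels"])

def window_label(price_t_minus_1: float, price_t_plus_w: float) -> str:
    """Sketch of the annotation rule: a positive buy-and-hold return over
    [t-1, t+W] maps to "positive", otherwise "negative"."""
    buy_and_hold = (price_t_plus_w - price_t_minus_1) / price_t_minus_1
    return "positive" if buy_and_hold > 0 else "negative"

# e.g. the 5d window [t-1, t+5]
print(window_label(100.0, 103.5))  # -> "positive"
```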
d23f740082eba235d37aa73b33b1635c6f5ee8fe | # Dataset Card for "test_repo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | pyakymenko/test_repo | [
"region:us"
] | 2023-01-02T15:25:54+00:00 | {"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 167117.0, "num_examples": 3}], "download_size": 162079, "dataset_size": 167117.0}} | 2023-01-02T15:26:01+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "test_repo"
More Information needed | [
"# Dataset Card for \"test_repo\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"test_repo\"\n\nMore Information needed"
] |
f1c139eea788f137cf0e20f07fdf0fadee13784e | This dataset contains only the 100-hour train split of librispeech_clean. The functionality of librispeech-other, test-clean, and dev-clean is unchanged | rohitp1/librispeech_asr_clean | [
"license:cc-by-4.0",
"region:us"
] | 2023-01-02T15:42:43+00:00 | {"license": "cc-by-4.0"} | 2023-01-03T18:08:17+00:00 | [] | [] | TAGS
#license-cc-by-4.0 #region-us
| This dataset contains only the 100-hour train split of librispeech_clean. The functionality of librispeech-other, test-clean, and dev-clean is unchanged | [] | [
"TAGS\n#license-cc-by-4.0 #region-us \n"
] |
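A hedged loading sketch for the record above. The configuration and split names are assumptions carried over from the upstream `librispeech_asr` dataset; the card itself does not state them, so they may need adjusting.

```python
from datasets import load_dataset

# Assumption: the loading script mirrors librispeech_asr, i.e. a "clean"
# config whose 100-hour training data lives in a "train.100" split.
ds = load_dataset("rohitp1/librispeech_asr_clean", "clean", split="train.100")

print(ds)            # inspect the features and the number of examples
print(ds[0].keys())  # upstream librispeech_asr exposes audio, text, speaker_id, ...
```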
866af329e5cf8a061d8e991f6539b16f24ae3e71 | # Dataset Card for "dsn"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | styskin/dsn | [
"region:us"
] | 2023-01-02T15:57:13+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 2779967.0, "num_examples": 100}], "download_size": 2726219, "dataset_size": 2779967.0}} | 2023-01-02T16:19:56+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "dsn"
More Information needed | [
"# Dataset Card for \"dsn\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"dsn\"\n\nMore Information needed"
] |
879b465814c965cb874747a97e58d14e7e9f7f0f | # Dataset Card for "blip-preprocessed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | peeper/blip-preprocessed | [
"region:us"
] | 2023-01-02T17:45:30+00:00 | {"dataset_info": {"features": [{"name": "labels", "sequence": "int64"}, {"name": "pixel_values", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 7522975512, "num_examples": 4238}, {"name": "test", "num_bytes": 2508250212, "num_examples": 1413}], "download_size": 2847165063, "dataset_size": 10031225724}} | 2023-01-03T10:37:36+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "blip-preprocessed"
More Information needed | [
"# Dataset Card for \"blip-preprocessed\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"blip-preprocessed\"\n\nMore Information needed"
] |
b0b0fd8e86e0179c81a84e1651d6f4502230ce5e |
# Shylily Character Embedding / Textual Inversion
<img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/shylily/resolve/main/shylily_showcase.png"/>
## Disclaimer
This is an embedding based on the VTuber Shylily, which can be found / watched on Twitch:
https://www.twitch.tv/shylily
## Usage
To use this embedding you have to download the file as well as drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"shy_lily"```
Personally, I would recommend to use my embeddings with a strength of 0.8, like ```"(shy_lily:0.8)"```, but in this case the embedding basically works on almost all strength.
I hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: "Nerfgun3#7508"
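For reference, a rough sketch of loading the same embedding outside the webui with the `diffusers` library is shown below. This is not part of the original card: the base model ID, the local file name, and the float16/CUDA settings are assumptions, and only the `shy_lily` trigger token comes from this page.

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed base model; any SD 1.x checkpoint compatible with the embedding should work.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the downloaded embedding file (file name assumed) and bind it to the
# "shy_lily" token used in the prompts on this page.
pipe.load_textual_inversion("./shy_lily.pt", token="shy_lily")

image = pipe("shy_lily, portrait, highly detailed").images[0]
image.save("shy_lily_sample.png")
```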
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | Nerfgun3/shylily | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"image-to-image",
"region:us"
] | 2023-01-02T18:45:06+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "thumbnail": "https://huggingface.co/datasets/Nerfgun3/shylily/resolve/main/shylily_showcase.png", "tags": ["stable-diffusion", "text-to-image", "image-to-image"], "inference": false} | 2023-01-02T18:49:16+00:00 | [] | [
"en"
] | TAGS
#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #image-to-image #region-us
|
# Shylily Character Embedding / Textual Inversion
<img alt="Showcase" src="URL
## Disclaimer
This is an embedding based on the VTuber Shylily, which can be found / watched on Twitch:
URL
## Usage
To use this embedding you have to download the file as well as drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt:
Personally, I would recommend to use my embeddings with a strength of 0.8, like , but in this case the embedding basically works on almost all strength.
I hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: "Nerfgun3#7508"
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license here | [
"# Shylily Character Embedding / Textual Inversion\n\n<img alt=\"Showcase\" src=\"URL",
"## Disclaimer\n\nThis is an embedding based on the VTuber Shylily, which can be found / watched on Twitch:\nURL",
"## Usage\n\nTo use this embedding you have to download the file aswell as drop it into the \"\\stable-diffusion-webui\\embeddings\" folder\n\nTo use it in a prompt: \n\nPersonally, I would recommend to use my embeddings with a strength of 0.8, like , but in this case the embedding basically works on almost all strength.\n\nI hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: \"Nerfgun3#7508\"",
"## License\n\nThis embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.\nThe CreativeML OpenRAIL License specifies: \n\n1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content \n2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license\n3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)\nPlease read the full license here"
] | [
"TAGS\n#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #image-to-image #region-us \n",
"# Shylily Character Embedding / Textual Inversion\n\n<img alt=\"Showcase\" src=\"URL",
"## Disclaimer\n\nThis is an embedding based on the VTuber Shylily, which can be found / watched on Twitch:\nURL",
"## Usage\n\nTo use this embedding you have to download the file aswell as drop it into the \"\\stable-diffusion-webui\\embeddings\" folder\n\nTo use it in a prompt: \n\nPersonally, I would recommend to use my embeddings with a strength of 0.8, like , but in this case the embedding basically works on almost all strength.\n\nI hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: \"Nerfgun3#7508\"",
"## License\n\nThis embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.\nThe CreativeML OpenRAIL License specifies: \n\n1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content \n2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license\n3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)\nPlease read the full license here"
] |
7321296d0db2953997096254d43abb79d5dd0d3c | # Dataset Card for "vitmae-roberta-processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | peeper/vitmae-roberta-processed | [
"region:us"
] | 2023-01-02T19:43:21+00:00 | {"dataset_info": {"features": [{"name": "labels", "sequence": "int64"}, {"name": "pixel_values", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 2567566872, "num_examples": 4238}, {"name": "test", "num_bytes": 856057572, "num_examples": 1413}], "download_size": 1000718544, "dataset_size": 3423624444}} | 2023-01-02T19:45:13+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "vitmae-roberta-processed"
More Information needed | [
"# Dataset Card for \"vitmae-roberta-processed\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"vitmae-roberta-processed\"\n\nMore Information needed"
] |
62093e4100fd5c64090ec50e7e366300cef776f1 | # Dataset Card for "es_Nautical_Text_NGRAMS"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | alvarochelo/es_Nautical_Text_NGRAMS | [
"region:us"
] | 2023-01-02T20:00:38+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 473, "num_examples": 1}], "download_size": 0, "dataset_size": 473}} | 2023-01-03T21:46:45+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "es_Nautical_Text_NGRAMS"
More Information needed | [
"# Dataset Card for \"es_Nautical_Text_NGRAMS\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"es_Nautical_Text_NGRAMS\"\n\nMore Information needed"
] |
984e8a95dd5a663c67e92a0abb4fd549c024b177 |
### Roboflow Dataset Page
[https://universe.roboflow.com/riis/aerial-sheep/dataset/1](https://universe.roboflow.com/riis/aerial-sheep/dataset/1?ref=roboflow2huggingface)
### Dataset Labels
```
['sheep']
```
### Citation
```
@misc{ aerial-sheep_dataset,
title = { Aerial Sheep Dataset },
type = { Open Source Dataset },
author = { Riis },
howpublished = { \\url{ https://universe.roboflow.com/riis/aerial-sheep } },
url = { https://universe.roboflow.com/riis/aerial-sheep },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { jun },
note = { visited on 2023-01-02 },
}
```
### License
Public Domain
### Dataset Summary
This dataset was exported via roboflow.com on December 2, 2022 at 4:47 AM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
It includes 4133 images.
Sheep are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 600x600 (Stretch)
The following augmentation was applied to create 3 versions of each source image:
* 50% probability of horizontal flip
* 50% probability of vertical flip
* Randomly crop between 0 and 20 percent of the image
* Random brightness adjustment of between -15 and +15 percent
* Random exposure adjustment of between -10 and +10 percent
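Since the card states that the sheep are annotated in COCO format, here is a small illustrative sketch of reading such an export. The annotation file path follows the usual Roboflow COCO layout and is an assumption, not something stated on this page.

```python
import json
from collections import Counter

# Assumed file name -- Roboflow COCO exports usually ship an annotations JSON
# next to the images; adjust the path to the actual download.
with open("train/_annotations.coco.json") as f:
    coco = json.load(f)

# The card lists a single class, so 'sheep' should appear here.
print([c["name"] for c in coco["categories"]])

# COCO boxes are [x, y, width, height]; count annotated sheep per image.
boxes_per_image = Counter(ann["image_id"] for ann in coco["annotations"])
file_names = {img["id"]: img["file_name"] for img in coco["images"]}
for image_id, n_boxes in list(boxes_per_image.items())[:5]:
    print(file_names[image_id], n_boxes)
```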
| keremberke/aerial-sheep-object-detection | [
"task_categories:object-detection",
"roboflow",
"region:us"
] | 2023-01-02T20:17:28+00:00 | {"task_categories": ["object-detection"], "tags": ["roboflow"]} | 2023-01-05T08:02:23+00:00 | [] | [] | TAGS
#task_categories-object-detection #roboflow #region-us
|
### Roboflow Dataset Page
URL
### Dataset Labels
### License
Public Domain
### Dataset Summary
This dataset was exported via URL on December 2, 2022 at 4:47 AM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
It includes 4133 images.
Sheep are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 600x600 (Stretch)
The following augmentation was applied to create 3 versions of each source image:
* 50% probability of horizontal flip
* 50% probability of vertical flip
* Randomly crop between 0 and 20 percent of the image
* Random brightness adjustment of between -15 and +15 percent
* Random exposure adjustment of between -10 and +10 percent
| [
"### Roboflow Dataset Page\nURL",
"### Dataset Labels",
"### License\nPublic Domain",
"### Dataset Summary\nThis dataset was exported via URL on December 2, 2022 at 4:47 AM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nIt includes 4133 images.\nSheep are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n* Resize to 600x600 (Stretch)\n\nThe following augmentation was applied to create 3 versions of each source image:\n* 50% probability of horizontal flip\n* 50% probability of vertical flip\n* Randomly crop between 0 and 20 percent of the image\n* Random brigthness adjustment of between -15 and +15 percent\n* Random exposure adjustment of between -10 and +10 percent"
] | [
"TAGS\n#task_categories-object-detection #roboflow #region-us \n",
"### Roboflow Dataset Page\nURL",
"### Dataset Labels",
"### License\nPublic Domain",
"### Dataset Summary\nThis dataset was exported via URL on December 2, 2022 at 4:47 AM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nIt includes 4133 images.\nSheep are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n* Resize to 600x600 (Stretch)\n\nThe following augmentation was applied to create 3 versions of each source image:\n* 50% probability of horizontal flip\n* 50% probability of vertical flip\n* Randomly crop between 0 and 20 percent of the image\n* Random brigthness adjustment of between -15 and +15 percent\n* Random exposure adjustment of between -10 and +10 percent"
] |
c85ee099f4a4ef35662c9745c3104d14504a9be0 | # Dataset Card for MiningLegalArguments
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [GitHub](https://github.com/trusthlt/mining-legal-arguments)
- **Repository:**
- **Paper:** [ArXiv](https://arxiv.org/pdf/2208.06178.pdf)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@JoelNiklaus](https://github.com/JoelNiklaus) for adding this dataset.
| joelniklaus/mining_legal_arguments_agent | [
"license:apache-2.0",
"arxiv:2208.06178",
"region:us"
] | 2023-01-02T20:42:53+00:00 | {"license": "apache-2.0"} | 2023-01-02T20:51:41+00:00 | [
"2208.06178"
] | [] | TAGS
#license-apache-2.0 #arxiv-2208.06178 #region-us
| # Dataset Card for MiningLegalArguments
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: GitHub
- Repository:
- Paper: ArXiv
- Leaderboard:
- Point of Contact:
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @JoelNiklaus for adding this dataset.
| [
"# Dataset Card for MiningLegalArguments",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: GitHub\n- Repository:\n- Paper: ArXiv\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @JoelNiklaus for adding this dataset."
] | [
"TAGS\n#license-apache-2.0 #arxiv-2208.06178 #region-us \n",
"# Dataset Card for MiningLegalArguments",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: GitHub\n- Repository:\n- Paper: ArXiv\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @JoelNiklaus for adding this dataset."
] |
1e659f6090028fa1d8eeedba98ada104bf4bfc98 | # Dataset Card for MiningLegalArguments
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [GitHub](https://github.com/trusthlt/mining-legal-arguments)
- **Repository:**
- **Paper:** [ArXiv](https://arxiv.org/pdf/2208.06178.pdf)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@JoelNiklaus](https://github.com/JoelNiklaus) for adding this dataset.
| joelniklaus/mining_legal_arguments_argType | [
"license:apache-2.0",
"arxiv:2208.06178",
"region:us"
] | 2023-01-02T20:44:27+00:00 | {"license": "apache-2.0"} | 2023-01-02T20:51:23+00:00 | [
"2208.06178"
] | [] | TAGS
#license-apache-2.0 #arxiv-2208.06178 #region-us
| # Dataset Card for MiningLegalArguments
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: GitHub
- Repository:
- Paper: ArXiv
- Leaderboard:
- Point of Contact:
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @JoelNiklaus for adding this dataset.
| [
"# Dataset Card for MiningLegalArguments",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: GitHub\n- Repository:\n- Paper: ArXiv\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @JoelNiklaus for adding this dataset."
] | [
"TAGS\n#license-apache-2.0 #arxiv-2208.06178 #region-us \n",
"# Dataset Card for MiningLegalArguments",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: GitHub\n- Repository:\n- Paper: ArXiv\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @JoelNiklaus for adding this dataset."
] |
b451718aa256953c500322bdbacbf3aee2756004 |
Description: The National Health and Nutrition Examination Survey (NHANES) provides data that have considerable potential to study the health and environmental exposure of the non-institutionalized US population. However, as NHANES data are plagued with multiple inconsistencies, processing these data is required before deriving new insights through large-scale analyses. Thus, we developed a set of curated and unified datasets by merging 614 separate files and harmonizing unrestricted data across NHANES III (1988-1994) and Continuous (1999-2018), totaling 135,310 participants and 5,078 variables. The variables convey
1. demographics (281 variables),
2. dietary consumption (324 variables),
3. physiological functions (1,027 variables),
4. occupation (61 variables),
5. questionnaires (1444 variables, e.g., physical activity, medical conditions, diabetes, reproductive health, blood pressure and cholesterol, early childhood),
6. medications (29 variables),
7. mortality information linked from the National Death Index (15 variables),
8. survey weights (857 variables),
9. environmental exposure biomarker measurements (598 variables), and
10. chemical comments indicating which measurements are below or above the lower limit of detection (505 variables).
csv Data Record: The curated NHANES datasets and the data dictionaries include 23 .csv files and 1 excel file.
- The curated NHANES datasets involve 20 .csv formatted files, two for each module with one as the uncleaned version and the other as the cleaned version. The modules are labeled as follows: 1) mortality, 2) dietary, 3) demographics, 4) response, 5) medications, 6) questionnaire, 7) chemicals, 8) occupation, 9) weights, and 10) comments.
- "dictionary\_nhanes.csv" is a dictionary that lists the variable name, description, module, category, units, CAS Number, comment use, chemical family, chemical family shortened, number of measurements, and cycles available for all 5,078 variables in NHANES.
- "dictionary\_harmonized\_categories.csv" contains the harmonized categories for the categorical variables.
- “dictionary\_drug\_codes.csv” contains the dictionary of descriptors for the drug codes.
- “nhanes\_inconsistencies\_documentation.xlsx” is an excel file that contains the cleaning documentation, which records all the inconsistencies for all affected variables to help curate each of the NHANES modules.
R Data Record: For researchers who want to conduct their analysis in the R programming language, only cleaned NHANES modules and the data dictionaries can be downloaded as a .zip file, which includes an .RData file and an .R file.
- “w - nhanes_1988\_2018.RData” contains all the aforementioned datasets as R data objects. We make available all R scripts on customized functions that were written to curate the data.
- “m - nhanes\_1988\_2018.R” shows how we used the customized functions (i.e. our pipeline) to curate the original NHANES data.
Example starter codes: The set of starter code to help users conduct exposome analysis consists of four R markdown files (.Rmd). We recommend going through the tutorials in order.
- “example\_0 - merge\_datasets\_together.Rmd” demonstrates how to merge the curated NHANES datasets together.
- “example\_1 - account\_for\_nhanes_design.Rmd” demonstrates how to conduct a linear regression model, a survey-weighted regression model, a Cox proportional hazard model, and a survey-weighted Cox proportional hazard model.
- “example\_2 - calculate\_summary\_statistics.Rmd” demonstrates how to calculate summary statistics for one variable and multiple variables with and without accounting for the NHANES sampling design.
- “example\_3 - run\_multiple\_regressions.Rmd” demonstrates how to run multiple regression models with and without adjusting for the sampling design. | nguyenvy/cleaned_nhanes_1988_2018 | [
"license:cc-by-4.0",
"doi:10.57967/hf/0260",
"region:us"
] | 2023-01-02T20:50:25+00:00 | {"license": "cc-by-4.0"} | 2023-07-27T15:28:51+00:00 | [] | [] | TAGS
#license-cc-by-4.0 #doi-10.57967/hf/0260 #region-us
|
Description: The National Health and Nutrition Examination Survey (NHANES) provides data that have considerable potential to study the health and environmental exposure of the non-institutionalized US population. However, as NHANES data are plagued with multiple inconsistencies, processing these data is required before deriving new insights through large-scale analyses. Thus, we developed a set of curated and unified datasets by merging 614 separate files and harmonizing unrestricted data across NHANES III (1988-1994) and Continuous (1999-2018), totaling 135,310 participants and 5,078 variables. The variables convey
1. demographics (281 variables),
2. dietary consumption (324 variables),
3. physiological functions (1,027 variables),
4. occupation (61 variables),
5. questionnaires (1444 variables, e.g., physical activity, medical conditions, diabetes, reproductive health, blood pressure and cholesterol, early childhood),
6. medications (29 variables),
7. mortality information linked from the National Death Index (15 variables),
8. survey weights (857 variables),
9. environmental exposure biomarker measurements (598 variables), and
10. chemical comments indicating which measurements are below or above the lower limit of detection (505 variables).
csv Data Record: The curated NHANES datasets and the data dictionaries include 23 .csv files and 1 excel file.
- The curated NHANES datasets involve 20 .csv formatted files, two for each module with one as the uncleaned version and the other as the cleaned version. The modules are labeled as follows: 1) mortality, 2) dietary, 3) demographics, 4) response, 5) medications, 6) questionnaire, 7) chemicals, 8) occupation, 9) weights, and 10) comments.
- "dictionary\_nhanes.csv" is a dictionary that lists the variable name, description, module, category, units, CAS Number, comment use, chemical family, chemical family shortened, number of measurements, and cycles available for all 5,078 variables in NHANES.
- "dictionary\_harmonized\_categories.csv" contains the harmonized categories for the categorical variables.
- “dictionary\_drug\_codes.csv” contains the dictionary of descriptors for the drug codes.
- “nhanes\_inconsistencies\_documentation.xlsx” is an excel file that contains the cleaning documentation, which records all the inconsistencies for all affected variables to help curate each of the NHANES modules.
R Data Record: For researchers who want to conduct their analysis in the R programming language, only cleaned NHANES modules and the data dictionaries can be downloaded as a .zip file, which includes an .RData file and an .R file.
- “w - nhanes_1988\_2018.RData” contains all the aforementioned datasets as R data objects. We make available all R scripts on customized functions that were written to curate the data.
- “m - nhanes\_1988\_2018.R” shows how we used the customized functions (i.e. our pipeline) to curate the original NHANES data.
Example starter codes: The set of starter code to help users conduct exposome analysis consists of four R markdown files (.Rmd). We recommend going through the tutorials in order.
- “example\_0 - merge\_datasets\_together.Rmd” demonstrates how to merge the curated NHANES datasets together.
- “example\_1 - account\_for\_nhanes_design.Rmd” demonstrates how to conduct a linear regression model, a survey-weighted regression model, a Cox proportional hazard model, and a survey-weighted Cox proportional hazard model.
- “example\_2 - calculate\_summary\_statistics.Rmd” demonstrates how to calculate summary statistics for one variable and multiple variables with and without accounting for the NHANES sampling design.
- “example\_3 - run\_multiple\_regressions.Rmd” demonstrates how to run multiple regression models with and without adjusting for the sampling design. | [] | [
"TAGS\n#license-cc-by-4.0 #doi-10.57967/hf/0260 #region-us \n"
] |
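The card above ships R tutorials; as a rough alternative path in Python, the data dictionary it describes could be explored with pandas. Only `dictionary_nhanes.csv` is named in the card, and the column labels used below are inferred from its description, so they may differ in the actual file.

```python
import pandas as pd

# "dictionary_nhanes.csv" is named in the card; the column labels below
# ("variable_name", "module") are assumptions based on its description.
dictionary = pd.read_csv("dictionary_nhanes.csv")
print(len(dictionary))  # expected to cover all 5,078 variables

# List the variables belonging to one of the ten modules, e.g. demographics.
demographics_vars = dictionary.loc[
    dictionary["module"] == "demographics", "variable_name"
]
print(demographics_vars.head())
```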
314a8abe30c8274c771070e27daffeb00a8ac76a |
# Yor Forger Character Embedding / Textual Inversion
<img alt="Showcase" src="https://huggingface.co/datasets/Nerfgun3/yor_forger/resolve/main/yor_forger_showcase.png"/>
## Disclaimer
This is an embedding based on the Anime Character Yor Forger from Spy x Family
## Usage
To use this embedding you have to download the file as well as drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt: ```"yor_forger"```
Personally, I would recommend to use my embeddings with a strength of 0.8, like ```"(yor_forger:0.8)"```, but in this case the embedding basically works on almost all strength.
I hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: "Nerfgun3#7508"
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | Nerfgun3/yor_forger | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"image-to-image",
"region:us"
] | 2023-01-02T21:02:24+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "thumbnail": "https://huggingface.co/datasets/Nerfgun3/yor_forger/resolve/main/yor_forger_showcase.png", "tags": ["stable-diffusion", "text-to-image", "image-to-image"], "inference": false} | 2023-01-02T21:08:45+00:00 | [] | [
"en"
] | TAGS
#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #image-to-image #region-us
|
# Yor Forger Character Embedding / Textual Inversion
<img alt="Showcase" src="URL
## Disclaimer
This is an embedding based on the Anime Character Yor Forger from Spy x Family
## Usage
To use this embedding you have to download the file as well as drop it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt:
Personally, I would recommend to use my embeddings with a strength of 0.8, like , but in this case the embedding basically works on almost all strength.
I hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: "Nerfgun3#7508"
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license here | [
"# Yor Forger Character Embedding / Textual Inversion\n\n<img alt=\"Showcase\" src=\"URL",
"## Disclaimer\n\nThis is an embedding based on the Anime Character Yor Forger from Spy x Family",
"## Usage\n\nTo use this embedding you have to download the file aswell as drop it into the \"\\stable-diffusion-webui\\embeddings\" folder\n\nTo use it in a prompt: \n\nPersonally, I would recommend to use my embeddings with a strength of 0.8, like , but in this case the embedding basically works on almost all strength.\n\nI hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: \"Nerfgun3#7508\"",
"## License\n\nThis embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.\nThe CreativeML OpenRAIL License specifies: \n\n1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content \n2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license\n3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)\nPlease read the full license here"
] | [
"TAGS\n#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #image-to-image #region-us \n",
"# Yor Forger Character Embedding / Textual Inversion\n\n<img alt=\"Showcase\" src=\"URL",
"## Disclaimer\n\nThis is an embedding based on the Anime Character Yor Forger from Spy x Family",
"## Usage\n\nTo use this embedding you have to download the file aswell as drop it into the \"\\stable-diffusion-webui\\embeddings\" folder\n\nTo use it in a prompt: \n\nPersonally, I would recommend to use my embeddings with a strength of 0.8, like , but in this case the embedding basically works on almost all strength.\n\nI hope you enjoy the embedding. If you have any questions, you can ask me anything via Discord: \"Nerfgun3#7508\"",
"## License\n\nThis embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.\nThe CreativeML OpenRAIL License specifies: \n\n1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content \n2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license\n3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)\nPlease read the full license here"
] |
fb24e6030d3545667be53171fc8c296848bf07da | Source : https://github.com/allisonhorst/palmerpenguins
Data originally published in :
Gorman KB, Williams TD, Fraser WR (2014). Ecological sexual dimorphism and environmental variability within a community of Antarctic penguins (genus Pygoscelis). PLoS ONE 9(3):e90081. https://doi.org/10.1371/journal.pone.0090081 | methodidacte/penguins | [
"license:unknown",
"region:us"
] | 2023-01-02T21:29:37+00:00 | {"license": "unknown"} | 2023-01-02T21:38:31+00:00 | [] | [] | TAGS
#license-unknown #region-us
| Source : URL
Data originally published in :
Gorman KB, Williams TD, Fraser WR (2014). Ecological sexual dimorphism and environmental variability within a community of Antarctic penguins (genus Pygoscelis). PLoS ONE 9(3):e90081. URL | [] | [
"TAGS\n#license-unknown #region-us \n"
] |
da0e30d826cc12c73c21cded75aecf1e30410d11 |
Dataset of Goya Paintings | BirdL/Goya-Dataset | [
"license:other",
"region:us"
] | 2023-01-02T22:19:48+00:00 | {"license": "other"} | 2023-01-07T20:48:04+00:00 | [] | [] | TAGS
#license-other #region-us
|
Dataset of Goya Paintings | [] | [
"TAGS\n#license-other #region-us \n"
] |
cb454d8fb5ee6d9bc82a836395f85553987f87d5 | # Dataset Card for "t5-small-october-wikipedia-2022-tokenized-512"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Tristan/t5-small-october-wikipedia-2022-tokenized-512 | [
"region:us"
] | 2023-01-02T23:03:59+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "special_tokens_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 30029601900, "num_examples": 9737225}], "download_size": 9411819822, "dataset_size": 30029601900}} | 2023-01-02T23:17:40+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "t5-small-october-wikipedia-2022-tokenized-512"
More Information needed | [
"# Dataset Card for \"t5-small-october-wikipedia-2022-tokenized-512\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"t5-small-october-wikipedia-2022-tokenized-512\"\n\nMore Information needed"
] |
9c8ca9cb5f1f6a7454465edd2c1a53dea3eb9298 | # Dataset Card for "bookcorpus_small_compact_1024_n7"
448 samples after exploding the graphs

`gdown 13QYq8op5XHlhL_qvdQbpYxo-pR5uAwcO` to download the associated graph pickle
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | saibo/bookcorpus_small_compact_1024_n7 | [
"region:us"
] | 2023-01-03T00:07:49+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "concept_with_offset", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 81072, "num_examples": 7}], "download_size": 42603, "dataset_size": 81072}} | 2023-01-30T19:12:34+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "bookcorpus_small_compact_1024_n7"
448 samples after exploding the graphs

'gdown 13QYq8op5XHlhL_qvdQbpYxo-pR5uAwcO' to download the associated graph pickle
More Information needed | [
"# Dataset Card for \"bookcorpus_small_compact_1024_n7\"\n\n448 samples after explode graphs\n\n'gdown 13QYq8op5XHlhL_qvdQbpYxo-pR5uAwcO' to download the assciated graph pickle\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"bookcorpus_small_compact_1024_n7\"\n\n448 samples after explode graphs\n\n'gdown 13QYq8op5XHlhL_qvdQbpYxo-pR5uAwcO' to download the assciated graph pickle\n\nMore Information needed"
] |
509d0127abfb348abb94175a5cf59bef7199f9b0 | # Dataset Card for "bookcorpus_small_compact_1024_shard0_meta"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | saibo/bookcorpus_small_compact_1024_n7_meta | [
"region:us"
] | 2023-01-03T00:21:29+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "concept_with_offset", "dtype": "string"}, {"name": "cid_arrangement", "sequence": "int32"}, {"name": "schema_lengths", "sequence": "int64"}, {"name": "topic_entity_mask", "sequence": "int64"}, {"name": "text_lengths", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 795771, "num_examples": 7}], "download_size": 260012, "dataset_size": 795771}} | 2023-01-05T00:54:07+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "bookcorpus_small_compact_1024_shard0_meta"
More Information needed | [
"# Dataset Card for \"bookcorpus_small_compact_1024_shard0_meta\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"bookcorpus_small_compact_1024_shard0_meta\"\n\nMore Information needed"
] |
1e8b886a454125e7c7488630971e012264d8fb9d |
# Dataset Card for Bernice Pre-train Data
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** N/A
- **Repository:** https://github.com/JHU-CLSP/Bernice-Twitter-encoder
- **Paper:** _Bernice: A Multilingual Pre-trained Encoder for Twitter_ at [EMNLP 2022](https://preview.aclanthology.org/emnlp-22-ingestion/2022.emnlp-main.415)
- **Leaderboard:** N/A
- **Point of Contact:** Alexandra DeLucia aadelucia (at) jhu.edu
### Dataset Summary
Tweet IDs for the 2.5 billion multilingual tweets used to train Bernice, a Twitter encoder.
Read the paper [here](https://preview.aclanthology.org/emnlp-22-ingestion/2022.emnlp-main.415).
The tweets are from the public 1% Twitter API stream from January 2016 to December 2021.
Twitter-provided language metadata is provided with the tweet ID. The data contains 66 unique languages, as identified by [ISO 639 language codes](https://www.wikiwand.com/en/List_of_ISO_639-1_codes), including `und` for undefined languages.
Tweets need to be re-gathered via the Twitter API. We suggest [Hydrator](https://github.com/DocNow/hydrator) or [tweepy](https://www.tweepy.org/).
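For example, re-gathering a small batch of IDs with tweepy might look like the sketch below (not part of the original release; it assumes tweepy v4's `Client.get_tweets`, uses a placeholder bearer token, and omits the 100-IDs-per-request batching and rate-limit handling a full-scale rehydration needs):

```python
# Rehydration sketch only: assumes tweepy v4 and a valid bearer token (placeholder below),
# and skips the batching and rate-limit handling a real run needs.
import tweepy

client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")

tweet_ids = ["1234567890123456789", "1234567890123456790"]  # IDs taken from this dataset
response = client.get_tweets(ids=tweet_ids, tweet_fields=["lang", "created_at"])

for tweet in response.data or []:
    print(tweet.id, tweet.lang, tweet.text)
```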
To load with HuggingFace:
```python
from datasets import load_dataset
dataset = load_dataset("jhu-clsp/bernice-pretrain-data")
for i, row in enumerate(dataset["train"]):
print(row)
if i > 10:
break
```
If you only want Indic languages, use
```python
dataset = load_dataset("jhu-clsp/bernice-pretrain-data", "indic")
```
### Supported Tasks and Leaderboards
N/A
### Languages
65 languages (ISO 639 codes shown below), plus an `und` (undefined) category.
All language identification provided by Twitter API.
| | | | | | | |
|----|-----|----|----|----|-----|----|
| en | ru | ht | zh | bn | ps | lt |
| es | bo | ur | ta | sr | ckb | km |
| pt | it | sv | ro | bg | si | dv |
| ja | th | ca | no | mr | hy | lo |
| ar | de | el | uk | ml | or | ug |
| in | hi | fi | cy | is | pa | |
| ko | pl | cs | ne | te | am | |
| tr | nl | iw | hu | gu | sd | |
| fr | fa | da | eu | kn | my | |
| tl | et | vi | sl | lv | ka | |
## Dataset Structure
### Data Instances
Data is provided in gzip'd files organized by year and month of tweet origin.
Tweets are one per line, with fields separated by tabs.
### Data Fields
* `tweet ID`: ID of tweet
* `lang`: ISO 639 code of language, provided by Twitter metadata. Accuracy of label is not known.
* `year`: Year tweet was created. Year is also provided in the file names.
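A rough sketch of reading these files follows; the file name is a placeholder and the tab-separated field order is assumed to match the list above:

```python
# Sketch of reading one gzip'd ID file; "2016_01.tsv.gz" is a placeholder name and the
# field order (tweet ID, lang, year) is assumed to match the Data Fields list above.
import gzip

with gzip.open("2016_01.tsv.gz", "rt", encoding="utf-8") as f:
    for line in f:
        tweet_id, lang, year = line.rstrip("\n").split("\t")
        print(tweet_id, lang, year)
```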
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
Data was gathered to support the training of Bernice, a multilingual pre-trained Twitter encoder.
### Source Data
#### Initial Data Collection and Normalization
Data was gathered via the Twitter API public 1% stream from January 2016 through December 2021.
Tweets with less than three non-username or URL space-delimited words were removed.
All usernames and URLs were replaced with `@USER` and `HTTPURL`, respectively.
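A minimal sketch of this kind of masking (illustrative regexes, not the authors' exact preprocessing code):

```python
# Illustrative masking only; these regexes are assumptions, not the authors' exact rules.
import re

USER_RE = re.compile(r"@\w+")
URL_RE = re.compile(r"https?://\S+")

def mask(text: str) -> str:
    text = USER_RE.sub("@USER", text)   # usernames -> @USER
    return URL_RE.sub("HTTPURL", text)  # URLs -> HTTPURL

print(mask("so cool @someone https://example.com/x"))  # -> "so cool @USER HTTPURL"
```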
#### Who are the source language producers?
Data was produced by users on Twitter.
### Annotations
N/A
### Personal and Sensitive Information
As per Twitter guidelines, only tweet IDs and not full tweets are shared.
Tweets will only be accessible if the user has not removed their account (or been banned), the tweets have not been deleted or removed, and the account has not been set to private.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Dataset gathered and processed by Mark Dredze, Alexandra DeLucia, Shijie Wu, Aaron Mueller, Carlos Aguirre, and Philip Resnik.
### Licensing Information
MIT
### Citation Information
Please cite the Bernice paper if you use this dataset:
> Alexandra DeLucia, Shijie Wu, Aaron Mueller, Carlos Aguirre, Philip Resnik, and Mark Dredze. 2022. Bernice: A Multilingual Pre-trained Encoder for Twitter. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 6191–6205, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
### Contributions
Dataset uploaded by [@AADeLucia](https://github.com/AADeLucia).
| jhu-clsp/bernice-pretrain-data | [
"task_categories:other",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1B<n<10B",
"source_datasets:original",
"language:en",
"language:es",
"language:pt",
"language:ja",
"language:ar",
"language:in",
"language:ko",
"language:tr",
"language:fr",
"language:tl",
"language:ru",
"language:it",
"language:th",
"language:de",
"language:hi",
"language:pl",
"language:nl",
"language:fa",
"language:et",
"language:ht",
"language:ur",
"language:sv",
"language:ca",
"language:el",
"language:fi",
"language:cs",
"language:iw",
"language:da",
"language:vi",
"language:zh",
"language:ta",
"language:ro",
"language:no",
"language:uk",
"language:cy",
"language:ne",
"language:hu",
"language:eu",
"language:sl",
"language:lv",
"language:lt",
"language:bn",
"language:sr",
"language:bg",
"language:mr",
"language:ml",
"language:is",
"language:te",
"language:gu",
"language:kn",
"language:ps",
"language:ckb",
"language:si",
"language:hy",
"language:or",
"language:pa",
"language:am",
"language:sd",
"language:my",
"language:ka",
"language:km",
"language:dv",
"language:lo",
"language:ug",
"language:bo",
"license:mit",
"twitter",
"slang",
"code switch",
"social",
"social media",
"region:us"
] | 2023-01-03T01:48:26+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en", "es", "pt", "ja", "ar", "in", "ko", "tr", "fr", "tl", "ru", "it", "th", "de", "hi", "pl", "nl", "fa", "et", "ht", "ur", "sv", "ca", "el", "fi", "cs", "iw", "da", "vi", "zh", "ta", "ro", false, "uk", "cy", "ne", "hu", "eu", "sl", "lv", "lt", "bn", "sr", "bg", "mr", "ml", "is", "te", "gu", "kn", "ps", "ckb", "si", "hy", "or", "pa", "am", "sd", "my", "ka", "km", "dv", "lo", "ug", "bo"], "license": ["mit"], "multilinguality": ["multilingual"], "size_categories": ["1B<n<10B"], "source_datasets": ["original"], "task_categories": ["other"], "task_ids": [], "pretty_name": "Bernice Pretrain Data", "tags": ["twitter", "slang", "code switch", "social", "social media"]} | 2023-01-03T21:28:00+00:00 | [] | [
"en",
"es",
"pt",
"ja",
"ar",
"in",
"ko",
"tr",
"fr",
"tl",
"ru",
"it",
"th",
"de",
"hi",
"pl",
"nl",
"fa",
"et",
"ht",
"ur",
"sv",
"ca",
"el",
"fi",
"cs",
"iw",
"da",
"vi",
"zh",
"ta",
"ro",
"no",
"uk",
"cy",
"ne",
"hu",
"eu",
"sl",
"lv",
"lt",
"bn",
"sr",
"bg",
"mr",
"ml",
"is",
"te",
"gu",
"kn",
"ps",
"ckb",
"si",
"hy",
"or",
"pa",
"am",
"sd",
"my",
"ka",
"km",
"dv",
"lo",
"ug",
"bo"
] | TAGS
#task_categories-other #annotations_creators-no-annotation #language_creators-found #multilinguality-multilingual #size_categories-1B<n<10B #source_datasets-original #language-English #language-Spanish #language-Portuguese #language-Japanese #language-Arabic #language-in #language-Korean #language-Turkish #language-French #language-Tagalog #language-Russian #language-Italian #language-Thai #language-German #language-Hindi #language-Polish #language-Dutch #language-Persian #language-Estonian #language-Haitian #language-Urdu #language-Swedish #language-Catalan #language-Modern Greek (1453-) #language-Finnish #language-Czech #language-iw #language-Danish #language-Vietnamese #language-Chinese #language-Tamil #language-Romanian #language-Norwegian #language-Ukrainian #language-Welsh #language-Nepali (macrolanguage) #language-Hungarian #language-Basque #language-Slovenian #language-Latvian #language-Lithuanian #language-Bengali #language-Serbian #language-Bulgarian #language-Marathi #language-Malayalam #language-Icelandic #language-Telugu #language-Gujarati #language-Kannada #language-Pushto #language-Central Kurdish #language-Sinhala #language-Armenian #language-Oriya (macrolanguage) #language-Panjabi #language-Amharic #language-Sindhi #language-Burmese #language-Georgian #language-Khmer #language-Dhivehi #language-Lao #language-Uighur #language-Tibetan #license-mit #twitter #slang #code switch #social #social media #region-us
| Dataset Card for Bernice Pre-train Data
=======================================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: N/A
* Repository: URL
* Paper: *Bernice: A Multilingual Pre-trained Encoder for Twitter* at EMNLP 2022
* Leaderboard: N/A
* Point of Contact: Alexandra DeLucia aadelucia (at) URL
### Dataset Summary
Tweet IDs for the 2.5 billion multilingual tweets used to train Bernice, a Twitter encoder.
Read the paper here.
The tweets are from the public 1% Twitter API stream from January 2016 to December 2021.
Twitter-provided language metadata is provided with the tweet ID. The data contains 66 unique languages, as identified by ISO 639 language codes, including 'und' for undefined languages.
Tweets need to be re-gathered via the Twitter API. We suggest Hydrator or tweepy.
To load with HuggingFace:
If you only want Indic languages, use
### Supported Tasks and Leaderboards
N/A
### Languages
65 languages (ISO 639 codes shown below), plus an 'und' (undefined) category.
All language identification provided by Twitter API.
Dataset Structure
-----------------
### Data Instances
Data is provided in gzip'd files organized by year and month of tweet origin.
Tweets are one per line, with fields separated by tabs.
### Data Fields
* 'tweet ID': ID of tweet
* 'lang': ISO 639 code of language, provided by Twitter metadata. Accuracy of label is not known.
* 'year': Year tweet was created. Year is also provided in the file names.
### Data Splits
Dataset Creation
----------------
### Curation Rationale
Data was gathered to support the training of Bernice, a multilingual pre-trained Twitter encoder.
### Source Data
#### Initial Data Collection and Normalization
Data was gathered via the Twitter API public 1% stream from January 2016 through December 2021.
Tweets with less than three non-username or URL space-delimited words were removed.
All usernames and URLs were replaced with '@USER' and 'HTTPURL', respectively.
#### Who are the source language producers?
Data was produced by users on Twitter.
### Annotations
N/A
### Personal and Sensitive Information
As per Twitter guidelines, only tweet IDs and not full tweets are shared.
Tweets will only be accessible if the user has not removed their account (or been banned), the tweets have not been deleted or removed, and the account has not been set to private.
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
Dataset gathered and processed by Mark Dredze, Alexandra DeLucia, Shijie Wu, Aaron Mueller, Carlos Aguirre, and Philip Resnik.
### Licensing Information
MIT
Please cite the Bernice paper if you use this dataset:
>
> Alexandra DeLucia, Shijie Wu, Aaron Mueller, Carlos Aguirre, Philip Resnik, and Mark Dredze. 2022. Bernice: A Multilingual Pre-trained Encoder for Twitter. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 6191–6205, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
>
>
>
### Contributions
Dataset uploaded by @AADeLucia.
| [
"### Dataset Summary\n\n\nTweet IDs for the 2.5 billion multilingual tweets used to train Bernice, a Twitter encoder.\nRead the paper here.\nThe tweets are from the public 1% Twitter API stream from January 2016 to December 2021.\nTwitter-provided language metadata is provided with the tweet ID. The data contains 66 unique languages, as identified by ISO 639 language codes, including 'und' for undefined languages.\nTweets need to be re-gathered via the Twitter API. We suggest Hydrator or tweepy.\n\n\nTo load with HuggingFace:\n\n\nIf you only want Indic languages, use",
"### Supported Tasks and Leaderboards\n\n\nN/A",
"### Languages\n\n\n65 languages (ISO 639 codes shown below), plus an 'und' (undefined) category.\nAll language identification provided by Twitter API.\n\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nData is provided in gzip'd files organized by year and month of tweet origin.\nTweets are one per line, with fields separated by tabs.",
"### Data Fields\n\n\n* 'tweet ID': ID of tweet\n* 'lang': ISO 639 code of language, provided by Twitter metadata. Accuracy of label is not known.\n* 'year': Year tweet was created. Year is also provided in the file names.",
"### Data Splits\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nData was gathered to support the training of Bernice, a multilingual pre-trained Twitter encoder.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nData was gathered via the Twitter API public 1% stream from January 2016 through December 2021.\nTweets with less than three non-username or URL space-delimited words were removed.\nAll usernames and URLs were replaced with '@USER' and 'HTTPURL', respectively.",
"#### Who are the source language producers?\n\n\nData was produced by users on Twitter.",
"### Annotations\n\n\nN/A",
"### Personal and Sensitive Information\n\n\nAs per Twitter guidelines, only tweet IDs and not full tweets are shared.\nTweets will only be accessible if user has not removed their account (or been banned), tweets were deleted or removed, or a user changed their account access to private.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nDataset gathered and processed by Mark Dredze, Alexandra DeLucia, Shijie Wu, Aaron Mueller, Carlos Aguirre, and Philip Resnik.",
"### Licensing Information\n\n\nMIT\n\n\nPlease cite the Bernice paper if you use this dataset:\n\n\n\n> \n> Alexandra DeLucia, Shijie Wu, Aaron Mueller, Carlos Aguirre, Philip Resnik, and Mark Dredze. 2022. Bernice: A Multilingual Pre-trained Encoder for Twitter. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 6191–6205, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.\n> \n> \n>",
"### Contributions\n\n\nDataset uploaded by @AADeLucia."
] | [
"TAGS\n#task_categories-other #annotations_creators-no-annotation #language_creators-found #multilinguality-multilingual #size_categories-1B<n<10B #source_datasets-original #language-English #language-Spanish #language-Portuguese #language-Japanese #language-Arabic #language-in #language-Korean #language-Turkish #language-French #language-Tagalog #language-Russian #language-Italian #language-Thai #language-German #language-Hindi #language-Polish #language-Dutch #language-Persian #language-Estonian #language-Haitian #language-Urdu #language-Swedish #language-Catalan #language-Modern Greek (1453-) #language-Finnish #language-Czech #language-iw #language-Danish #language-Vietnamese #language-Chinese #language-Tamil #language-Romanian #language-Norwegian #language-Ukrainian #language-Welsh #language-Nepali (macrolanguage) #language-Hungarian #language-Basque #language-Slovenian #language-Latvian #language-Lithuanian #language-Bengali #language-Serbian #language-Bulgarian #language-Marathi #language-Malayalam #language-Icelandic #language-Telugu #language-Gujarati #language-Kannada #language-Pushto #language-Central Kurdish #language-Sinhala #language-Armenian #language-Oriya (macrolanguage) #language-Panjabi #language-Amharic #language-Sindhi #language-Burmese #language-Georgian #language-Khmer #language-Dhivehi #language-Lao #language-Uighur #language-Tibetan #license-mit #twitter #slang #code switch #social #social media #region-us \n",
"### Dataset Summary\n\n\nTweet IDs for the 2.5 billion multilingual tweets used to train Bernice, a Twitter encoder.\nRead the paper here.\nThe tweets are from the public 1% Twitter API stream from January 2016 to December 2021.\nTwitter-provided language metadata is provided with the tweet ID. The data contains 66 unique languages, as identified by ISO 639 language codes, including 'und' for undefined languages.\nTweets need to be re-gathered via the Twitter API. We suggest Hydrator or tweepy.\n\n\nTo load with HuggingFace:\n\n\nIf you only want Indic languages, use",
"### Supported Tasks and Leaderboards\n\n\nN/A",
"### Languages\n\n\n65 languages (ISO 639 codes shown below), plus an 'und' (undefined) category.\nAll language identification provided by Twitter API.\n\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nData is provided in gzip'd files organized by year and month of tweet origin.\nTweets are one per line, with fields separated by tabs.",
"### Data Fields\n\n\n* 'tweet ID': ID of tweet\n* 'lang': ISO 639 code of language, provided by Twitter metadata. Accuracy of label is not known.\n* 'year': Year tweet was created. Year is also provided in the file names.",
"### Data Splits\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nData was gathered to support the training of Bernice, a multilingual pre-trained Twitter encoder.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nData was gathered via the Twitter API public 1% stream from January 2016 through December 2021.\nTweets with less than three non-username or URL space-delimited words were removed.\nAll usernames and URLs were replaced with '@USER' and 'HTTPURL', respectively.",
"#### Who are the source language producers?\n\n\nData was produced by users on Twitter.",
"### Annotations\n\n\nN/A",
"### Personal and Sensitive Information\n\n\nAs per Twitter guidelines, only tweet IDs and not full tweets are shared.\nTweets will only be accessible if user has not removed their account (or been banned), tweets were deleted or removed, or a user changed their account access to private.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nDataset gathered and processed by Mark Dredze, Alexandra DeLucia, Shijie Wu, Aaron Mueller, Carlos Aguirre, and Philip Resnik.",
"### Licensing Information\n\n\nMIT\n\n\nPlease cite the Bernice paper if you use this dataset:\n\n\n\n> \n> Alexandra DeLucia, Shijie Wu, Aaron Mueller, Carlos Aguirre, Philip Resnik, and Mark Dredze. 2022. Bernice: A Multilingual Pre-trained Encoder for Twitter. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 6191–6205, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.\n> \n> \n>",
"### Contributions\n\n\nDataset uploaded by @AADeLucia."
] |
6f2d3885aed0fc6f467ce40d00373e0f17ba246b | # Dataset Card for "origin_added_korquad"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | lim4349/origin_added_korquad | [
"region:us"
] | 2023-01-03T02:24:40+00:00 | {"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "answers", "struct": [{"name": "text", "sequence": "string"}, {"name": "answer_start", "sequence": "int64"}]}], "splits": [{"name": "train", "num_bytes": 83769368, "num_examples": 57923}, {"name": "validation", "num_bytes": 9244735, "num_examples": 6436}], "download_size": 57373216, "dataset_size": 93014103}} | 2023-01-03T02:37:00+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "origin_added_korquad"
More Information needed | [
"# Dataset Card for \"origin_added_korquad\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"origin_added_korquad\"\n\nMore Information needed"
] |
821d77a9210bf7f1c5f595f8b900e5dd1b422176 | # Dataset Card for "korquad"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | lim4349/korquad | [
"region:us"
] | 2023-01-03T02:38:32+00:00 | {"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "answers", "struct": [{"name": "text", "sequence": "string"}, {"name": "answer_start", "sequence": "int64"}]}], "splits": [{"name": "train", "num_bytes": 75266074, "num_examples": 54366}, {"name": "validation", "num_bytes": 8358264, "num_examples": 6041}], "download_size": 51472501, "dataset_size": 83624338}} | 2023-01-03T02:39:12+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "korquad"
More Information needed | [
"# Dataset Card for \"korquad\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"korquad\"\n\nMore Information needed"
] |
72a73be2064bae0109a168fb8355fcf4ca3bfe2e | # Dataset Card for "AToMiC-Texts-Mapped"
## Dataset Description
- **Homepage:** [AToMiC homepage](https://trec-atomic.github.io/)
- **Source:** [WIT](https://github.com/google-research-datasets/wit)
- **Paper:** [WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning](https://arxiv.org/abs/2103.01913)
### Languages
This dataset contains only English Wikipedia (parsed from the 20221101 XML dump).
### Data Instances
Each instance is a section of a Wikipedia page. We also provide its page-level information, and associated information such as categories and media.
The `source_id` can be mapped back to the instance in the original [WIT instance](https://github.com/google-research-datasets/wit/blob/main/DATA.md).
Note that the WIT dataset was crawled from an earlier version of Wikipedia (2020-08-30).
The WIT dataset is mapped to the new dump by pure BM25 matching with [Anserini](https://github.com/castorini/anserini).
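For reference, BM25 lookups of this kind can be run with Pyserini, Anserini's Python front end; a rough sketch is shown below (the index path is a placeholder and this is not the exact script used to produce the mapping):

```python
# Rough sketch of a BM25 lookup with Pyserini (Anserini's Python interface).
# The index path is a placeholder; this is not the exact mapping script.
from pyserini.search.lucene import LuceneSearcher

searcher = LuceneSearcher("path/to/enwiki-20221101-index")
hits = searcher.search("WIT page title and section title", k=1)
if hits:
    print(hits[0].docid, hits[0].score)  # best-matching passage in the new dump
```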
### Intended Usage
1. Text collection for Image-to-Text retrieval
2. Language model pretraining
3. Document classification
### Licensing Information
[CC BY-SA 4.0 international license](https://creativecommons.org/licenses/by-sa/4.0/)
### Citation Information
TBA
### Acknowledgement
Thanks to:
[mwparserfromhell](https://github.com/earwig/mwparserfromhell)
[Datasets](https://github.com/huggingface/datasets)
[Anserini](https://github.com/castorini/anserini)
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | TREC-AToMiC/AToMiC-Texts-v0.2 | [
"size_categories:100M<n<1B",
"license:cc-by-sa-4.0",
"arxiv:2103.01913",
"region:us"
] | 2023-01-03T04:29:46+00:00 | {"license": "cc-by-sa-4.0", "size_categories": ["100M<n<1B"], "dataset_info": {"features": [{"name": "text_id", "dtype": "string"}, {"name": "page_url", "dtype": "string"}, {"name": "page_title", "dtype": "string"}, {"name": "section_title", "dtype": "string"}, {"name": "context_page_description", "dtype": "string"}, {"name": "context_section_description", "dtype": "string"}, {"name": "media", "sequence": "string"}, {"name": "hierachy", "sequence": "string"}, {"name": "category", "sequence": "string"}, {"name": "source_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 14378574060.336058, "num_examples": 10134744}], "download_size": 6408012391, "dataset_size": 14378574060.336058}} | 2023-02-14T21:30:37+00:00 | [
"2103.01913"
] | [] | TAGS
#size_categories-100M<n<1B #license-cc-by-sa-4.0 #arxiv-2103.01913 #region-us
| # Dataset Card for "AToMiC-Texts-Mapped"
## Dataset Description
- Homepage: AToMiC homepage
- Source: WIT
- Paper: WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning
### Languages
This dataset contains only English Wikipedia (parsed from the 20221101 XML dump).
### Data Instances
Each instance is a section of a Wikipedia page. We also provide its page-level information, and associated information such as categories and media.
The 'source_id' can be mapped back to the instance in the original WIT instance.
Notice that the WIT dataset is crawled from the earlier version of Wikipedia (2020-08-30).
The WIT dataset is mapped to the new dump by pure BM25 matching with Anserini.
### Intended Usage
1. Text collection for Image-to-Text retrieval
2. Language model pretraining
3. Document classification
### Licensing Information
CC BY-SA 4.0 international license
TBA
### Acknowledgement
Thanks to:
mwparserfromhell
Datasets
Anserini
More Information needed | [
"# Dataset Card for \"AToMiC-Texts-Mapped\"",
"## Dataset Description\n\n- Homepage: AToMiC homepage\n- Source: WIT\n- Paper: WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning",
"### Languages\n\nThis dataset only contains English in Wikipedia (parsed from the 20221101 XML dump).",
"### Data Instances\n\nEach instance is a section of a Wikipedia page. We also provide its page-level information, and associated information such as categories and media.\nThe 'source_id' can be mapped back to the instance in the original WIT instance.\nNotice that the WIT dataset is crawled from the earlier version of Wikipedia (2020-08-30).\nThe WIT dataset is mapped to the new dump by pure BM25 matching with Anserini.",
"### Intended Usage\n\n1. Text collection for Image-to-Text retrieval\n2. Language model pretraining\n3. Document classification",
"### Licensing Information\n\nCC BY-SA 4.0 international license\n\n\n\nTBA",
"### Acknowledgement\n\nThanks to:\nmwparserfromhell\nDatasets\nAnserini\n\nMore Information needed"
] | [
"TAGS\n#size_categories-100M<n<1B #license-cc-by-sa-4.0 #arxiv-2103.01913 #region-us \n",
"# Dataset Card for \"AToMiC-Texts-Mapped\"",
"## Dataset Description\n\n- Homepage: AToMiC homepage\n- Source: WIT\n- Paper: WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning",
"### Languages\n\nThis dataset only contains English in Wikipedia (parsed from the 20221101 XML dump).",
"### Data Instances\n\nEach instance is a section of a Wikipedia page. We also provide its page-level information, and associated information such as categories and media.\nThe 'source_id' can be mapped back to the instance in the original WIT instance.\nNotice that the WIT dataset is crawled from the earlier version of Wikipedia (2020-08-30).\nThe WIT dataset is mapped to the new dump by pure BM25 matching with Anserini.",
"### Intended Usage\n\n1. Text collection for Image-to-Text retrieval\n2. Language model pretraining\n3. Document classification",
"### Licensing Information\n\nCC BY-SA 4.0 international license\n\n\n\nTBA",
"### Acknowledgement\n\nThanks to:\nmwparserfromhell\nDatasets\nAnserini\n\nMore Information needed"
] |
2f1a77540906c5230aad77cb8b24f8e510024426 |
AI generated images that have relatively obvious issues
target tag: bad anatomy | trojblue/bad_ai | [
"license:gpl",
"region:us"
] | 2023-01-03T05:18:06+00:00 | {"license": "gpl"} | 2023-03-13T00:58:12+00:00 | [] | [] | TAGS
#license-gpl #region-us
|
AI generated images that have relatively obvious issues
target tag: bad anatomy | [] | [
"TAGS\n#license-gpl #region-us \n"
] |
be60029f2bc1489690db6eb64d92dffa30f7797c | boys <3 | gweg/boys | [
"region:us"
] | 2023-01-03T06:45:26+00:00 | {"pretty_name": "Game boys genus male "} | 2023-04-14T17:57:18+00:00 | [] | [] | TAGS
#region-us
| boys <3 | [] | [
"TAGS\n#region-us \n"
] |
1a5624ce04940147b55612539e157937b1e577d4 |
# Dataset Card for MNIST
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://yann.lecun.com/exdb/mnist/
- **Repository:**
- **Paper:** MNIST handwritten digit database by Yann LeCun, Corinna Cortes, and CJ Burges
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The MNIST dataset consists of 70,000 28x28 black-and-white images of handwritten digits extracted from two NIST databases. There are 60,000 images in the training dataset and 10,000 images in the validation dataset, one class per digit so a total of 10 classes, with 7,000 images (6,000 train images and 1,000 test images) per class.
Half of the images were drawn by Census Bureau employees and the other half by high school students (this split is evenly distributed in the training and testing sets).
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given image of a handwritten digit into one of 10 classes representing integer values from 0 to 9, inclusively. The leaderboard is available [here](https://paperswithcode.com/sota/image-classification-on-mnist).
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its label:
```
{
'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=28x28 at 0x276021F6DD8>,
'label': 5
}
```
### Data Fields
- `image`: A `PIL.Image.Image` object containing the 28x28 image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`
- `label`: an integer between 0 and 9 representing the digit.
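For illustration, a minimal example against the canonical `mnist` dataset on the Hub shows exactly these two fields:

```python
from datasets import load_dataset

dataset = load_dataset("mnist")
sample = dataset["train"][0]
print(sample["image"])  # PIL.Image.Image, mode L, size 28x28
print(sample["label"])  # integer in [0, 9]
```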
### Data Splits
The data is split into training and test set. All the images in the test set were drawn by different individuals than the images in the training set. The training set contains 60,000 images and the test set 10,000 images.
## Dataset Creation
### Curation Rationale
The MNIST database was created to provide a testbed for people wanting to try pattern recognition methods or machine learning algorithms while spending minimal effort on preprocessing and formatting. Images of the original dataset (NIST) were in two groups, one consisting of images drawn by Census Bureau employees and one consisting of images drawn by high school students. In NIST, the training set was built by grouping all the images of the Census Bureau employees, and the test set was built by grouping the images from the high school students.
The goal in building MNIST was to have a training and test set following the same distributions, so the training set contains 30,000 images drawn by Census Bureau employees and 30,000 images drawn by high school students, and the test set contains 5,000 images of each group. The curators took care to make sure all the images in the test set were drawn by different individuals than the images in the training set.
### Source Data
#### Initial Data Collection and Normalization
The original images from NIST were size normalized to fit a 20x20 pixel box while preserving their aspect ratio. The resulting images contain grey levels (i.e., pixels don't simply have a value of black and white, but a level of greyness from 0 to 255) as a result of the anti-aliasing technique used by the normalization algorithm. The images were then centered in a 28x28 image by computing the center of mass of the pixels, and translating the image so as to position this point at the center of the 28x28 field.
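An illustrative re-implementation of that centering step is sketched below (a sketch only, not the original preprocessing code):

```python
# Sketch of the centre-of-mass centering described above; not the original code.
import numpy as np

def center_in_28x28(digit_20x20: np.ndarray) -> np.ndarray:
    """Place a 20x20 grey-level patch in a 28x28 field, centred on its centre of mass."""
    canvas = np.zeros((28, 28), dtype=digit_20x20.dtype)
    total = digit_20x20.sum()
    if total == 0:                       # blank patch: centre geometrically
        cy = cx = (20 - 1) / 2
    else:
        ys, xs = np.indices(digit_20x20.shape)
        cy = (ys * digit_20x20).sum() / total
        cx = (xs * digit_20x20).sum() / total
    # Shift so the centre of mass lands in the middle of the 28x28 field.
    top = min(max(int(round(13.5 - cy)), 0), 28 - 20)
    left = min(max(int(round(13.5 - cx)), 0), 28 - 20)
    canvas[top:top + 20, left:left + 20] = digit_20x20
    return canvas
```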
#### Who are the source language producers?
Half of the source images were drawn by Census Bureau employees, half by high school students. According to the dataset curator, the images from the first group are more easily recognizable.
### Annotations
#### Annotation process
The images were not annotated after their creation: the image creators annotated their images with the corresponding label after drawing them.
#### Who are the annotators?
Same as the source data creators.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Chris Burges, Corinna Cortes and Yann LeCun
### Licensing Information
MIT Licence
### Citation Information
```
@article{lecun2010mnist,
title={MNIST handwritten digit database},
author={LeCun, Yann and Cortes, Corinna and Burges, CJ},
journal={ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist},
volume={2},
year={2010}
}
```
### Contributions
Thanks to [@sgugger](https://github.com/sgugger) for adding this dataset. | mqddb/test-dataset | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:extended|other-nist",
"language:en",
"license:mit",
"region:us"
] | 2023-01-03T06:54:16+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|other-nist"], "task_categories": ["image-classification"], "task_ids": ["multi-class-image-classification"], "paperswithcode_id": "mnist", "pretty_name": "MNIST", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1", "2": "2", "3": "3", "4": "4", "5": "5", "6": "6", "7": "7", "8": "8", "9": "9"}}}}], "config_name": "mnist", "splits": [{"name": "train", "num_bytes": 17470848, "num_examples": 60000}, {"name": "test", "num_bytes": 2916440, "num_examples": 10000}], "download_size": 11594722, "dataset_size": 20387288}} | 2023-01-03T07:08:03+00:00 | [] | [
"en"
] | TAGS
#task_categories-image-classification #task_ids-multi-class-image-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other-nist #language-English #license-mit #region-us
|
# Dataset Card for MNIST
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Repository:
- Paper: MNIST handwritten digit database by Yann LeCun, Corinna Cortes, and CJ Burges
- Leaderboard:
- Point of Contact:
### Dataset Summary
The MNIST dataset consists of 70,000 28x28 black-and-white images of handwritten digits extracted from two NIST databases. There are 60,000 images in the training dataset and 10,000 images in the validation dataset, one class per digit so a total of 10 classes, with 7,000 images (6,000 train images and 1,000 test images) per class.
Half of the images were drawn by Census Bureau employees and the other half by high school students (this split is evenly distributed in the training and testing sets).
### Supported Tasks and Leaderboards
- 'image-classification': The goal of this task is to classify a given image of a handwritten digit into one of 10 classes representing integer values from 0 to 9, inclusively. The leaderboard is available here.
### Languages
English
## Dataset Structure
### Data Instances
A data point comprises an image and its label:
### Data Fields
- 'image': A 'PIL.Image.Image' object containing the 28x28 image. Note that when accessing the image column: 'dataset[0]["image"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '"image"' column, *i.e.* 'dataset[0]["image"]' should always be preferred over 'dataset["image"][0]'
- 'label': an integer between 0 and 9 representing the digit.
### Data Splits
The data is split into training and test set. All the images in the test set were drawn by different individuals than the images in the training set. The training set contains 60,000 images and the test set 10,000 images.
## Dataset Creation
### Curation Rationale
The MNIST database was created to provide a testbed for people wanting to try pattern recognition methods or machine learning algorithms while spending minimal effort on preprocessing and formatting. Images of the original dataset (NIST) were in two groups, one consisting of images drawn by Census Bureau employees and one consisting of images drawn by high school students. In NIST, the training set was built by grouping all the images of the Census Bureau employees, and the test set was built by grouping the images from the high school students.
The goal in building MNIST was to have a training and test set following the same distributions, so the training set contains 30,000 images drawn by Census Bureau employees and 30,000 images drawn by high school students, and the test set contains 5,000 images of each group. The curators took care to make sure all the images in the test set were drawn by different individuals than the images in the training set.
### Source Data
#### Initial Data Collection and Normalization
The original images from NIST were size normalized to fit a 20x20 pixel box while preserving their aspect ratio. The resulting images contain grey levels (i.e., pixels don't simply have a value of black and white, but a level of greyness from 0 to 255) as a result of the anti-aliasing technique used by the normalization algorithm. The images were then centered in a 28x28 image by computing the center of mass of the pixels, and translating the image so as to position this point at the center of the 28x28 field.
#### Who are the source language producers?
Half of the source images were drawn by Census Bureau employees, half by high school students. According to the dataset curator, the images from the first group are more easily recognizable.
### Annotations
#### Annotation process
The images were not annotated after their creation: the image creators annotated their images with the corresponding label after drawing them.
#### Who are the annotators?
Same as the source data creators.
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
Chris Burges, Corinna Cortes and Yann LeCun
### Licensing Information
MIT Licence
### Contributions
Thanks to @sgugger for adding this dataset. | [
"# Dataset Card for MNIST",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: MNIST handwritten digit database by Yann LeCun, Corinna Cortes, and CJ Burges\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nThe MNIST dataset consists of 70,000 28x28 black-and-white images of handwritten digits extracted from two NIST databases. There are 60,000 images in the training dataset and 10,000 images in the validation dataset, one class per digit so a total of 10 classes, with 7,000 images (6,000 train images and 1,000 test images) per class.\nHalf of the image were drawn by Census Bureau employees and the other half by high school students (this split is evenly distributed in the training and testing sets).",
"### Supported Tasks and Leaderboards\n\n- 'image-classification': The goal of this task is to classify a given image of a handwritten digit into one of 10 classes representing integer values from 0 to 9, inclusively. The leaderboard is available here.",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances\n\nA data point comprises an image and its label:",
"### Data Fields\n\n- 'image': A 'PIL.Image.Image' object containing the 28x28 image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'\n- 'label': an integer between 0 and 9 representing the digit.",
"### Data Splits\n\nThe data is split into training and test set. All the images in the test set were drawn by different individuals than the images in the training set. The training set contains 60,000 images and the test set 10,000 images.",
"## Dataset Creation",
"### Curation Rationale\n\nThe MNIST database was created to provide a testbed for people wanting to try pattern recognition methods or machine learning algorithms while spending minimal efforts on preprocessing and formatting. Images of the original dataset (NIST) were in two groups, one consisting of images drawn by Census Bureau employees and one consisting of images drawn by high school students. In NIST, the training set was built by grouping all the images of the Census Bureau employees, and the test set was built by grouping the images form the high school students.\nThe goal in building MNIST was to have a training and test set following the same distributions, so the training set contains 30,000 images drawn by Census Bureau employees and 30,000 images drawn by high school students, and the test set contains 5,000 images of each group. The curators took care to make sure all the images in the test set were drawn by different individuals than the images in the training set.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nThe original images from NIST were size normalized to fit a 20x20 pixel box while preserving their aspect ratio. The resulting images contain grey levels (i.e., pixels don't simply have a value of black and white, but a level of greyness from 0 to 255) as a result of the anti-aliasing technique used by the normalization algorithm. The images were then centered in a 28x28 image by computing the center of mass of the pixels, and translating the image so as to position this point at the center of the 28x28 field.",
"#### Who are the source language producers?\n\nHalf of the source images were drawn by Census Bureau employees, half by high school students. According to the dataset curator, the images from the first group are more easily recognizable.",
"### Annotations",
"#### Annotation process\n\nThe images were not annotated after their creation: the image creators annotated their images with the corresponding label after drawing them.",
"#### Who are the annotators?\n\nSame as the source data creators.",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\nChris Burges, Corinna Cortes and Yann LeCun",
"### Licensing Information\n\nMIT Licence",
"### Contributions\n\nThanks to @sgugger for adding this dataset."
] | [
"TAGS\n#task_categories-image-classification #task_ids-multi-class-image-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other-nist #language-English #license-mit #region-us \n",
"# Dataset Card for MNIST",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: MNIST handwritten digit database by Yann LeCun, Corinna Cortes, and CJ Burges\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nThe MNIST dataset consists of 70,000 28x28 black-and-white images of handwritten digits extracted from two NIST databases. There are 60,000 images in the training dataset and 10,000 images in the validation dataset, one class per digit so a total of 10 classes, with 7,000 images (6,000 train images and 1,000 test images) per class.\nHalf of the image were drawn by Census Bureau employees and the other half by high school students (this split is evenly distributed in the training and testing sets).",
"### Supported Tasks and Leaderboards\n\n- 'image-classification': The goal of this task is to classify a given image of a handwritten digit into one of 10 classes representing integer values from 0 to 9, inclusively. The leaderboard is available here.",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances\n\nA data point comprises an image and its label:",
"### Data Fields\n\n- 'image': A 'PIL.Image.Image' object containing the 28x28 image. Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the '\"image\"' column, *i.e.* 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'\n- 'label': an integer between 0 and 9 representing the digit.",
"### Data Splits\n\nThe data is split into training and test set. All the images in the test set were drawn by different individuals than the images in the training set. The training set contains 60,000 images and the test set 10,000 images.",
"## Dataset Creation",
"### Curation Rationale\n\nThe MNIST database was created to provide a testbed for people wanting to try pattern recognition methods or machine learning algorithms while spending minimal efforts on preprocessing and formatting. Images of the original dataset (NIST) were in two groups, one consisting of images drawn by Census Bureau employees and one consisting of images drawn by high school students. In NIST, the training set was built by grouping all the images of the Census Bureau employees, and the test set was built by grouping the images form the high school students.\nThe goal in building MNIST was to have a training and test set following the same distributions, so the training set contains 30,000 images drawn by Census Bureau employees and 30,000 images drawn by high school students, and the test set contains 5,000 images of each group. The curators took care to make sure all the images in the test set were drawn by different individuals than the images in the training set.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nThe original images from NIST were size normalized to fit a 20x20 pixel box while preserving their aspect ratio. The resulting images contain grey levels (i.e., pixels don't simply have a value of black and white, but a level of greyness from 0 to 255) as a result of the anti-aliasing technique used by the normalization algorithm. The images were then centered in a 28x28 image by computing the center of mass of the pixels, and translating the image so as to position this point at the center of the 28x28 field.",
"#### Who are the source language producers?\n\nHalf of the source images were drawn by Census Bureau employees, half by high school students. According to the dataset curator, the images from the first group are more easily recognizable.",
"### Annotations",
"#### Annotation process\n\nThe images were not annotated after their creation: the image creators annotated their images with the corresponding label after drawing them.",
"#### Who are the annotators?\n\nSame as the source data creators.",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\nChris Burges, Corinna Cortes and Yann LeCun",
"### Licensing Information\n\nMIT Licence",
"### Contributions\n\nThanks to @sgugger for adding this dataset."
] |
4d6b88706fed4d253c7e73e23d36ec4a3570387f | # Dataset Card for "sv_corpora_parliament_processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | hellosimple/sv_corpora_parliament_processed | [
"region:us"
] | 2023-01-03T09:00:09+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 292351437, "num_examples": 1892723}], "download_size": 161955537, "dataset_size": 292351437}} | 2023-01-03T09:09:32+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "sv_corpora_parliament_processed"
More Information needed | [
"# Dataset Card for \"sv_corpora_parliament_processed\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"sv_corpora_parliament_processed\"\n\nMore Information needed"
] |
ac23016f92d5a58164e09b94825242d0422a3018 | # Dataset Card for "sidewalk-imagery2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | chiHang/sidewalk-imagery2 | [
"region:us"
] | 2023-01-03T09:21:27+00:00 | {"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 3138386.0, "num_examples": 10}], "download_size": 3139599, "dataset_size": 3138386.0}} | 2023-01-03T09:21:31+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "sidewalk-imagery2"
More Information needed | [
"# Dataset Card for \"sidewalk-imagery2\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"sidewalk-imagery2\"\n\nMore Information needed"
] |
296c87b74745521851107b27f37bc08585eab51f |
<h1>Afriqa Prebuilt Indices</h1>
Prebuilt Lucene Inverted Indices for preprocessed Afriqa Wikipedia Passages | masakhane/afriqa-prebuilt-sparse-indexes | [
"task_categories:text-retrieval",
"size_categories:100K<n<1M",
"language:en",
"language:fr",
"license:apache-2.0",
"region:us"
] | 2023-01-03T12:07:03+00:00 | {"language": ["en", "fr"], "license": "apache-2.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-retrieval"], "pretty_name": "Afriqa Wikipedia 100 Inverted Indices"} | 2023-03-31T16:29:39+00:00 | [] | [
"en",
"fr"
] | TAGS
#task_categories-text-retrieval #size_categories-100K<n<1M #language-English #language-French #license-apache-2.0 #region-us
|
<h1>Afriqa Prebuilt Indices</h1>
Prebuilt Lucene Inverted Indices for preprocessed Afriqa Wikipedia Passages | [] | [
"TAGS\n#task_categories-text-retrieval #size_categories-100K<n<1M #language-English #language-French #license-apache-2.0 #region-us \n"
] |
93ed975421b3189ca3af9ee7059d413144d8f694 | # AutoTrain Dataset for project: exact_data
## Dataset Description
This dataset has been automatically processed by AutoTrain for project exact_data.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "What is the maximum vendor id of vendor present in vendor table who has been issued a PO in 2021",
"target": "select max(t1.vendor_id) from RETAILBUYER_POHEADER as t2 inner join RETAILBUYER_VENDOR as t1 on t2.vendor_id = t1.vendor_id where YEAR(t2.po_issuedt) = 2021"
},
{
"text": "What are the product ids, descriptions and sum of quantities ordered for the products in purchase order line items",
"target": "select L.product_id, t2.product_desc, sum(t1.quantity) from RETAILBUYER_PRODUCT_SOURCE as t2 INNER JOIN RETAILBUYER_POLINEITEM as t1 ON t2.PRODUCT_ID = t1.PRODUCT_ID GROUP BY t1.PRODUCT_ID, t2.product_desc"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 25 |
| valid | 7 |
| Aman6917/autotrain-data-exact_data | [
"task_categories:summarization",
"region:us"
] | 2023-01-03T12:34:52+00:00 | {"task_categories": ["summarization"]} | 2023-01-03T12:42:34+00:00 | [] | [] | TAGS
#task_categories-summarization #region-us
| AutoTrain Dataset for project: exact\_data
==========================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project exact\_data.
### Languages
The BCP-47 code for the dataset's language is unk.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#task_categories-summarization #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
d12b7c878224d4773ecb04596ba9cf0e9a499be8 | # AutoTrain Dataset for project: tm3_model
## Dataset Description
This dataset has been automatically processed by AutoTrain for project tm3_model.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "List all PO headers with a valid vendor record in database",
"target": "select * from RETAILBUYER_POHEADER as t2 inner join RETAILBUYER_VENDOR as t1 on t2.VENDOR_ID = t1.VENDOR_ID"
},
{
"text": "List all details of PO headers which have a vendor in vendor table",
"target": "select * from RETAILBUYER_POHEADER as t2 inner join RETAILBUYER_VENDOR as t1 on t2.VENDOR_ID = t1.VENDOR_ID"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 49 |
| valid | 17 |
| Aman6917/autotrain-data-tm3_model | [
"task_categories:summarization",
"region:us"
] | 2023-01-03T12:47:41+00:00 | {"task_categories": ["summarization"]} | 2023-01-03T12:52:49+00:00 | [] | [] | TAGS
#task_categories-summarization #region-us
| AutoTrain Dataset for project: tm3\_model
=========================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project tm3\_model.
### Languages
The BCP-47 code for the dataset's language is unk.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#task_categories-summarization #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
3aafb46d72915a2378592678035999d8935e3bff | # Dataset Card for "medspeech3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arnepeine/medspeech3 | [
"region:us"
] | 2023-01-03T13:27:23+00:00 | {"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2290519.0, "num_examples": 24}], "download_size": 0, "dataset_size": 2290519.0}} | 2023-01-03T15:07:34+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "medspeech3"
More Information needed | [
"# Dataset Card for \"medspeech3\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"medspeech3\"\n\nMore Information needed"
] |
b9c7b76cbb634a3e7b59e7e055eeada03bf3b8dc | # Dataset Card for "test_dev"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | pyakymenko/test_dev | [
"region:us"
] | 2023-01-03T14:56:11+00:00 | {"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 57651.0, "num_examples": 2}], "download_size": 51674, "dataset_size": 57651.0}} | 2023-01-04T17:20:42+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "test_dev"
More Information needed | [
"# Dataset Card for \"test_dev\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"test_dev\"\n\nMore Information needed"
] |
f926107d762b1fb99e9b7f936541d1281d55d26d | # Dataset Card for "test-github-issues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ppl418/test-github-issues | [
"region:us"
] | 2023-01-03T15:01:53+00:00 | {"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "repository_url", "dtype": "string"}, {"name": "labels_url", "dtype": "string"}, {"name": "comments_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "number", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "user", "struct": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "labels", "list": [{"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "color", "dtype": "string"}, {"name": "default", "dtype": "bool"}, {"name": "description", "dtype": "string"}]}, {"name": "state", "dtype": "string"}, {"name": "locked", "dtype": "bool"}, {"name": "assignee", "struct": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "assignees", "list": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "milestone", "dtype": "null"}, {"name": "comments", "sequence": "string"}, {"name": "created_at", "dtype": "timestamp[s]"}, {"name": "updated_at", "dtype": "timestamp[s]"}, {"name": "closed_at", "dtype": "timestamp[s]"}, {"name": "author_association", "dtype": "string"}, {"name": 
"active_lock_reason", "dtype": "null"}, {"name": "draft", "dtype": "bool"}, {"name": "pull_request", "struct": [{"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "diff_url", "dtype": "string"}, {"name": "patch_url", "dtype": "string"}, {"name": "merged_at", "dtype": "timestamp[s]"}]}, {"name": "body", "dtype": "string"}, {"name": "reactions", "struct": [{"name": "url", "dtype": "string"}, {"name": "total_count", "dtype": "int64"}, {"name": "+1", "dtype": "int64"}, {"name": "-1", "dtype": "int64"}, {"name": "laugh", "dtype": "int64"}, {"name": "hooray", "dtype": "int64"}, {"name": "confused", "dtype": "int64"}, {"name": "heart", "dtype": "int64"}, {"name": "rocket", "dtype": "int64"}, {"name": "eyes", "dtype": "int64"}]}, {"name": "timeline_url", "dtype": "string"}, {"name": "performed_via_github_app", "dtype": "null"}, {"name": "state_reason", "dtype": "string"}, {"name": "is_pull_request", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 410377, "num_examples": 100}], "download_size": 183986, "dataset_size": 410377}} | 2023-01-03T15:01:57+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "test-github-issues"
More Information needed | [
"# Dataset Card for \"test-github-issues\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"test-github-issues\"\n\nMore Information needed"
] |
741e0c378ea81209c16803672db9cf5c51d4093a | # Dataset Card for "wikipedia_id_20230101"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cahya/wikipedia_id_20230101 | [
"region:us"
] | 2023-01-03T16:04:05+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1051737365, "num_examples": 634559}], "download_size": 544132473, "dataset_size": 1051737365}} | 2023-01-03T16:04:27+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "wikipedia_id_20230101"
More Information needed | [
"# Dataset Card for \"wikipedia_id_20230101\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"wikipedia_id_20230101\"\n\nMore Information needed"
] |
5975141ea711bdf75d6d528989d3169863dc1239 | # Dataset Card for "beautiful_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/beautiful_data | [
"region:us"
] | 2023-01-03T16:14:30+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "width", "dtype": "int64"}, {"name": "height", "dtype": "int64"}, {"name": "pclean", "dtype": "float64"}, {"name": "dataset_identifier", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2130074097.4420476, "num_examples": 2795}, {"name": "test", "num_bytes": 237013611.55795234, "num_examples": 311}], "download_size": 2367106825, "dataset_size": 2367087709.0}} | 2023-01-11T14:33:37+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "beautiful_data"
More Information needed | [
"# Dataset Card for \"beautiful_data\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"beautiful_data\"\n\nMore Information needed"
] |
0b1d3e63ee735a36303025f197168de134dc530e | # AutoTrain Dataset for project: copcar
## Dataset Description
This dataset has been automatically processed by AutoTrain for project copcar.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<246x360 RGB PIL image>",
"target": 0
},
{
"image": "<128x128 RGB PIL image>",
"target": 1
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['VehiclesNepal1', 'police_car'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 992 |
| valid | 248 |
### Citation Information
```
@misc {leroy_t_brenneman_2023,
author = { {leroy T brenneman} },
title = { autotrain-data-copcar (Revision ebeca60) },
year = 2023,
url = { https://huggingface.co/datasets/gatman666/autotrain-data-copcar },
doi = { 10.57967/hf/0243 },
publisher = { Hugging Face }
}
```
| gatman666/autotrain-data-copcar | [
"task_categories:image-classification",
"doi:10.57967/hf/0243",
"region:us"
] | 2023-01-03T17:57:15+00:00 | {"task_categories": ["image-classification"]} | 2023-03-01T21:55:26+00:00 | [] | [] | TAGS
#task_categories-image-classification #doi-10.57967/hf/0243 #region-us
| AutoTrain Dataset for project: copcar
=====================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project copcar.
### Languages
The BCP-47 code for the dataset's language is unk.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#task_categories-image-classification #doi-10.57967/hf/0243 #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
0ef727f48d5f8bc2133a1b21e522458bd6f9e06f | # Dataset Card for "sample_dataset_ts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | BhavyaMuni/sample_dataset_ts | [
"region:us"
] | 2023-01-03T18:07:28+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 633903, "num_examples": 3445}], "download_size": 256343, "dataset_size": 633903}} | 2023-01-03T18:07:33+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "sample_dataset_ts"
More Information needed | [
"# Dataset Card for \"sample_dataset_ts\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"sample_dataset_ts\"\n\nMore Information needed"
] |
4287dd5e60f84797248b5a0723582b82bae7a5bd | # Dataset Card for "dfl_classification"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ManuD/dfl_classification | [
"region:us"
] | 2023-01-03T18:29:13+00:00 | {"dataset_info": {"features": [{"name": "file", "dtype": "string"}, {"name": "video_id", "dtype": "string"}, {"name": "time", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "label", "dtype": "int32"}, {"name": "label_str", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8428385682.6, "num_examples": 244497}], "download_size": 8405174528, "dataset_size": 8428385682.6}} | 2023-01-05T22:21:07+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "dfl_classification"
More Information needed | [
"# Dataset Card for \"dfl_classification\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"dfl_classification\"\n\nMore Information needed"
] |
e858c85e98b8eee9ea6bb9a9911c2a4d407e38cb | # Dataset Card for "helloworld"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Taeyoung/helloworld | [
"region:us"
] | 2023-01-03T18:46:38+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 2080649.0, "num_examples": 6}], "download_size": 0, "dataset_size": 2080649.0}} | 2023-01-03T19:16:04+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "helloworld"
More Information needed | [
"# Dataset Card for \"helloworld\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"helloworld\"\n\nMore Information needed"
] |
b13f7ed769444c5b010034ed8ac5f0f0b6c87af9 |
# Spanish Books
## Dataset Description
- **Total number of books:** 87,967
### Dataset Summary
Dataset of books in Spanish crawled from the web and from torrents.
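Given the size of the corpus (the dataset metadata reports a download of roughly 25 GB), streaming is likely the most practical way to inspect it. The snippet below is a minimal sketch, not part of the original release: it assumes the standard `datasets` library, and the `text` field name is taken from the dataset metadata.

```python
# Minimal sketch (illustrative only): stream the corpus instead of downloading
# the full ~25 GB archive up front.
from datasets import load_dataset

books = load_dataset("jorgeortizfuentes/spanish_books", split="train", streaming=True)

for i, book in enumerate(books):
    # "text" is the only feature declared in the dataset metadata.
    print(book["text"][:200])
    if i == 2:  # stop after a few examples
        break
```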
### Preprocessing
Preprocessing performed by [spanish_nlp](https://github.com/jorgeortizfuentes/spanish_nlp).
### Licensing Information
The dataset is available under the [Creative Commons Attribution-ShareAlike License (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/).
Some books may be subject to copyright. Use for academic purposes only.
### Citation Information
```
@misc{ortiz2022esbooks,
title={Crawled Spanish Books},
author={Jorge Ortiz-Fuentes},
year={2022},
publisher= {Hugging Face}
}
```
| jorgeortizfuentes/spanish_books | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:es",
"license:cc-by-sa-4.0",
"region:us"
] | 2023-01-03T20:50:24+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["es"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "pretty_name": "SpanishBooks", "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 40822979419, "num_examples": 87967}], "download_size": 25042031556, "dataset_size": 40822979419}} | 2023-01-03T21:21:44+00:00 | [] | [
"es"
] | TAGS
#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Spanish #license-cc-by-sa-4.0 #region-us
|
# Spanish Books
## Dataset Description
- Total number of books: 87,967
### Dataset Summary
Dataset of books in Spanish crawled from the web and from torrents.
### Preprocessing
Preprocessing performed by spanish_nlp.
### Licensing Information
The dataset is available under the Creative Commons Attribution-ShareAlike License (CC BY-SA 4.0).
Some books may be subject to copyright. Use for academic purposes only.
| [
"# Spanish Books",
"## Dataset Description\n\n- Total of books: 87,967",
"### Dataset Summary\n\n Dataset of books in Spanish crawled from web and torrents.",
"### Preprocessing\n\nPreprocessing performed by spanish_nlp.",
"### Licensing Information\n\nThe dataset is available under the Creative Commons Attribution-ShareAlike License (CC BY-SA 4.0).\n\nSome books may be subject to copyright. Use for academic purposes only."
] | [
"TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Spanish #license-cc-by-sa-4.0 #region-us \n",
"# Spanish Books",
"## Dataset Description\n\n- Total of books: 87,967",
"### Dataset Summary\n\n Dataset of books in Spanish crawled from web and torrents.",
"### Preprocessing\n\nPreprocessing performed by spanish_nlp.",
"### Licensing Information\n\nThe dataset is available under the Creative Commons Attribution-ShareAlike License (CC BY-SA 4.0).\n\nSome books may be subject to copyright. Use for academic purposes only."
] |
d93f31174df641b5d31508e1e3b0708460f18fcb | # Dataset Card for "kratos"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | matteopilotto/kratos | [
"region:us"
] | 2023-01-03T21:33:19+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 10082811.0, "num_examples": 10}], "download_size": 10084661, "dataset_size": 10082811.0}} | 2023-01-04T07:08:38+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "kratos"
More Information needed | [
"# Dataset Card for \"kratos\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"kratos\"\n\nMore Information needed"
] |
22679caa9c01eff73bd1f02334c28112d91f4079 | # Dataset Card for "OCT_balanced"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | MauroLeidi/OCT_balanced | [
"region:us"
] | 2023-01-03T21:34:57+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "DRUSEN", "1": "NORMAL"}}}}], "splits": [{"name": "train", "num_bytes": 1037539349.736, "num_examples": 17232}, {"name": "test", "num_bytes": 21771538.0, "num_examples": 500}], "download_size": 1080333714, "dataset_size": 1059310887.736}} | 2023-01-03T22:00:22+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "OCT_balanced"
More Information needed | [
"# Dataset Card for \"OCT_balanced\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"OCT_balanced\"\n\nMore Information needed"
] |
cc96c26810a89d329fdaabeef7b3ad266f73da3e | # AutoTrain Dataset for project: police-identifier
## Dataset Description
This dataset has been automatically processed by AutoTrain for project police-identifier.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<246x360 RGB PIL image>",
"target": 0
},
{
"image": "<128x128 RGB PIL image>",
"target": 1
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['VehiclesNepal1', 'police_car'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 992 |
| valid | 248 |
| gatman666/autotrain-data-police-identifier | [
"task_categories:image-classification",
"region:us"
] | 2023-01-03T22:20:20+00:00 | {"task_categories": ["image-classification"]} | 2023-01-03T22:48:29+00:00 | [] | [] | TAGS
#task_categories-image-classification #region-us
| AutoTrain Dataset for project: police-identifier
================================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project police-identifier.
### Languages
The BCP-47 code for the dataset's language is unk.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#task_categories-image-classification #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
47a44c793eeed725649bcd39cc7a6ec986b4904d | # Dataset Card for "GMTK-Transcripts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Description
Transcripts of [GMTK's channel](https://www.youtube.com/channel/UCqJ-Xo29CKyLTjn6z2XwYAw), generated by Whisper's large model | taesiri/GMTK-Transcripts | [
"region:us"
] | 2023-01-04T01:56:30+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2490682, "num_examples": 36120}], "download_size": 1595636, "dataset_size": 2490682}} | 2023-01-04T02:00:29+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "GMTK-Transcripts"
More Information needed
## Description
Transcripts of GMTK's channel, generated by Whisper's large model | [
"# Dataset Card for \"GMTK-Transcripts\"\n\nMore Information needed",
"## Description\n\nTranscripts generated by Whisper's large model of GMTK's channel"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"GMTK-Transcripts\"\n\nMore Information needed",
"## Description\n\nTranscripts generated by Whisper's large model of GMTK's channel"
] |
cc8df0f224cb1a9353a92d959b095a1a1c233068 | sdfs | Sushmit/diffMed | [
"region:us"
] | 2023-01-04T02:26:17+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2346045522.71, "num_examples": 89395}], "download_size": 2318135039, "dataset_size": 2346045522.71}} | 2023-03-13T11:59:35+00:00 | [] | [] | TAGS
#region-us
| sdfs | [] | [
"TAGS\n#region-us \n"
] |
585ba19c42a7c3a56d703678d289f449de4e85eb |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Hacker News stories up to 2015, together with their comments. Collected from the Google BigQuery open dataset. We didn't do any pre-processing except removing HTML tags.
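Since the only preprocessing mentioned above is HTML-tag removal, the snippet below sketches one way such cleanup could be done with the Python standard library. It is illustrative only and is not the actual script used to build this dataset.

```python
# Illustrative sketch of HTML-tag stripping (not the script used to build this dataset).
from html.parser import HTMLParser


class TagStripper(HTMLParser):
    """Keeps only text content; tags are dropped and character references decoded."""

    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)


def strip_tags(raw_html: str) -> str:
    parser = TagStripper()
    parser.feed(raw_html)
    parser.close()
    return "".join(parser.chunks)


print(strip_tags("I <i>really</i> like this &amp; that"))  # -> I really like this & that
```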
### Supported Tasks and Leaderboards
Comment Generation; News analysis with comments; Other comment-based NLP tasks.
### Languages
English
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | Linkseed/hacker_news_with_comments | [
"task_categories:text-generation",
"task_ids:language-modeling",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"language:en",
"license:afl-3.0",
"CommentGenerate",
"region:us"
] | 2023-01-04T06:19:34+00:00 | {"annotations_creators": [], "language_creators": ["found"], "language": ["en"], "license": ["afl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": [], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "hacker_news_with_comments ", "tags": ["CommentGenerate"]} | 2023-01-06T05:44:10+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-generation #task_ids-language-modeling #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #language-English #license-afl-3.0 #CommentGenerate #region-us
|
# Dataset Card for [Dataset Name]
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
Hacker News stories up to 2015, together with their comments. Collected from the Google BigQuery open dataset. We didn't do any pre-processing except removing HTML tags.
### Supported Tasks and Leaderboards
Comment Generation; News analysis with comments; Other comment-based NLP tasks.
### Languages
English
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @github-username for adding this dataset. | [
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nHacker news until 2015 with comments. Collect from Google BigQuery open dataset. We didn't do any pre-processing except remove HTML tags.",
"### Supported Tasks and Leaderboards\n\nComment Generation; News analysis with comments; Other comment-based NLP tasks.",
"### Languages\n\nEnglish",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] | [
"TAGS\n#task_categories-text-generation #task_ids-language-modeling #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #language-English #license-afl-3.0 #CommentGenerate #region-us \n",
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nHacker news until 2015 with comments. Collect from Google BigQuery open dataset. We didn't do any pre-processing except remove HTML tags.",
"### Supported Tasks and Leaderboards\n\nComment Generation; News analysis with comments; Other comment-based NLP tasks.",
"### Languages\n\nEnglish",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] |
ed469c08d41bc8f06be59d054c92f45fa8aba976 | # Dataset Card for "alarm_prediction"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | hamzagorgulu/alarm_prediction | [
"region:us"
] | 2023-01-04T06:36:58+00:00 | {"dataset_info": {"features": [{"name": "alarms", "dtype": "string"}, {"name": "sequence_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 590075.731, "num_examples": 1271}, {"name": "validation", "num_bytes": 65925.062, "num_examples": 142}], "download_size": 191168, "dataset_size": 656000.7930000001}} | 2023-01-04T09:56:30+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "alarm_prediction"
More Information needed | [
"# Dataset Card for \"alarm_prediction\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"alarm_prediction\"\n\nMore Information needed"
] |
2e8ed5d0c2dbceff3f65d2b0c018ca18f9d9783a | # Dataset Card for "psychiq2-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | derenrich/psychiq2-dataset | [
"region:us"
] | 2023-01-04T06:46:18+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "labels", "dtype": {"class_label": {"names": {"0": "P31-Q5", "1": "P31-Q16521", "2": "P31-Q4167410", "3": "P31-Q11424", "4": "P31-Q482994", "5": "P31-Q13406463", "6": "P31-Q532", "7": "P31-Q27020041", "8": "P31-Q486972", "9": "P31-Q22808320", "10": "P31-Q4830453", "11": "P31-Q101352", "12": "P31-Q7725634", "13": "P31-Q134556", "14": "P31-Q215380", "15": "P31-Q55488", "16": "P31-Q17343829", "17": "P31-Q5398426", "18": "P31-Q484170", "19": "P31-Q105543609", "20": "P31-Q4022", "21": "P31-Q43229", "22": "P31-Q18340514", "23": "P31-Q7889", "24": "P31-Q8502", "25": "P31-Q34442", "26": "P31-Q26895936", "27": "P31-Q3558970", "28": "P31-Q16510064", "29": "P31-Q14350", "30": "P31-Q476028", "31": "P31-Q56436498", "32": "P31-Q16970", "33": "P31-Q11173", "34": "P31-Q9826", "35": "P31-Q41176", "36": "P31-Q23038290", "37": "P31-Q11446", "38": "P31-Q46190676", "39": "P31-Q26887310", "40": "P31-Q1539532", "41": "P31-Q5084", "42": "P31-Q47461344", "43": "P31-Q7278", "44": "P31-Q176799", "45": "P31-Q3947", "46": "P31-Q3914", "47": "P31-Q7187", "48": "P31-Q1248784", "49": "P279-Q20747295", "50": "P31-Q3957", "51": "P31-Q40231", "52": "P31-Q1115575", "53": "P31-Q23397", "54": "P31-Q498162", "55": "P31-Q23442", "56": "P31-Q26894053", "57": "P31-Q928830", "58": "P31-Q15416", "59": "P31-Q123705", "60": "P31-Q3918", "61": "P31-Q1093829", "62": "P31-Q21191270", "63": "P31-Q3257686", "64": "P31-Q15243209", "65": "P31-Q47345468", "66": "P31-Q6881511", "67": "P31-Q178561", "68": "P31-Q3464665", "69": "P31-Q839954", "70": "P31-Q41298", "71": "P31-Q33506", "72": "P31-Q163740", "73": "P31-Q747074", "74": "P31-Q15632617", "75": "P31-Q11032", "76": "P31-Q3305213", "77": "P31-Q262166", "78": "P31-Q327333", "79": "P31-Q618779", "80": "P31-Q34770", "81": "P31-Q4504495", "82": "P31-Q891723", "83": "P31-Q169930", "84": "P31-Q46135307", "85": "P31-Q5633421", "86": "P31-Q7366", "87": "P31-Q98645843", "88": "P31-Q4164871", "89": "P31-Q783794", "90": "P31-Q15127012", "91": "P31-Q2334719", "92": "P31-Q41710", "93": "P31-Q74817647", "94": "P31-Q3231690", "95": "P31-Q22808404", "96": "P31-Q18536594", "97": "P31-Q398141", "98": "P31-Q506240", "99": "P31-Q515", "100": "P31-Q726", "101": "P31-Q16917", "102": "P31-Q188509", "103": "P31-Q2074737", "104": "P31-Q159334", "105": "P31-Q24862", "106": "P31-Q1288568", "107": "P31-Q736917", "108": "P31-Q35127", "109": "P31-Q1002697", "110": "P31-Q811979", "111": "P31-Q728937", "112": "P31-Q17524420", "113": "P31-Q12308941", "114": "P31-Q22808403", "115": "P31-Q67015883", "116": "P31-Q7397", "117": "P31-Q18127", "118": "P31-Q22698", "119": "P31-Q1076486", "120": "P31-Q1616075", "121": "P31-Q58483083", "122": "P31-Q473972", "123": "P31-Q112193867", "124": "P31-Q483110", "125": "P31-Q51031626", "126": "P31-Q26213387", "127": "P31-Q94993988", "128": "P31-Q27686", "129": "P279-Q11436", "130": "P31-Q15773317", "131": "P31-Q46970", "132": "P31-Q310890", "133": "P31-Q13406554", "134": "P31-Q38033430", "135": "P31-Q46831", "136": "P31-Q15221623", "137": "P31-Q19832486", "138": "P31-Q180684", "139": "P31-Q12323", "140": "P31-Q26896697", "141": "P31-Q847017", "142": "P31-Q4671277", "143": "P31-Q56061", "144": "P31-Q523", "145": "P31-Q192287", "146": "P31-Q23413", "147": "P31-Q152450", "148": "P31-Q273057", "149": "P31-Q571", "150": "P31-Q537127", "151": "P31-Q21198342", "152": "P31-Q341", "153": "P31-Q3184121", "154": "P31-Q1530705", "155": "P31-Q24354", "156": "P31-Q11315", "157": "P31-Q13393265", 
"158": "P31-Q3863", "159": "P31-Q860861", "160": "P31-Q1080794", "161": "P31-Q1057954", "162": "P31-Q751708", "163": "P31-Q79007", "164": "P31-Q1656682", "165": "P31-Q623109", "166": "P31-Q192611", "167": "P31-Q13027888", "168": "P31-Q1114461", "169": "P31-Q1194951", "170": "P31-Q11303", "171": "P31-Q4498974", "172": "P31-Q39614", "173": "P31-Q15275719", "174": "P31-Q19692072", "175": "P31-Q15261477", "176": "P31-Q1549591", "177": "P31-Q11879590", "178": "P31-Q11436", "179": "P31-Q188451", "180": "P31-Q35666", "181": "P31-Q12140", "182": "P31-Q15630849", "183": "P31-Q51049922", "184": "P31-Q737498", "185": "P31-Q39594", "186": "P31-Q3001412", "187": "P31-Q47521", "188": "P31-Q15623926", "189": "P31-Q1190554", "190": "P31-Q131681", "191": "P31-Q74047", "192": "P31-Q87167", "193": "P31-Q207694", "194": "P31-Q735428", "195": "P31-Q2001305", "196": "P31-Q70208", "197": "P31-Q1307276", "198": "P31-Q31855", "199": "P31-Q2085381", "200": "P31-Q559026", "201": "P31-Q1079023", "202": "P31-Q1244442", "203": "P31-Q15125752", "204": "P31-Q5773747", "205": "P31-Q2514025", "206": "P31-Q27020779", "207": "P31-Q65770283", "208": "P31-Q618123", "209": "P31-Q12136", "210": "P31-Q253019", "211": "P31-Q842402", "212": "P31-Q18608583", "213": "P31-Q112826905", "214": "P31-Q2175765", "215": "P31-Q355304", "216": "P31-Q708676", "217": "P31-Q12280", "218": "P31-Q1573906", "219": "P31-Q178790", "220": "P31-Q877358", "221": "P279-Q2095", "222": "P31-Q189004", "223": "P31-Q15056995", "224": "P31-Q12973014", "225": "P31-Q54050", "226": "P31-Q3024240", "227": "P31-Q50393057", "228": "P31-Q174736", "229": "P31-Q4671329", "230": "P31-Q868557", "231": "P31-Q355567", "232": "P31-Q32815", "233": "P31-Q39816", "234": "P31-Q23012917", "235": "P31-Q15238777", "236": "P31-Q5153359", "237": "P31-Q41253", "238": "P31-Q17517379", "239": "P31-Q104635718", "240": "P31-Q21869758", "241": "P31-Q46195901", "242": "P31-Q11707", "243": "P31-Q2485448", "244": "P31-Q39715", "245": "P31-Q1852859", "246": "P31-Q667509", "247": "P31-Q27971968", "248": "P31-Q1555508", "249": "P31-Q2065736", "250": "P31-Q4989906", "251": "P31-Q695850", "252": "P31-Q18558301", "253": "P31-Q18759100", "254": "P31-Q811704", "255": "P31-Q40357", "256": "P31-Q65943", "257": "P31-Q655686", "258": "P31-Q431289", "259": "P31-Q186117", "260": "P31-Q2198484", "261": "P31-Q10929058", "262": "P31-Q63952888", "263": "P31-Q15661340", "264": "P31-Q62447", "265": "P31-Q786820", "266": "P31-Q46169", "267": "P31-Q3146899", "268": "P31-Q42744322", "269": "P31-Q811430", "270": "P31-Q62049", "271": "P31-Q4224624", "272": "P31-Q820477", "273": "P31-Q34038", "274": "P31-Q18524218", "275": "P31-Q5185279", "276": "P31-Q42211429", "277": "P31-Q28640", "278": "P31-Q15642541", "279": "P31-Q2179958", "280": "P31-Q104146934", "281": "P31-Q1081138", "282": "P31-Q17350442", "283": "P31-Q11670533", "284": "P31-Q12859788", "285": "P31-Q82794", "286": "P31-Q1154710", "287": "P31-Q202444", "288": "P31-Q744913", "289": "P31-Q476068", "290": "P31-Q55818", "291": "P31-Q3658341", "292": "P31-Q12089225", "293": "P31-Q22988604", "294": "P31-Q577", "295": "P31-Q16024164", "296": "P31-Q9035798", "297": "P279-Q112193769", "298": "P31-Q23002039", "299": "P31-Q210167", "300": "P31-Q44613", "301": "P31-Q8054", "302": "P31-Q167346", "303": "P31-Q2990963", "304": "P31-Q62391930", "305": "P31-Q21672098", "306": "P31-Q15056993", "307": "P31-Q8436", "308": "P31-Q188055", "309": "P31-Q1952852", "310": "P31-Q2811", "311": "P31-Q17205621", "312": "P31-Q133056", "313": "P31-Q45400320", "314": "P31-Q1134686", "315": 
"P31-Q131569", "316": "P31-Q34763", "317": "P31-Q220505", "318": "P31-Q24764", "319": "P31-Q1343246", "320": "P31-Q23002054", "321": "P31-Q2088357", "322": "P31-Q29791211", "323": "P31-Q641226", "324": "P31-Q2023000", "325": "P31-Q174782", "326": "P31-Q17198545", "327": "P31-Q50846468", "328": "P31-Q428661", "329": "P31-Q7075", "330": "P31-Q1366722", "331": "P31-Q1259759", "332": "P31-Q5741069", "333": "P31-Q249556", "334": "P31-Q107357104", "335": "P31-Q17317604", "336": "P31-Q16560", "337": "P31-Q2385804", "338": "P31-Q902814", "339": "P31-Q106179098", "340": "P279-Q20650761", "341": "P31-Q64037785", "342": "P31-Q4663385", "343": "P31-Q679165", "344": "P31-Q105774620", "345": "P31-Q2989398", "346": "P31-Q15284", "347": "P31-Q18663566", "348": "P31-Q17198620", "349": "P31-Q11862829", "350": "P31-Q570116", "351": "P31-Q1289426", "352": "P31-Q2922711", "353": "P31-Q581714", "354": "P31-Q43109", "355": "P31-Q277759", "356": "P31-Q131436", "357": "P31-Q35823051", "358": "P31-Q2977", "359": "P31-Q10438042", "360": "P31-Q2247863", "361": "P31-Q15711870", "362": "P31-Q132241", "363": "P31-Q1363599", "364": "P31-Q167270", "365": "P279-Q112826975", "366": "P31-Q47154513", "367": "P31-Q929833", "368": "P31-Q3192808", "369": "P31-Q1762059", "370": "P31-Q56557504", "371": "P31-Q15911738", "372": "P31-Q16334295", "373": "P31-Q2122052", "374": "P31-Q773668", "375": "P31-Q13417114", "376": "P31-Q7930989", "377": "P31-Q9212979", "378": "P279-Q1420", "379": "P31-Q179700", "380": "P31-Q179049", "381": "P31-Q1785071", "382": "P31-Q22667", "383": "P31-Q7944", "384": "P31-Q2039348", "385": "P31-Q1589009", "386": "P31-Q64578911", "387": "P31-Q1110794", "388": "P31-Q2990946", "389": "P31-Q82799", "390": "P31-Q6270791", "391": "P31-Q28564", "392": "P31-Q12813115", "393": "P31-Q1802801", "394": "P31-Q25295", "395": "P31-Q17156793", "396": "P31-Q645883", "397": "P31-Q269770", "398": "P31-Q67206691", "399": "P31-Q17205774", "400": "P31-Q14645593", "401": "P31-Q644371", "402": "P31-Q15726209", "403": "P31-Q2590631", "404": "P31-Q56242063", "405": "P31-Q2151232", "406": "P31-Q18691601", "407": "P31-Q17318027", "408": "P31-Q15911314", "409": "P31-Q6979593", "410": "P31-Q22687", "411": "P31-Q42998", "412": "P279-Q8054", "413": "P31-Q986065", "414": "P31-Q210272", "415": "P31-Q132821", "416": "P31-Q1021645", "417": "P31-Q294414", "418": "P31-Q271669", "419": "P31-Q17299750", "420": "P31-Q1348589", "421": "P31-Q1445650", "422": "P31-Q6784672", "423": "P31-Q7058673", "424": "P31-Q35509", "425": "P31-Q29154515", "426": "P31-Q1065118", "427": "P31-Q2592651", "428": "P279-Q11173", "429": "P31-Q1663017", "430": "P31-Q17201685", "431": "P31-Q157031", "432": "P31-Q151885", "433": "P31-Q22746", "434": "P31-Q417841", "435": "P31-Q1402592", "436": "P31-Q26132862", "437": "P279-Q483373", "438": "P31-Q95074", "439": "P31-Q1131296", "440": "P31-Q245016", "441": "P31-Q14406742", "442": "P31-Q48204", "443": "P31-Q24397514", "444": "P31-Q1371849", "445": "P31-Q33384", "446": "P31-Q9143", "447": "P31-Q777120", "448": "P31-Q659103", "449": "P31-Q34627", "450": "P31-Q11353", "451": "P31-Q43501", "452": "P279-Q407479", "453": "P31-Q742421", "454": "P31-Q2992826", "455": "P31-Q23866334", "456": "P279-Q22645", "457": "P31-Q2679157", "458": "P31-Q10648343", "459": "P31-Q90834785", "460": "P31-Q3812392", "461": "P31-Q169534", "462": "P31-Q358", "463": "P31-Q40080", "464": "P31-Q106071004", "465": "P31-Q19723451", "466": "P31-Q56219051", "467": "P31-Q2087181", "468": "P31-Q28328984", "469": "P31-Q79913", "470": "P31-Q1195098", "471": 
"P31-Q1004", "472": "P279-Q11053", "473": "P31-Q19571328", "474": "P31-Q212198", "475": "P31-Q2618461", "476": "P31-Q7216840", "477": "P31-Q475061", "478": "P31-Q29168811", "479": "P31-Q875538", "480": "P31-Q191992", "481": "P31-Q751876", "482": "P31-Q207326", "483": "P31-Q1088552", "484": "P31-Q1077097", "485": "P31-Q24034552", "486": "P31-Q838795", "487": "P31-Q73364223", "488": "P31-Q2006279", "489": "P31-Q17376093", "490": "P31-Q879146", "491": "P31-Q15280243", "492": "P31-Q1500350", "493": "P31-Q47018478", "494": "P31-Q198", "495": "P31-Q15991303", "496": "P31-Q2367225", "497": "P31-Q2630741", "498": "P31-Q113681859", "499": "P31-Q25550691", "500": "P31-Q15773347", "501": "P31-Q3917681", "502": "P31-Q160742", "503": "P31-Q12284", "504": "P31-Q500834", "505": "P31-Q4421", "506": "P31-Q15944511", "507": "P31-Q4387609", "508": "P31-Q3565868", "509": "P31-Q740445", "510": "P31-Q131734", "511": "P31-Q17205735", "512": "P31-Q6617741", "513": "P31-Q2418495", "514": "P31-Q9651979", "515": "P31-Q2555896", "516": "P31-Q1529096", "517": "P31-Q378427", "518": "P31-Q52371", "519": "P31-Q494829", "520": "P31-Q44539", "521": "P31-Q194195", "522": "P31-Q193622", "523": "P31-Q3297186", "524": "P31-Q194408", "525": "P31-Q484652", "526": "P31-Q740752", "527": "P31-Q148837", "528": "P31-Q18340550", "529": "P31-Q3504085", "530": "P31-Q1366112", "531": "P31-Q21278897", "532": "P31-Q5393308", "533": "P31-Q2488", "534": "P31-Q1210334", "535": "P31-Q44782", "536": "P31-Q1149652", "537": "P31-Q6558431", "538": "P31-Q18142", "539": "P31-Q15092344", "540": "P31-Q11448906", "541": "P31-Q1639634", "542": "P279-Q13219666", "543": "P31-Q1497375", "544": "P31-Q108325", "545": "P31-Q38723", "546": "P31-Q589282", "547": "P31-Q15079663", "548": "P279-Q19842071", "549": "P31-Q15142894", "550": "P31-Q690840", "551": "P31-Q543654", "552": "P31-Q98433835", "553": "P31-Q131596", "554": "P31-Q16830604", "555": "P31-Q80096233", "556": "P31-Q14795564", "557": "P31-Q188913", "558": "P279-Q785745", "559": "P31-Q17051044", "560": "P31-Q2989400", "561": "P31-Q55491", "562": "P31-Q27787439", "563": "P31-Q685309", "564": "P31-Q18536800", "565": "P31-Q1279564", "566": "P31-Q732717", "567": "P31-Q229390", "568": "P31-Q1825472", "569": "P31-Q44559", "570": "P31-Q422211", "571": "P31-Q3677932", "572": "P31-Q185113", "573": "P31-Q130003", "574": "P31-Q21070568", "575": "P31-Q26884324", "576": "P31-Q29964144", "577": "P31-Q33146843", "578": "P31-Q161705", "579": "P31-Q55788864", "580": "P31-Q84491920", "581": "P31-Q1336920", "582": "P279-Q17517", "583": "P31-Q226730", "584": "P31-Q494230", "585": "P31-Q1788716", "586": "P31-Q431603", "587": "P31-Q2772772", "588": "P31-Q1569167", "589": "P31-Q120560", "590": "P31-Q61220733", "591": "P31-Q1664720", "592": "P31-Q12737077", "593": "P31-Q192350", "594": "P31-Q958314", "595": "P31-Q17166756", "596": "P31-Q1969448", "597": "P31-Q7210356", "598": "P31-Q1040689", "599": "P31-Q1497364", "600": "P31-Q46622", "601": "P31-Q1758856", "602": "P31-Q273120", "603": "P31-Q2755753", "604": "P31-Q1643932", "605": "P31-Q47481352", "606": "P31-Q19860854", "607": "P31-Q899409", "608": "P31-Q902104", "609": "P31-Q955824", "610": "P31-Q838948", "611": "P31-Q879050", "612": "P31-Q2785216", "613": "P31-Q19844914", "614": "P31-Q8719053", "615": "P31-Q2775236", "616": "P31-Q124757", "617": "P31-Q15078955", "618": "P31-Q20741022", "619": "P31-Q19953632", "620": "P31-Q71631512", "621": "P31-Q113813711", "622": "P31-Q149621", "623": "P31-Q820655", "624": "P31-Q9135", "625": "P31-Q562061", "626": "P31-Q613142", "627": 
"P31-Q83620", "628": "P31-Q166142", "629": "P31-Q50053", "630": "P31-Q202866", "631": "P31-Q112965645", "632": "P31-Q28111", "633": "P31-Q8142", "634": "P31-Q20074337", "635": "P31-Q858439", "636": "P31-Q18618819", "637": "P31-Q56242215", "638": "P31-Q4438121", "639": "P31-Q1261214", "640": "P31-Q1637706", "641": "P31-Q1059478", "642": "P31-Q852190", "643": "P31-Q561068", "644": "P31-Q3199915", "645": "P31-Q180958", "646": "P31-Q1631107", "647": "P31-Q84467700", "648": "P31-Q4886", "649": "P31-Q494721", "650": "P31-Q83405", "651": "P31-Q11483816", "652": "P31-Q1667921", "653": "P31-Q25379", "654": "P31-Q20202352", "655": "P31-Q1143635", "656": "P31-Q28140340", "657": "P31-Q1137809", "658": "P279-Q3270632", "659": "P31-Q204577", "660": "P31-Q12909644", "661": "P31-Q57733494", "662": "P31-Q1478437", "663": "P279-Q1999103", "664": "P31-Q2178147", "665": "P31-Q2389789", "666": "P31-Q575759", "667": "P31-Q15217609", "668": "P31-Q4677783", "669": "P31-Q13402009", "670": "P31-Q3331189", "671": "P31-Q4287745", "672": "P31-Q5421693", "673": "P279-Q407355", "674": "P31-Q1378975", "675": "P31-Q526877", "676": "P31-Q2135465", "677": "P31-Q1254933", "678": "P31-Q49371", "679": "P31-Q56019", "680": "P31-Q35112127", "681": "P31-Q5155053", "682": "P31-Q133311", "683": "P31-Q2235308", "684": "P31-Q170584", "685": "P31-Q18019452", "686": "P31-Q184188", "687": "P31-Q3700011", "688": "P31-Q18534542", "689": "P31-Q6508670", "690": "P31-Q14073567", "691": "P31-Q27676428", "692": "P31-Q1078765", "693": "P31-Q20019082", "694": "P31-Q38058796", "695": "P31-Q164950", "696": "P31-Q493522", "697": "P31-Q178885", "698": "P31-Q233324", "699": "P31-Q37901", "700": "P31-Q40434727", "701": "P31-Q1802963", "702": "P31-Q3409032", "703": "P279-Q216916", "704": "P31-Q162602", "705": "P31-Q15219655", "706": "P31-Q24856", "707": "P31-Q235557", "708": "P31-Q17202187", "709": "P31-Q2338524", "710": "P31-Q149918", "711": "P31-Q1760610", "712": "P31-Q20643955", "713": "P31-Q107655869", "714": "P31-Q1007870", "715": "P31-Q131186", "716": "P31-Q3241045", "717": "P31-Q76514543", "718": "P31-Q317557", "719": "P31-Q27995042", "720": "P31-Q45776", "721": "P31-Q1320047", "722": "P31-Q11229656", "723": "P31-Q269528", "724": "P279-Q625151", "725": "P31-Q3887", "726": "P31-Q16323605", "727": "P31-Q223393", "728": "P31-Q196600", "729": "P31-Q1410668", "730": "P31-Q1194970", "731": "P31-Q658255", "732": "P31-Q11410", "733": "P279-Q28816538", "734": "P31-Q39804", "735": "P31-Q45762", "736": "P31-Q21191019", "737": "P31-Q59199015", "738": "P31-Q18867465", "739": "P31-Q141683", "740": "P31-Q24634210", "741": "P31-Q155271", "742": "P31-Q15836568", "743": "P31-Q21246076", "744": "P31-Q29154550", "745": "P31-Q829080", "746": "P31-Q169950", "747": "P31-Q79602", "748": "P31-Q20857085", "749": "P31-Q8366", "750": "P31-Q22713629", "751": "P31-Q3469910", "752": "P31-Q15079786", "753": "P31-Q1261499", "754": "P31-Q105999", "755": "P31-Q14752696", "756": "P31-Q9842", "757": "P31-Q15265344", "758": "P31-Q35054", "759": "P31-Q686822", "760": "P31-Q22674925", "761": "P31-Q3685463", "762": "P31-Q5003624", "763": "P31-Q1620908", "764": "P31-Q8776398", "765": "P31-Q153562", "766": "P31-Q15893266", "767": "P31-Q15620943", "768": "P31-Q39367", "769": "P31-Q12819564", "770": "P31-Q83790536", "771": "P31-Q98775491", "772": "P31-Q55102916", "773": "P31-Q47443726", "774": "P31-Q11167066", "775": "P31-Q75054287", "776": "P31-Q1068842", "777": "P31-Q620471", "778": "P31-Q746549", "779": "P31-Q4117139", "780": "P31-Q640506", "781": "P31-Q158218", "782": "P31-Q15221215", 
"783": "P31-Q17431399", "784": "P31-Q1484611", "785": "P31-Q2983893", "786": "P31-Q12518", "787": "P31-Q2154459", "788": "P31-Q166118", "789": "P31-Q3117863", "790": "P31-Q968159", "791": "P31-Q11455398", "792": "P31-Q317623", "793": "P31-Q2742167", "794": "P31-Q24869", "795": "P31-Q32880", "796": "P31-Q192078", "797": "P31-Q1756006", "798": "P31-Q55237813", "799": "P31-Q6243", "800": "P31-Q66715753", "801": "P31-Q109607", "802": "P31-Q2996394", "803": "P31-Q167170", "804": "P31-Q2089242", "805": "P31-Q11204", "806": "P31-Q67454740", "807": "P31-Q211748", "808": "P31-Q26214208", "809": "P31-Q2750108", "810": "P31-Q507619", "811": "P31-Q1499623", "812": "P279-Q2990946", "813": "P31-Q15229207", "814": "P31-Q1441305", "815": "P31-Q1060829", "816": "P31-Q7864918", "817": "P31-Q190903", "818": "P31-Q124734", "819": "P31-Q1267632", "820": "P31-Q726870", "821": "P31-Q917146", "822": "P31-Q23039057", "823": "P31-Q2695280", "824": "P31-Q2635894", "825": "P31-Q465299", "826": "P31-Q1799072", "827": "P31-Q1048525", "828": "P31-Q55983715", "829": "P31-Q2022036", "830": "P31-Q4271324", "831": "P31-Q49773", "832": "P31-Q3327874", "833": "P31-Q682943", "834": "P31-Q2882257", "835": "P31-Q3186692", "836": "P31-Q956318", "837": "P31-Q8072", "838": "P31-Q1195942", "839": "P31-Q142714", "840": "P31-Q15640053", "841": "P31-Q105390172", "842": "P31-Q156362", "843": "P31-Q15057021", "844": "P31-Q10517054", "845": "P31-Q64138263", "846": "P279-Q1002954", "847": "P31-Q628179", "848": "P31-Q112144412", "849": "P31-Q1144661", "850": "P31-Q490329", "851": "P31-Q131647", "852": "P31-Q1244922", "853": "P31-Q188860", "854": "P31-Q15141321", "855": "P31-Q1754946", "856": "P31-Q2679045", "857": "P31-Q625298", "858": "P279-Q34379", "859": "P279-Q107715", "860": "P31-Q1400264", "861": "P31-Q189118", "862": "P31-Q1793804", "863": "P279-Q746549", "864": "P31-Q2614970", "865": "P31-Q49776", "866": "P31-Q220659", "867": "P31-Q162875", "868": "P31-Q19930933", "869": "P31-Q137535", "870": "P31-Q13220204", "871": "P31-Q15303838", "872": "P31-Q35456", "873": "P31-Q104649845", "874": "P31-Q3191695", "875": "P31-Q26685543", "876": "P31-Q23828039", "877": "P31-Q2223653", "878": "P31-Q30129411", "879": "P31-Q383092", "880": "P31-Q15720476", "881": "P31-Q18691599", "882": "P31-Q3497167", "883": "P31-Q17366755", "884": "P31-Q15324", "885": "P31-Q15221242", "886": "P31-Q641066", "887": "P31-Q16887380", "888": "P31-Q381885", "889": "P31-Q38720", "890": "P31-Q158438", "891": "P31-Q829026", "892": "P31-Q55659167", "893": "P31-Q496825", "894": "P31-Q7372078", "895": "P31-Q3950", "896": "P31-Q1785733", "897": "P31-Q18564289", "898": "P31-Q17339814", "899": "P31-Q1311958", "900": "P31-Q46865913", "901": "P31-Q107679", "902": "P31-Q18325436", "903": "P31-Q23847174", "904": "P31-Q23691", "905": "P31-Q3240003", "906": "P31-Q18761864", "907": "P31-Q1595639", "908": "P31-Q1147395", "909": "P31-Q46351685", "910": "P31-Q1070990", "911": "P31-Q17715832", "912": "P31-Q16735822", "913": "P31-Q1047113", "914": "P31-Q13411064", "915": "P31-Q4936952", "916": "P31-Q23983664", "917": "P31-Q936518", "918": "P31-Q850270", "919": "P31-Q16466010", "920": "P31-Q12292478", "921": "P31-Q14659", "922": "P31-Q3623867", "923": "P31-Q1768043", "924": "P31-Q1229765", "925": "P31-Q7315155", "926": "P31-Q27889498", "927": "P31-Q2221906", "928": "P31-Q57831", "929": "P31-Q17374546", "930": "P31-Q52193405", "931": "P31-Q107359024", "932": "P31-Q3046146", "933": "P31-Q29414133", "934": "P31-Q67101749", "935": "P31-Q7373622", "936": "P31-Q483242", "937": "P31-Q4671286", 
"938": "P31-Q270791", "939": "P31-Q2292572", "940": "P31-Q1160573", "941": "P31-Q20742825", "942": "P31-Q39911", "943": "P31-Q21199", "944": "P31-Q7694920", "945": "P31-Q10729872", "946": "P31-Q150784", "947": "P31-Q1530022", "948": "P31-Q2154519", "949": "P31-Q211302", "950": "P31-Q2095", "951": "P31-Q1792372", "952": "P31-Q423208", "953": "P31-Q98374631", "954": "P31-Q738377", "955": "P31-Q1428357", "956": "P31-Q7897276", "957": "P31-Q63998451", "958": "P31-Q1959314", "959": "P31-Q2738074", "960": "P31-Q1138671", "961": "P31-Q11514315", "962": "P31-Q37002670", "963": "P31-Q856234", "964": "P31-Q170013", "965": "P31-Q55135234", "966": "P31-Q817477", "967": "P31-Q620615", "968": "P31-Q64801076", "969": "P31-Q11691", "970": "P31-Q1321960", "971": "P31-Q2493450", "972": "P31-Q47164206", "973": "P31-Q2354973", "974": "P31-Q2116321", "975": "P31-Q625994", "976": "P31-Q2319498", "977": "P31-Q20650761", "978": "P31-Q100775361", "979": "P31-Q74614691", "980": "P31-Q31629", "981": "P31-Q10689397", "982": "P31-Q7755", "983": "P31-Q11422536", "984": "P31-Q269949", "985": "P31-Q3329412", "986": "P31-Q21170330", "987": "P31-Q32099", "988": "P31-Q33837", "989": "P31-Q8068", "990": "P31-Q105731", "991": "P31-Q94670589", "992": "P31-Q3055118", "993": "P31-Q5058355", "994": "P31-Q1349255", "995": "P31-Q752783", "996": "P31-Q159719", "997": "P31-Q699", "998": "P31-Q63981919", "999": "P31-Q26271642", "1000": "unknown-unknown"}}}}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 2675957477, "num_examples": 5660845}, {"name": "test", "num_bytes": 297185196, "num_examples": 628983}], "download_size": 987789145, "dataset_size": 2973142673}} | 2023-01-04T06:59:48+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "psychiq2-dataset"
More Information needed | [
"# Dataset Card for \"psychiq2-dataset\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"psychiq2-dataset\"\n\nMore Information needed"
] |
04949024777c371d5cc6e85976d9287f94ff71a2 |
# Genshin Datasets for SVS/SVC/TTS
## Repository Links
| Repository | Link |
| :------------: | :-----------------------------------------------: |
| DiffSinger | [Click here](https://github.com/openvpi/DiffSinger) |
| Fish Diffusion | [Click here](https://github.com/fishaudio/fish-diffusion) |
| RVC | [Click here](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI) |
| DDSP-SVC | [Click here](https://github.com/yxlllc/DDSP-SVC) |
| Vits | [Click here](https://github.com/CjangCjengh/vits) |
| 44.1 kHz vocoder | [Click here](https://openvpi.github.io/vocoders) |
| Genshin voice dataset (by 溯洄, currently only updated to 3.4) | [Click here](https://github.com/w4123/GenshinVoice) |
## Introduction
This dataset is for training Genshin Impact SVS/SVC/TTS models. It is currently provided as a full dataset (Full) and a curated dataset (Sorted). The full dataset is a merge of [溯洄](https://github.com/w4123)'s 3.4 release and my own collection; preprocessing, loudness matching, and so on have to be done by yourself according to the target project. Voices in other languages will also be provided later. **This dataset may only be used for derivative works and for training models; any commercial use is prohibited! Ownership of all voice data used in this dataset belongs to [miHoYo](https://www.mihoyo.com/)!**
## Downloads (updated irregularly)
| Version | Sorted? | Language | Download | Notes |
| :----------------------------------------------------------: | :------------: | :----------------------------------------------------------: | :----------------------------------------------------------: | :----------------------------------------------------------: |
| 3.5 | Sorted by character | Chinese | [Download](https://huggingface.co/datasets/Erythrocyte/Genshin_Datasets/resolve/main/Sorted/Chinese/3.5_Sorted.zip) | Includes: all characters + some NPCs + annotations |
| 3.5 | Unsorted | Chinese | Uploading | Complete dataset, needs to be sorted by yourself as required, no annotations |
## Sorting Script
If you want to extract the characters you want from the complete dataset, you can sort it with the following script:
Sorting script: https://huggingface.co/datasets/Erythrocyte/Genshin_Datasets/blob/main/Scripts/genshin_label.py | Erythrocyte/Genshin_Datasets | [
"Genshin",
"Genshin Impact",
"Voice Data",
"Voice Dataset",
"DiffSinger",
"Diff-SVC",
"DiffSVC",
"Vits",
"DDSP-SVC",
"region:us"
] | 2023-01-04T06:59:36+00:00 | {"tags": ["Genshin", "Genshin Impact", "Voice Data", "Voice Dataset", "DiffSinger", "Diff-SVC", "DiffSVC", "Vits", "DDSP-SVC"]} | 2023-05-02T05:56:26+00:00 | [] | [] | TAGS
#Genshin #Genshin Impact #Voice Data #Voice Dataset #DiffSinger #Diff-SVC #DiffSVC #Vits #DDSP-SVC #region-us
| Genshin Datasets for SVS/SVC/TTS
================================
Repository Links
----
Introduction
--
This dataset is for training Genshin Impact SVS/SVC/TTS models. It is currently provided as a full dataset (Full) and a curated dataset (Sorted). The full dataset is a merge of 溯洄's 3.4 release and my own collection; preprocessing, loudness matching, and so on have to be done by yourself according to the target project. Voices in other languages will also be provided later. This dataset may only be used for derivative works and for training models; any commercial use is prohibited! Ownership of all voice data used in this dataset belongs to miHoYo!
Downloads (updated irregularly)
-----------
Sorting Script
----
If you want to extract the characters you want from the complete dataset, you can sort it with the following script:
Sorting script: URL
| [] | [
"TAGS\n#Genshin #Genshin Impact #Voice Data #Voice Dataset #DiffSinger #Diff-SVC #DiffSVC #Vits #DDSP-SVC #region-us \n"
] |
7d9772484437c411095674310b5c297603a760f2 | # Dataset Card for "beautiful_interesting_spectacular_photo_model_30000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/beautiful_interesting_spectacular_photo_model_30000 | [
"region:us"
] | 2023-01-04T07:43:16+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "width", "dtype": "int64"}, {"name": "height", "dtype": "int64"}, {"name": "pclean", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 216048045.0, "num_examples": 314}], "download_size": 216051172, "dataset_size": 216048045.0}} | 2023-01-04T07:43:40+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "beautiful_interesting_spectacular_photo_model_30000"
More Information needed | [
"# Dataset Card for \"beautiful_interesting_spectacular_photo_model_30000\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"beautiful_interesting_spectacular_photo_model_30000\"\n\nMore Information needed"
] |
84ab804cfcae53048713ce350017e3dfe6225fde | # Dataset Card for "beautiful_interesting_spectacular_photo_fantasy_30000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/beautiful_interesting_spectacular_photo_fantasy_30000 | [
"region:us"
] | 2023-01-04T07:50:05+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "width", "dtype": "int64"}, {"name": "height", "dtype": "int64"}, {"name": "pclean", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 268414188.0, "num_examples": 317}], "download_size": 268419805, "dataset_size": 268414188.0}} | 2023-01-04T07:50:32+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "beautiful_interesting_spectacular_photo_fantasy_30000"
More Information needed | [
"# Dataset Card for \"beautiful_interesting_spectacular_photo_fantasy_30000\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"beautiful_interesting_spectacular_photo_fantasy_30000\"\n\nMore Information needed"
] |
3e48b7963475b55341dc09c108b6a7684c3d6d1f | # Dataset Card for "beautiful_interesting_spectacular_photo_dark_fantasy_30000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/beautiful_interesting_spectacular_photo_dark_fantasy_30000 | [
"region:us"
] | 2023-01-04T07:57:37+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "width", "dtype": "int64"}, {"name": "height", "dtype": "int64"}, {"name": "pclean", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 533716832.0, "num_examples": 718}], "download_size": 533724773, "dataset_size": 533716832.0}} | 2023-01-04T07:58:27+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "beautiful_interesting_spectacular_photo_dark_fantasy_30000"
More Information needed | [
"# Dataset Card for \"beautiful_interesting_spectacular_photo_dark_fantasy_30000\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"beautiful_interesting_spectacular_photo_dark_fantasy_30000\"\n\nMore Information needed"
] |
33354022c52971b0f34e7367b78fd6a37d80d66b | # Dataset Card for "alarm_prediction2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | hamzagorgulu/alarm_prediction2 | [
"region:us"
] | 2023-01-04T07:59:55+00:00 | {"dataset_info": {"features": [{"name": "alarms", "dtype": "string"}, {"name": "sequence_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 5160488.637481801, "num_examples": 10905}, {"name": "validation", "num_bytes": 573545.3671369045, "num_examples": 1212}], "download_size": 1179619, "dataset_size": 5734034.004618706}} | 2023-01-04T08:00:20+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "alarm_prediction2"
More Information needed | [
"# Dataset Card for \"alarm_prediction2\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"alarm_prediction2\"\n\nMore Information needed"
] |
e6a8589f70429398fdf99baab997f7a7a46e4b72 | # Dataset Card for "beautiful_interesting_spectacular_photo_anime_25000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/beautiful_interesting_spectacular_photo_anime_25000 | [
"region:us"
] | 2023-01-04T08:12:15+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "width", "dtype": "int64"}, {"name": "height", "dtype": "int64"}, {"name": "pclean", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 773920358.0, "num_examples": 956}], "download_size": 773924888, "dataset_size": 773920358.0}} | 2023-01-04T08:13:20+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "beautiful_interesting_spectacular_photo_anime_25000"
More Information needed | [
"# Dataset Card for \"beautiful_interesting_spectacular_photo_anime_25000\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"beautiful_interesting_spectacular_photo_anime_25000\"\n\nMore Information needed"
] |
3ee237788da7c69c53188e34f778a7b61b479af7 | # Dataset Card for "online-sweater"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | chiHang/online-sweater | [
"region:us"
] | 2023-01-04T08:18:14+00:00 | {"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1126581.0, "num_examples": 10}], "download_size": 0, "dataset_size": 1126581.0}} | 2023-01-12T07:56:28+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "online-sweater"
More Information needed | [
"# Dataset Card for \"online-sweater\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"online-sweater\"\n\nMore Information needed"
] |
773520b8eea5f9f29ba65740853510fcd6ad1b0c | # Dataset Card for "beautiful_interesting_spectacular_photo_futuristic_25000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/beautiful_interesting_spectacular_photo_futuristic_25000 | [
"region:us"
] | 2023-01-04T08:22:17+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "width", "dtype": "int64"}, {"name": "height", "dtype": "int64"}, {"name": "pclean", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 406730039.0, "num_examples": 596}], "download_size": 406731237, "dataset_size": 406730039.0}} | 2023-01-04T08:22:52+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "beautiful_interesting_spectacular_photo_futuristic_25000"
More Information needed | [
"# Dataset Card for \"beautiful_interesting_spectacular_photo_futuristic_25000\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"beautiful_interesting_spectacular_photo_futuristic_25000\"\n\nMore Information needed"
] |
10544092a3d76e2eb2bad55e4e5ef40e165a8f71 | # Dataset Card for "beautiful_interesting_spectacular_photo_25000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/beautiful_interesting_spectacular_photo_25000 | [
"region:us"
] | 2023-01-04T08:31:50+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "width", "dtype": "int64"}, {"name": "height", "dtype": "int64"}, {"name": "pclean", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 94714209.0, "num_examples": 111}], "download_size": 94717904, "dataset_size": 94714209.0}} | 2023-01-04T08:32:05+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "beautiful_interesting_spectacular_photo_25000"
More Information needed | [
"# Dataset Card for \"beautiful_interesting_spectacular_photo_25000\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"beautiful_interesting_spectacular_photo_25000\"\n\nMore Information needed"
] |
37b30417fc61d2017d88b07bb5c9de096182d1b2 | # Dataset Card for "beautiful_interesting_spectacular_photo_HD_25000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/beautiful_interesting_spectacular_photo_HD_25000 | [
"region:us"
] | 2023-01-04T08:36:12+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "width", "dtype": "int64"}, {"name": "height", "dtype": "int64"}, {"name": "pclean", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 73481451.0, "num_examples": 94}], "download_size": 73485488, "dataset_size": 73481451.0}} | 2023-01-04T08:36:27+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "beautiful_interesting_spectacular_photo_HD_25000"
More Information needed | [
"# Dataset Card for \"beautiful_interesting_spectacular_photo_HD_25000\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"beautiful_interesting_spectacular_photo_HD_25000\"\n\nMore Information needed"
] |
c2fc4c386a6eb488561e194fd9692eac21fe97ae | # Dataset Card for "beautiful_interesting_spectacular_photo_medieval_25000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/beautiful_interesting_spectacular_photo_medieval_25000 | [
"region:us"
] | 2023-01-04T08:46:43+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "width", "dtype": "int64"}, {"name": "height", "dtype": "int64"}, {"name": "pclean", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 195631184.0, "num_examples": 198}], "download_size": 195563226, "dataset_size": 195631184.0}} | 2023-01-04T08:47:05+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "beautiful_interesting_spectacular_photo_medieval_25000"
More Information needed | [
"# Dataset Card for \"beautiful_interesting_spectacular_photo_medieval_25000\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"beautiful_interesting_spectacular_photo_medieval_25000\"\n\nMore Information needed"
] |
fdc848ab0183208ea7808206c91c724414d0a071 |
# Dataset Card for 🥤SODA
## Dataset Description
- **Repository:** [Code](https://github.com/skywalker023/sodaverse)
- **Paper:** [SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization](https://arxiv.org/abs/2212.10465)
- **Point of Contact:** [Hyunwoo Kim](mailto:[email protected])
## Dataset Summary
🥤SODA is the first publicly available, million-scale, high-quality dialogue dataset covering a wide range of social interactions. Dialogues are distilled from a PLM (InstructGPT; Ouyang et al., 2022) by contextualizing social commonsense knowledge from a knowledge graph (Atomic10x; West et al., 2022). Human evaluation shows that dialogues in SODA are more consistent, specific, and (surprisingly) natural than prior human-authored datasets – e.g., DailyDialog (Li et al., 2017), BlendedSkillTalk (Smith et al., 2020). Also, since social commonsense knowledge encompasses emotional reactions (i.e., the xReact `relation`), SODA includes 385K conversations labeled with 1.7K unique emotions along with information about the experiencer and the cause – i.e., `PersonX` and the `head` event in the symbolic commonsense knowledge triple.
## Languages
English
## Dataset Structure
field | type | description
--- | --- | ---
`head` | str | the head event in the symbolic commonsense knowledge triple
`relation` | str | the relationship between `head` and `tail` events
`tail` | str | the tail event in the symbolic commonsense knowledge triple
`literal` | str | the symbolic commonsense knowledge in sentence-form
`narrative` | str | narrative based on the `literal`
`dialogue` | list of str | dialogue grounded in the `narrative`
`speakers` | list of str | the speakers for each turn in the `dialogue`
`PersonX` | str | the assigned name for PersonX in the commonsense knowledge triple
`PersonY` | str\|null | the assigned name for PersonY in the commonsense knowledge triple
`PersonZ` | str\|null | the assigned name for PersonZ in the commonsense knowledge triple
`original_index` | int | the original index from Atomic10x
`split` | str | the split information: {train, valid, test}
`head_answer` | str | the answer for whether the `head` is included in the `narrative`: {Yes, Unknown}
`pmi_head_answer` | str | the answer for whether the `head` is included in the `narrative` with point-wise mutual information applied: {Yes, No, Unknown}
`relation_tail_answer` | str | the answer for whether the `relation`-`tail` is included in the `dialogue`: {Yes, No, Unknown}
`pmi_relation_tail_answer` | str | the answer for whether the `relation`-`tail` is included in the `dialogue` with point-wise mutual information applied: {Yes, No, Unknown}
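For orientation only (this snippet is not part of the original card), a minimal sketch of loading the dataset with the `datasets` library and reading the fields above could look like this:
```python
# Minimal sketch; assumes direct loading from the Hugging Face Hub.
from datasets import load_dataset

soda = load_dataset("allenai/soda")              # splits: train / valid / test
example = soda["train"][0]

# Each turn in `dialogue` is spoken by the matching entry in `speakers`.
for speaker, turn in zip(example["speakers"], example["dialogue"]):
    print(f"{speaker}: {turn}")

print("Narrative:", example["narrative"])
```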
## Dataset Creation
To create 🥤SODA, we distill dialogues from InstructGPT by contextualizing social commonsense knowledge – i.e., adding context information in multiple steps: (1) Retrieve social commonsense from the symbolic commonsense knowledge graph, (2) convert it into sentence form, (3) generate a narrative from the sentence, (4) infer the speakers from the narrative, and finally (5) derive contentful conversation grounded in the narrative and speakers. Anchoring the PLM in commonsense knowledge for deriving conversations offers two key advantages: (1) minimizing nonsensical conversations and (2) maximizing diversity. For more details, please refer to our [paper](https://arxiv.org/abs/2212.10465).
### Further Details, Social Impacts, and Limitations
Please refer to our [paper](https://arxiv.org/abs/2212.10465).
## Trained Model
Using 🥤SODA, we train 🧑🏻🚀COSMO: a generalizable conversation agent outperforming previous best-performing agents on both in- and out-of-domain datasets. COSMO-3B is available [here](https://huggingface.co/allenai/cosmo-xl)!
## Additional Information
For a brief summary of our paper, please see this [tweet](https://twitter.com/hyunw__kim/status/1605400305126248448).
### Citation
Please cite our work if you find the resources in this repository useful:
```
@article{kim2022soda,
title={SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization},
author={Hyunwoo Kim and Jack Hessel and Liwei Jiang and Peter West and Ximing Lu and Youngjae Yu and Pei Zhou and Ronan Le Bras and Malihe Alikhani and Gunhee Kim and Maarten Sap and Yejin Choi},
journal={ArXiv},
year={2022},
volume={abs/2212.10465}
}
``` | allenai/soda | [
"task_categories:conversational",
"task_ids:dialogue-generation",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"source_datasets:extended|Atomic10x",
"language:en",
"license:cc-by-4.0",
"dialogue",
"narrative",
"commonsense",
"arxiv:2212.10465",
"region:us"
] | 2023-01-04T08:51:53+00:00 | {"language_creators": ["machine-generated"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original", "extended|Atomic10x"], "task_categories": ["conversational"], "task_ids": ["dialogue-generation"], "pretty_name": "SODA", "annotation_creators": ["machine-generated"], "splits": [{"name": "train", "num_examples": 1191582}, {"name": "valid", "num_examples": 146346}, {"name": "test", "num_examples": 148968}], "dataset_size": 1486896, "tags": ["dialogue", "narrative", "commonsense"]} | 2023-01-04T09:24:32+00:00 | [
"2212.10465"
] | [
"en"
] | TAGS
#task_categories-conversational #task_ids-dialogue-generation #language_creators-machine-generated #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #source_datasets-extended|Atomic10x #language-English #license-cc-by-4.0 #dialogue #narrative #commonsense #arxiv-2212.10465 #region-us
| Dataset Card for SODA
=====================
Dataset Description
-------------------
* Repository: Code
* Paper: SODA: Million-scale Dialogue Distillation with Social Commonsense Contextualization
* Point of Contact: Hyunwoo Kim
Dataset Summary
---------------
SODA is the first publicly available, million-scale, high-quality dialogue dataset covering a wide range of social interactions. Dialogues are distilled from a PLM (InstructGPT; Ouyang et al., 2022) by contextualizing social commonsense knowledge from a knowledge graph (Atomic10x; West et al., 2022). Human evaluation shows that dialogues in SODA are more consistent, specific, and (surprisingly) natural than prior human-authored datasets – e.g., DailyDialog (Li et al., 2017), BlendedSkillTalk (Smith et al., 2020). Also, since social commonsense knowledge encompasses emotional reactions (i.e., the xReact 'relation'), SODA includes 385K conversations labeled with 1.7K unique emotions along with information about the experiencer and the cause – i.e., 'PersonX' and the 'head' event in the symbolic commonsense knowledge triple.
Languages
---------
English
Dataset Structure
-----------------
field: 'head', type: str, description: the head event in the symbolic commonsense knowledge triple
field: 'relation', type: str, description: the relationship between 'head' and 'tail' events
field: 'tail', type: str, description: the tail event in the symbolic commonsense knowledge triple
field: 'literal', type: str, description: the symbolic commonsense knowledge in sentence-form
field: 'narrative', type: str, description: narrative based on the 'literal'
field: 'dialogue', type: list of str, description: dialogue grounded in the 'narrative'
field: 'speakers', type: list of str, description: the speakers for each turn in the 'dialogue'
field: 'PersonX', type: str, description: the assigned name for PersonX in the commonsense knowledge triple
field: 'PersonY', type: str|null, description: the assigned name for PersonY in the commonsense knowledge triple
field: 'PersonZ', type: str|null, description: the assigned name for PersonZ in the commonsense knowledge triple
field: 'original\_index', type: int, description: the original index from Atomic10x
field: 'split', type: str, description: the split information: {train, valid, test}
field: 'head\_answer', type: str, description: the answer for whether the 'head' is included in the 'narrative': {Yes, Unknown}
field: 'pmi\_head\_answer', type: str, description: the answer for whether the 'head' is included in the 'narrative' with point-wise mutual information applied: {Yes, No, Unknown}
field: 'relation\_tail\_answer', type: str, description: the answer for whether the 'relation'-'tail' is included in the 'dialogue': {Yes, No, Unknown}
field: 'pmi\_relation\_tail\_answer', type: str, description: the answer for whether the 'relation'-'tail' is included in the 'dialogue' with point-wise mutual information applied: {Yes, No, Unknown}
Dataset Creation
----------------
To create SODA, we distill dialogues from InstructGPT by contextualizing social commonsense knowledge – i.e., adding context information in multiple steps: (1) Retrieve social commonsense from the symbolic commonsense knowledge graph, (2) convert it into sentence form, (3) generate a narrative from the sentence, (4) infer the speakers from the narrative, and finally (5) derive contentful conversation grounded in the narrative and speakers. Anchoring the PLM in commonsense knowledge for deriving conversations offers two key advantages: (1) minimizing nonsensical conversations and (2) maximizing diversity. For more details, please refer to our paper.
### Further Details, Social Impacts, and Limitations
Please refer to our paper.
Trained Model
-------------
Using SODA, we train COSMO: a generalizable conversation agent outperforming previous best-performing agents on both in- and out-of-domain datasets. COSMO-3B is available here!
Additional Information
----------------------
For a brief summary of our paper, please see this tweet.
Please cite our work if you find the resources in this repository useful:
| [
"### Further Details, Social Impacts, and Limitations\n\n\nPlease refer to our paper.\n\n\nTrained Model\n-------------\n\n\nUsing SODA, we train COSMO: a generalizable conversation agent outperforming previous best-performing agents on both in- and out-of-domain datasets. COSMO-3B is available here!\n\n\nAdditional Information\n----------------------\n\n\nFor a brief summary of our paper, please see this tweet.\n\n\nPlease cite our work if you find the resources in this repository useful:"
] | [
"TAGS\n#task_categories-conversational #task_ids-dialogue-generation #language_creators-machine-generated #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #source_datasets-extended|Atomic10x #language-English #license-cc-by-4.0 #dialogue #narrative #commonsense #arxiv-2212.10465 #region-us \n",
"### Further Details, Social Impacts, and Limitations\n\n\nPlease refer to our paper.\n\n\nTrained Model\n-------------\n\n\nUsing SODA, we train COSMO: a generalizable conversation agent outperforming previous best-performing agents on both in- and out-of-domain datasets. COSMO-3B is available here!\n\n\nAdditional Information\n----------------------\n\n\nFor a brief summary of our paper, please see this tweet.\n\n\nPlease cite our work if you find the resources in this repository useful:"
] |
688c67b1fef4d625fd3e928ca5638f1d173b4fb5 | # Dataset Card for "alarm_prediction3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | hamzagorgulu/alarm_prediction3 | [
"region:us"
] | 2023-01-04T09:57:11+00:00 | {"dataset_info": {"features": [{"name": "alarms", "dtype": "string"}, {"name": "sequence_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 590075.731, "num_examples": 1271}, {"name": "validation", "num_bytes": 65925.062, "num_examples": 142}], "download_size": 191168, "dataset_size": 656000.7930000001}} | 2023-01-04T10:17:04+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "alarm_prediction3"
More Information needed | [
"# Dataset Card for \"alarm_prediction3\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"alarm_prediction3\"\n\nMore Information needed"
] |
d1ac575dc099ce61986efd101ed34b25455bd556 |
## starcraft-remastered-melee-maps
This is a dataset containing 1,815 Starcraft:Remastered melee maps, categorized into tilesets.
The dataset is used to train this model: https://huggingface.co/wdcqc/starcraft-platform-terrain-32x32
The maps were manually downloaded from Battle.net, bounding.net (scmscx.com), and broodwarmaps.com over a long period of time.
To use this dataset, extract the `staredit\\scenario.chk` files from the map files using StormLib, then refer to [Scenario.chk Format](http://www.staredit.net/wiki/index.php/Scenario.chk) to get data like text, terrain or resource placement from the map.
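As a rough illustration (not from the original card, which recommends the C library StormLib), the same extraction can be attempted in Python with the `mpyq` MPQ reader; whether `mpyq` opens every Remastered map is an assumption. Each CHK section is a 4-byte name followed by a 32-bit little-endian length and the payload:
```python
# Sketch only: .scm/.scx maps are MPQ archives, and mpyq is assumed to open them
# (the card itself recommends StormLib; protected maps may fail).
import struct
from mpyq import MPQArchive

archive = MPQArchive("jungle/example_map.scx")        # hypothetical path
chk = archive.read_file("staredit\\scenario.chk")     # raw scenario data

def iter_chk_sections(data: bytes):
    """Yield (name, payload) pairs: 4-byte name, u32 little-endian size, then the data."""
    offset = 0
    while offset + 8 <= len(data):
        name = data[offset:offset + 4].decode("ascii", errors="replace")
        (size,) = struct.unpack_from("<I", data, offset + 4)
        yield name, data[offset + 8:offset + 8 + size]
        offset += 8 + size

for name, payload in iter_chk_sections(chk):
    print(name, len(payload))   # e.g. "ERA " (tileset), "DIM " (map size), "MTXM" (terrain)
```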
Alternatively download the dataset and put it in `<My Documents>\StarCraft\Maps`. You can play with your friends. | wdcqc/starcraft-remastered-melee-maps | [
"task_categories:feature-extraction",
"task_categories:text-to-image",
"task_categories:image-to-image",
"task_categories:reinforcement-learning",
"task_ids:task-planning",
"size_categories:1K<n<10K",
"language:en",
"language:ko",
"license:unknown",
"starcraft",
"broodwar",
"melee",
"maps",
"region:us"
] | 2023-01-04T10:38:40+00:00 | {"language": ["en", "ko"], "license": "unknown", "size_categories": "1K<n<10K", "task_categories": ["feature-extraction", "text-to-image", "image-to-image", "reinforcement-learning"], "task_ids": ["task-planning"], "pretty_name": "Starcraft Remastered Melee Maps", "tags": ["starcraft", "broodwar", "melee", "maps"], "splits": [{"name": "ashworld", "num_bytes": "12,598,840", "num_examples": 135}, {"name": "badlands", "num_bytes": "21,067,712", "num_examples": 213}, {"name": "desert", "num_bytes": "19,505,010", "num_examples": 185}, {"name": "ice", "num_bytes": "19,070,217", "num_examples": 179}, {"name": "install", "num_bytes": "28,135", "num_examples": 1}, {"name": "jungle", "num_bytes": "62,374,211", "num_examples": 563}, {"name": "platform", "num_bytes": "23,324,208", "num_examples": 265}, {"name": "twilight", "num_bytes": "28,311,253", "num_examples": 274}]} | 2023-01-06T22:38:36+00:00 | [] | [
"en",
"ko"
] | TAGS
#task_categories-feature-extraction #task_categories-text-to-image #task_categories-image-to-image #task_categories-reinforcement-learning #task_ids-task-planning #size_categories-1K<n<10K #language-English #language-Korean #license-unknown #starcraft #broodwar #melee #maps #region-us
|
## starcraft-remastered-melee-maps
This is a dataset containing 1,815 Starcraft:Remastered melee maps, categorized into tilesets.
The dataset is used to train this model: URL
The dataset is manually downloaded from URL, URL (URL) and URL over a long period of time.
To use this dataset, extract the 'staredit\\URL' files from the map files using StormLib, then refer to URL Format to get data like text, terrain or resource placement from the map.
Alternatively download the dataset and put it in '<My Documents>\StarCraft\Maps'. You can play with your friends. | [
"## starcraft-remastered-melee-maps\n\nThis is a dataset containing 1,815 Starcraft:Remastered melee maps, categorized into tilesets.\n\nThe dataset is used to train this model: URL\n\nThe dataset is manually downloaded from URL, URL (URL) and URL over a long period of time.\n\nTo use this dataset, extract the 'staredit\\\\URL' files from the map files using StormLib, then refer to URL Format to get data like text, terrain or resource placement from the map.\n\nAlternatively download the dataset and put it in '<My Documents>\\StarCraft\\Maps'. You can play with your friends."
] | [
"TAGS\n#task_categories-feature-extraction #task_categories-text-to-image #task_categories-image-to-image #task_categories-reinforcement-learning #task_ids-task-planning #size_categories-1K<n<10K #language-English #language-Korean #license-unknown #starcraft #broodwar #melee #maps #region-us \n",
"## starcraft-remastered-melee-maps\n\nThis is a dataset containing 1,815 Starcraft:Remastered melee maps, categorized into tilesets.\n\nThe dataset is used to train this model: URL\n\nThe dataset is manually downloaded from URL, URL (URL) and URL over a long period of time.\n\nTo use this dataset, extract the 'staredit\\\\URL' files from the map files using StormLib, then refer to URL Format to get data like text, terrain or resource placement from the map.\n\nAlternatively download the dataset and put it in '<My Documents>\\StarCraft\\Maps'. You can play with your friends."
] |
fb8c3263adf9284583743bd955306187a2d77757 | # Dataset Card for "beautiful_data_with_generated_captions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/beautiful_data_with_generated_captions | [
"region:us"
] | 2023-01-04T11:01:28+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "width", "dtype": "int64"}, {"name": "height", "dtype": "int64"}, {"name": "pclean", "dtype": "float64"}, {"name": "generated_caption", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 256755027.0, "num_examples": 331}, {"name": "train", "num_bytes": 2306158521.402, "num_examples": 2973}], "download_size": 2541913303, "dataset_size": 2562913548.402}} | 2023-01-04T14:24:07+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "beautiful_data_with_generated_captions"
More Information needed | [
"# Dataset Card for \"beautiful_data_with_generated_captions\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"beautiful_data_with_generated_captions\"\n\nMore Information needed"
] |
1227dd2f87c2c044fb90885ed2ba2dda0f6ff5c6 | 35 dataset images for FloralMarble. Originally created an embedding for statues and busts on a colored background, then mixed that with various other embeddings, resulting in this dataset.
Trained for 500 epochs/steps. 35 images, 4 vectors. Batch size of 7, 5 grad acc steps, learning rate of 0.0025:250,0.001:500.





| spaablauw/FloralMarble_dataset | [
"license:wtfpl",
"region:us"
] | 2023-01-04T13:22:20+00:00 | {"license": "wtfpl"} | 2023-01-04T13:28:07+00:00 | [] | [] | TAGS
#license-wtfpl #region-us
| 35 dataset images for FloralMarble. Originally created an embedding for statues and busts on a colored background, then mixed that with various other embeddings, resulting in this dataset.
Trained for 500 epochs/steps. 35 images, 4 vectors. Batch size of 7, 5 grad acc steps, learning rate of 0.0025:250,0.001:500.
!FloralMarble data (23).png
!FloralMarble data (34).png
!FloralMarble data (5).png
!FloralMarble data (2).png
!FloralMarble data (18).png
| [] | [
"TAGS\n#license-wtfpl #region-us \n"
] |
a7de0e452152e1deffcb8b95c40ce0da323028dd | # Dataset Card for "sketchy-svgs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | kmewhort/sketchy-svgs | [
"region:us"
] | 2023-01-04T13:46:16+00:00 | {"dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "airplane", "1": "alarm_clock", "2": "ant", "3": "ape", "4": "apple", "5": "armor", "6": "axe", "7": "banana", "8": "bat", "9": "bear", "10": "bee", "11": "beetle", "12": "bell", "13": "bench", "14": "bicycle", "15": "blimp", "16": "bread", "17": "butterfly", "18": "cabin", "19": "camel", "20": "candle", "21": "cannon", "22": "car_(sedan)", "23": "castle", "24": "cat", "25": "chair", "26": "chicken", "27": "church", "28": "couch", "29": "cow", "30": "crab", "31": "crocodilian", "32": "cup", "33": "deer", "34": "dog", "35": "dolphin", "36": "door", "37": "duck", "38": "elephant", "39": "eyeglasses", "40": "fan", "41": "fish", "42": "flower", "43": "frog", "44": "geyser", "45": "giraffe", "46": "guitar", "47": "hamburger", "48": "hammer", "49": "harp", "50": "hat", "51": "hedgehog", "52": "helicopter", "53": "hermit_crab", "54": "horse", "55": "hot-air_balloon", "56": "hotdog", "57": "hourglass", "58": "jack-o-lantern", "59": "jellyfish", "60": "kangaroo", "61": "knife", "62": "lion", "63": "lizard", "64": "lobster", "65": "motorcycle", "66": "mouse", "67": "mushroom", "68": "owl", "69": "parrot", "70": "pear", "71": "penguin", "72": "piano", "73": "pickup_truck", "74": "pig", "75": "pineapple", "76": "pistol", "77": "pizza", "78": "pretzel", "79": "rabbit", "80": "raccoon", "81": "racket", "82": "ray", "83": "rhinoceros", "84": "rifle", "85": "rocket", "86": "sailboat", "87": "saw", "88": "saxophone", "89": "scissors", "90": "scorpion", "91": "sea_turtle", "92": "seagull", "93": "seal", "94": "shark", "95": "sheep", "96": "shoe", "97": "skyscraper", "98": "snail", "99": "snake", "100": "songbird", "101": "spider", "102": "spoon", "103": "squirrel", "104": "starfish", "105": "strawberry", "106": "swan", "107": "sword", "108": "table", "109": "tank", "110": "teapot", "111": "teddy_bear", "112": "tiger", "113": "tree", "114": "trumpet", "115": "turtle", "116": "umbrella", "117": "violin", "118": "volcano", "119": "wading_bird", "120": "wheelchair", "121": "windmill", "122": "window", "123": "wine_bottle", "124": "zebra"}}}}, {"name": "svg", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3350400132.1348753, "num_examples": 59966}, {"name": "test", "num_bytes": 837627968.8651245, "num_examples": 14992}], "download_size": 2677218539, "dataset_size": 4188028101.0}} | 2023-01-04T16:20:18+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "sketchy-svgs"
More Information needed | [
"# Dataset Card for \"sketchy-svgs\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"sketchy-svgs\"\n\nMore Information needed"
] |
6e44270571f150dd0c42722bb6397632f1e65300 | # AutoTrain Dataset for project: breastcancer
## Dataset Description
This dataset has been automatically processed by AutoTrain for project breastcancer.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<512x630 L PIL image>",
"target": 0
},
{
"image": "<512x666 L PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['No_cancer', 'cancer'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 7998 |
| valid | 2000 |
| hatemestinbejaia/autotrain-data-breastcancer | [
"task_categories:image-classification",
"region:us"
] | 2023-01-04T15:13:32+00:00 | {"task_categories": ["image-classification"]} | 2023-01-04T23:50:49+00:00 | [] | [] | TAGS
#task_categories-image-classification #region-us
| AutoTrain Dataset for project: breastcancer
===========================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project breastcancer.
### Languages
The BCP-47 code for the dataset's language is unk.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#task_categories-image-classification #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
88f8f927031cd751c19338fa4af321c46e67ab0d |
# Dataset Card for MDK
This dataset was created as part of the [Bertelsmann Foundation's](https://www.bertelsmann-stiftung.de/de/startseite)
[Musterdatenkatalog (MDK)](https://www.bertelsmann-stiftung.de/de/unsere-projekte/smart-country/musterdatenkatalog) project. The MDK provides an overview of Open Data in municipalities in Germany. It is intended to help municipalities in Germany, as well as data analysts and journalists, to get an overview of the topics and the extent to which cities have already published data sets.
## Dataset Description
### Dataset Summary
The dataset is an annotated corpus of 1258 records based on the metadata of the datasets from [GOVDATA](https://www.govdata.de/). GovData is a data portal that aims to make cities' data available in a standardized way.
The annotation maps the titles of the datasets to a taxonomy containing categories such as 'Verkehr - KFZ - Messung' or 'Abfallwirtschaft - Abfallkalender'. Through this assignment, the names of the datasets can be normalized and grouped. In total, the taxonomy consists of 250 categories. Each category is divided into two levels:
- Level 1: "Thema" (topic)

- Level 2: "Bezeichnung" (label).
The first dash divides the levels. For example:

You can find an interactive view of the taxonomy with all labels [here](https://huggingface.co/spaces/and-effect/Musterdatenkatalog).
The repository contains a small and a large version of the data. The small version is for testing purposes only. The large data set contains all 1258 entries. The large and small datasets are split into a training and a testing dataset. In addition, the large dataset folder contains a validation dataset that has been annotated separately. The validation dataset is an additional dataset that we created for the evaluation of the algorithm. It also consists of data from GOVDATA and has the same structure as the test and training data set.
### Languages
The language of the data is German.
## Dataset Structure
### Data Fields
| dataset | size |
|-----|-----|
| small/train | 18.96 KB |
| small/test | 6.13 KB |
| large/train | 517.77 KB |
| large/test | 118.66 KB |
An example looks as follows:
```json
{
"doc_id": "a063d3b7-4c09-421e-9849-073dc8939e76",
"title": "Dienstleistungen Alphabetisch sortiert April 2019",
"description": "CSV-Datei mit allen Dienstleistungen der Kreisverwaltung Kleve. Sortiert nach AlphabetStand 01.04.2019",
"labels_name": "Sonstiges - Sonstiges",
"labels": 166
}
```
The data fields are the same among all splits:
- doc_id (uuid): identifier for each document
- title (str): dataset title from GOVDATA
- description (str): description of the dataset
- labels_name (str): annotation with labels from taxonomy
- labels (int): labels indexed from 0 to 250
### Data Splits
| dataset_name | dataset_splits | train_size | test_size | validation_size |
|-----|-----|-----|-----|-----|
| dataset_large | train, test, validation | 1009 | 249 | 101 |
| dataset_small | train, test | 37 | 13 | None |
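A minimal loading sketch (not from the original card); the configuration name `"large"` is an assumption based on the folder layout described above, so check the repository files for the exact name:
```python
# Sketch; "large" as the configuration name is a guess -- verify against the repository.
from datasets import load_dataset

mdk = load_dataset("and-effect/mdk_gov_data_titles_clf", "large")  # hypothetical config name
row = mdk["train"][0]
print(row["title"], "->", row["labels_name"], "(label id", row["labels"], ")")
```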
## Dataset Creation
The dataset was created through multiple manual annotation rounds.
### Source Data
The data comes from [GOVDATA](https://www.govdata.de/), an open data portal of Germany. It aims to provide central access to administrative data from the federal, state and local governments. Their aim is to make data available in one place and thus easier to use. The data available is structured in 13 categories ranging from finance, to international topics, health, education and science and technology. [GOVDATA](https://www.govdata.de/) offers a [CKAN API](https://ckan.govdata.de/) to make requests and provides metadata for each data entry.
#### Initial Data Collection and Normalization
Several sources were used for the annotation process. A sample was collected from [GOVDATA](https://www.govdata.de/) with actual datasets. For the sample, 50 records were drawn for each group. Additional samples are from the previous version of the [MDK](https://github.com/bertelsmannstift/Musterdatenkatalog), which contains older data from [GOVDATA](https://www.govdata.de/). Some of the datasets from the old [MDK](https://github.com/bertelsmannstift/Musterdatenkatalog) already contained an annotation, but since the taxonomy is not the same, the data were re-annotated. A sample was drawn from each source (randomly and by manual selection), resulting in a total of 1258 titles.
### Annotations
#### Annotation process
The data was annotated in four rounds and one additional test round. In each round a percentage of the data was allocated to all annotators to calculate the inter-annotator agreement using Cohen's Kappa (a small computation sketch follows the table below).
The following table shows the results of the annotations:
| | **Cohens Kappa** | **Number of Annotators** | **Number of Documents** |
| ------------------ | :--------------: | ------------------------ | ----------------------- |
| **Test Round** | .77 | 6 | 50 |
| **Round 1** | .41 | 2 | 120 |
| **Round 2** | .76 | 4 | 480 |
| **Round 3** | .71 | 3 | 420 |
| **Round 4** | .87 | 2 | 416 |
| **Validation set** | - | 1 | 177 |
In addition, a validation set was generated by the dataset curators.
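For illustration only (not part of the original card), pairwise agreement of this kind can be computed with scikit-learn; the label lists below are invented:
```python
# Illustration with invented labels; the card's actual annotation data is not reproduced here.
from sklearn.metrics import cohen_kappa_score

annotator_a = ["Sonstiges - Sonstiges", "Verkehr - KFZ - Messung", "Abfallwirtschaft - Abfallkalender"]
annotator_b = ["Sonstiges - Sonstiges", "Verkehr - KFZ - Messung", "Sonstiges - Sonstiges"]

print(cohen_kappa_score(annotator_a, annotator_b))  # chance-corrected agreement in [-1, 1]
```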
#### Who are the annotators?
Annotators are all employees from [&effect data solutions GmbH](https://www.and-effect.com/). The taxonomy, as well as rules and problems in the assignment of datasets, were discussed and debated before the development of the taxonomy and the annotation, in two workshops with experts and representatives of the open data community and local governments, as well as with the project members of the [Musterdatenkatalog](https://www.bertelsmann-stiftung.de/de/unsere-projekte/smart-country/musterdatenkatalog) from the Bertelsmann Foundation. On this basis, the [&effect](https://www.and-effect.com/) employees were instructed in the annotation by the curators of the datasets.
## Considerations for Using the Data
The dataset for the annotation process was generated by sampling from [GOVDATA](https://www.govdata.de/) and data previously collected from GOVDATA. The data on GOVDATA is continuously updated and data can get deleted. Thus, there is no guarantee that data entries included here will still be available.
### Social Impact of Dataset
Since 2017, the German government has been promoting systematic and free access to public administration data with the first laws on open data in municipalities. This is intended to contribute to the development of a [knowledge society](https://www.verwaltung-innovativ.de/DE/Startseite/startseite_node.html). Categorizing cities' open data in a standardized and detailed taxonomy supports this process of making municipal data freely, openly, and in a structured form accessible.
### Discussion of Biases (non-ethical)
The data was mainly sampled at random from the categories available on GOVDATA. Although all categories were sampled, there is still some imbalance in the data. For example: entries for the concept 'Raumordnung, Raumplanung und Raumentwicklung - Bebauungsplan' make up the majority class. Although manual selection of data was also used, data entries were not found for all of the previous concepts. However, for 95% of concepts at least one data entry is available.
## Additional Information
### Dataset Curators
Friederike Bauer
Rahkakavee Baskaran
### Licensing Information
CC BY 4.0 | and-effect/mdk_gov_data_titles_clf | [
"task_categories:text-classification",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:extended",
"language:de",
"license:cc-by-4.0",
"region:us"
] | 2023-01-04T16:20:31+00:00 | {"annotations_creators": "crowdsourced", "language_creators": "other", "language": "de", "license": "cc-by-4.0", "multilinguality": "monolingual", "size_categories": ["1K<n<10K"], "source_datasets": "extended", "task_categories": ["text-classification"], "pretty_name": "GOVDATA dataset titles labelled"} | 2023-05-25T11:43:42+00:00 | [] | [
"de"
] | TAGS
#task_categories-text-classification #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-extended #language-German #license-cc-by-4.0 #region-us
| Dataset Card for MDK
====================
This dataset was created as part of the Bertelsmann Foundation's
Musterdatenkatalog (MDK) project. The MDK provides an overview of Open Data in municipalities in Germany. It is intended to help municipalities in Germany, as well as data analysts and journalists, to get an overview of the topics and the extent to which cities have already published data sets.
Dataset Description
-------------------
### Dataset Summary
The dataset is an annotated corpus of 1258 records based on the metadata of the datasets from GOVDATA. GovData is a data portal that aims to make cities' data available in a standardized way.
The annotation maps the titles of the datasets to a taxonomy containing categories such as 'Verkehr - KFZ - Messung' or 'Abfallwirtschaft - Abfallkalender'. Through this assignment, the names of the datasets can be normalized and grouped. In total, the taxonomy consists of 250 categories. Each category is divided into two levels:
* Level 1: "Thema" (topic)

* Level 2: "Bezeichnung" (label).
The first dash divides the levels. For example:

You can find an interactive view of the taxonomy with all labels here.
The repository contains a small and a large version of the data. The small version is for testing purposes only. The large data set contains all 1258 entries. The large and small datasets are split into a training and a testing dataset. In addition, the large dataset folder contains a validation dataset that has been annotated separately. The validation dataset is an additional dataset that we created for the evaluation of the algorithm. It also consists of data from GOVDATA and has the same structure as the test and training data set.
### Languages
The language data is German.
Dataset Structure
-----------------
### Data Fields
An example looks as follows:
The data fields are the same among all splits:
* doc\_id (uuid): identifier for each document
* title (str): dataset title from GOVDATA
* description (str): description of the dataset
* labels\_name (str): annotation with labels from taxonomy
* labels (int): labels indexed from 0 to 250
### Data Splits
Dataset Creation
----------------
The dataset was created through multiple manual annotation rounds.
### Source Data
The data comes from GOVDATA, an open data portal of Germany. It aims to provide central access to administrative data from the federal, state and local governments. Their aim is to make data available in one place and thus easier to use. The data available is structured in 13 categories ranging from finance, to international topics, health, education and science and technology. GOVDATA offers a CKAN API to make requests and provides metadata for each data entry.
#### Initial Data Collection and Normalization
Several sources were used for the annotation process. A sample was collected from GOVDATA with actual datasets. For the sample, 50 records were drawn for each group. Additional samples are from the previous version of the MDK that contain older data from GOVDATA. Some of the datasets from the old MDK already contained an annotation, but since the taxonomy is not the same, the data were re-annotated. A sample was drawn from each source (randomly and by manual selection), resulting in a total of 1258 titles.
### Annotations
#### Annotation process
The data was annotated in four rounds and one additional test round. In each round a percentage of the data was allocated to all annotators to calculate the inter-annotator agreement using Cohen's Kappa.
The following table shows the results of the annotations:
In addition, a validation set was generated by the dataset curators.
#### Who are the annotators?
Annotators are all employees from &effect data solutions GmbH. The taxonomy as well as rules and problems in the assignment of datasets were discussed and debated in advance of the development of the taxonomy and the annotation in two workshops with experts and representatives of the open data community and local governments as well as with the project members of the Musterdatenkatalog from the Bertelsmann Foundation. On this basis, the &effect employees were instructed in the annotation by the curators of the datasets.
Considerations for Using the Data
---------------------------------
The dataset for the annotation process was generated by sampling from GOVDATA and data previously collected from GOVDATA. The data on GOVDATA is continuously updated and data can get deleted. Thus, there is no guarantee that data entries included here will still be available.
### Social Impact of Dataset
Since 2017, the German government has been promoting systematic and free access to public administration data with first laws on open data in municipalities. In this way, a contribution is aimed at the development of a [knowledge society] (URL The categorization of open data of cities in a standardized and detailed taxonomy supports this process of making data of municipalities freely, openly and structured accessible.
### Discussion of Biases (non-ethical)
The data was mainly sampled at random from the categories available on GOVDATA. Although all categories were sampled, there is still some imbalance in the data. For example: entries for the concept 'Raumordnung, Raumplanung und Raumentwicklung - Bebauungsplan' make up the majority class. Although manual selection of data was also used, data entries were not found for all of the previous concepts. However, for 95% of concepts at least one data entry is available.
Additional Information
----------------------
### Dataset Curators
Friederike Bauer
Rahkakavee Baskaran
### Licensing Information
CC BY 4.0
| [
"### Dataset Summary\n\n\nThe dataset is an annotated corpus of 1258 records based on the metadata of the datasets from GOVDATA. GovData is a data portal that aims to make cities' data available in a standardized way.\n\n\nThe annotation maps the titles of the datasets to a taxonomy containing categories such as 'Verkehr - KFZ - Messung' or 'Abfallwirtschaft - Abfallkalender'. Through the assignment the names of the data sets can be normalized and grouped. In total, the taxonomy consists 250 categories. Each category is divided into two levels:\n\n\n* Level 1: \"Thema\" (topic)\n\n* Level 2: \"Bezeichnung\" (label).\n\n\nThe first dash divides the levels. For example:\n\n\n\nYou can find an interactive view of the taxonomy with all labels here.\n\n\nThe repository contains a small and a large version of the data. The small version is for testing purposes only. The large data set contains all 1258 entries. The large and small datasets are split into a training and a testing dataset. In addition, the large dataset folder contains of a validation dataset that has been annotated separately. The validation dataset is an additional dataset that we created for the evaluation of the algorithm. It also consists of data from GOVDATA and has the same structure as the test and training data set.",
"### Languages\n\n\nThe language data is German.\n\n\nDataset Structure\n-----------------",
"### Data Fields\n\n\n\nAn example of looks as follows:\n\n\nThe data fields are the same among all splits:\n\n\n* doc\\_id (uuid): identifier for each document\n* title (str): dataset title from GOVDATA\n* description (str): description of the dataset\n* labels\\_name (str): annotation with labels from taxonomy\n* labels (int): labels indexed from 0 to 250",
"### Data Splits\n\n\n\nDataset Creation\n----------------\n\n\nThe dataset was created through multiple manual annotation rounds.",
"### Source Data\n\n\nThe data comes from GOVDATA, an open data portal of Germany. It aims to provide central access to administrative data from the federal, state and local governments. Their aim is to make data available in one place and thus easier to use. The data available is structured in 13 categories ranging from finance, to international topics, health, education and science and technology. GOVDATA offers a CKAN API to make requests and provides metadata for each data entry.",
"#### Initial Data Collection and Normalization\n\n\nSeveral sources were used for the annotation process. A sample was collected from GOVDATA with actual datasets. For the sample, 50 records were drawn for each group. Additional samples are from the previous version of the MDK that contain older data from GOVDATA. Some of the datasets from the old MDK already contained an annotation, but since the taxonomy is not the same, the data were re-annotated. A sample was drawn from each source (randomly and by manual selection), resulting in a total of 1258 titles.",
"### Annotations",
"#### Annotation process\n\n\nThe data was annotated in four rounds and one additional test round. In each round a percentage of the data was allocated to all annotators to caluculate the inter-annotator agreement using Cohens Kappa.\nThe following table shows the results of the of the annotations:\n\n\n\nIn addition, a validation set was generated by the dataset curators.",
"#### Who are the annotators?\n\n\nAnnotators are all employees from &effect data solutions GmbH. The taxonomy as well as rules and problems in the assignment of datasets were discussed and debated in advance of the development of the taxonomy and the annotation in two workshops with experts and representatives of the open data community and local governments as well as with the project members of the Musterdatenkatalog from the Bertelsmann Foundation. On this basis, the &effect employees were instructed in the annotation by the curators of the datasets.\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nThe dataset for the annotation process was generated by sampling from GOVDATA and data previously collected from GOVDATA. The data on GOVDATA is continuously updated and data can get deleted. Thus, there is no guarantee that data entries included here will still be available.",
"### Social Impact of Dataset\n\n\nSince 2017, the German government has been promoting systematic and free access to public administration data with first laws on open data in municipalities. In this way, a contribution is aimed at the development of a [knowledge society] (URL The categorization of open data of cities in a standardized and detailed taxonomy supports this process of making data of municipalities freely, openly and structured accessible.",
"### Discussion of Biases (non-ethical)\n\n\nThe data was mainly sampled at random from the categories available on GOVDATA. Although all categories were sampled there is still some imbalance in the data. For example: entries for the concept 'Raumordnung, Raumplanung und Raumentwicklung - Bebauungsplan' make up the majority class. Although manual selection of data was also used for not all previous concepts data entries was found. However, for 95% of concepts at least one data entry is available.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nFriederike Bauer\n\n\nRahkakavee Baskaran",
"### Licensing Information\n\n\nCC BY 4.0"
] | [
"TAGS\n#task_categories-text-classification #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-extended #language-German #license-cc-by-4.0 #region-us \n",
"### Dataset Summary\n\n\nThe dataset is an annotated corpus of 1258 records based on the metadata of the datasets from GOVDATA. GovData is a data portal that aims to make cities' data available in a standardized way.\n\n\nThe annotation maps the titles of the datasets to a taxonomy containing categories such as 'Verkehr - KFZ - Messung' or 'Abfallwirtschaft - Abfallkalender'. Through the assignment the names of the data sets can be normalized and grouped. In total, the taxonomy consists 250 categories. Each category is divided into two levels:\n\n\n* Level 1: \"Thema\" (topic)\n\n* Level 2: \"Bezeichnung\" (label).\n\n\nThe first dash divides the levels. For example:\n\n\n\nYou can find an interactive view of the taxonomy with all labels here.\n\n\nThe repository contains a small and a large version of the data. The small version is for testing purposes only. The large data set contains all 1258 entries. The large and small datasets are split into a training and a testing dataset. In addition, the large dataset folder contains of a validation dataset that has been annotated separately. The validation dataset is an additional dataset that we created for the evaluation of the algorithm. It also consists of data from GOVDATA and has the same structure as the test and training data set.",
"### Languages\n\n\nThe language data is German.\n\n\nDataset Structure\n-----------------",
"### Data Fields\n\n\n\nAn example of looks as follows:\n\n\nThe data fields are the same among all splits:\n\n\n* doc\\_id (uuid): identifier for each document\n* title (str): dataset title from GOVDATA\n* description (str): description of the dataset\n* labels\\_name (str): annotation with labels from taxonomy\n* labels (int): labels indexed from 0 to 250",
"### Data Splits\n\n\n\nDataset Creation\n----------------\n\n\nThe dataset was created through multiple manual annotation rounds.",
"### Source Data\n\n\nThe data comes from GOVDATA, an open data portal of Germany. It aims to provide central access to administrative data from the federal, state and local governments. Their aim is to make data available in one place and thus easier to use. The data available is structured in 13 categories ranging from finance, to international topics, health, education and science and technology. GOVDATA offers a CKAN API to make requests and provides metadata for each data entry.",
"#### Initial Data Collection and Normalization\n\n\nSeveral sources were used for the annotation process. A sample was collected from GOVDATA with actual datasets. For the sample, 50 records were drawn for each group. Additional samples are from the previous version of the MDK that contain older data from GOVDATA. Some of the datasets from the old MDK already contained an annotation, but since the taxonomy is not the same, the data were re-annotated. A sample was drawn from each source (randomly and by manual selection), resulting in a total of 1258 titles.",
"### Annotations",
"#### Annotation process\n\n\nThe data was annotated in four rounds and one additional test round. In each round a percentage of the data was allocated to all annotators to caluculate the inter-annotator agreement using Cohens Kappa.\nThe following table shows the results of the of the annotations:\n\n\n\nIn addition, a validation set was generated by the dataset curators.",
"#### Who are the annotators?\n\n\nAnnotators are all employees from &effect data solutions GmbH. The taxonomy as well as rules and problems in the assignment of datasets were discussed and debated in advance of the development of the taxonomy and the annotation in two workshops with experts and representatives of the open data community and local governments as well as with the project members of the Musterdatenkatalog from the Bertelsmann Foundation. On this basis, the &effect employees were instructed in the annotation by the curators of the datasets.\n\n\nConsiderations for Using the Data\n---------------------------------\n\n\nThe dataset for the annotation process was generated by sampling from GOVDATA and data previously collected from GOVDATA. The data on GOVDATA is continuously updated and data can get deleted. Thus, there is no guarantee that data entries included here will still be available.",
"### Social Impact of Dataset\n\n\nSince 2017, the German government has been promoting systematic and free access to public administration data with first laws on open data in municipalities. In this way, a contribution is aimed at the development of a [knowledge society] (URL The categorization of open data of cities in a standardized and detailed taxonomy supports this process of making data of municipalities freely, openly and structured accessible.",
"### Discussion of Biases (non-ethical)\n\n\nThe data was mainly sampled at random from the categories available on GOVDATA. Although all categories were sampled there is still some imbalance in the data. For example: entries for the concept 'Raumordnung, Raumplanung und Raumentwicklung - Bebauungsplan' make up the majority class. Although manual selection of data was also used for not all previous concepts data entries was found. However, for 95% of concepts at least one data entry is available.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nFriederike Bauer\n\n\nRahkakavee Baskaran",
"### Licensing Information\n\n\nCC BY 4.0"
] |
410cdf9488714b70e20de89b217e95f856c67030 | # Dataset Card for "test-captioned-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Umal-exvc/test-captioned-dataset | [
"region:us"
] | 2023-01-04T16:23:40+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 111187.0, "num_examples": 5}], "download_size": 111705, "dataset_size": 111187.0}} | 2023-01-04T16:23:44+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "test-captioned-dataset"
More Information needed | [
"# Dataset Card for \"test-captioned-dataset\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"test-captioned-dataset\"\n\nMore Information needed"
] |
8a57a3124b980bf171ddeaecb5fb2b7a39374689 | # Dataset Card for "tu-berlin-svgs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | kmewhort/tu-berlin-svgs | [
"region:us"
] | 2023-01-04T16:34:42+00:00 | {"dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "airplane", "1": "alarm clock", "2": "angel", "3": "ant", "4": "apple", "5": "arm", "6": "armchair", "7": "ashtray", "8": "axe", "9": "backpack", "10": "banana", "11": "barn", "12": "baseball bat", "13": "basket", "14": "bathtub", "15": "bear (animal)", "16": "bed", "17": "bee", "18": "beer-mug", "19": "bell", "20": "bench", "21": "bicycle", "22": "binoculars", "23": "blimp", "24": "book", "25": "bookshelf", "26": "boomerang", "27": "bottle opener", "28": "bowl", "29": "brain", "30": "bread", "31": "bridge", "32": "bulldozer", "33": "bus", "34": "bush", "35": "butterfly", "36": "cabinet", "37": "cactus", "38": "cake", "39": "calculator", "40": "camel", "41": "camera", "42": "candle", "43": "cannon", "44": "canoe", "45": "car (sedan)", "46": "carrot", "47": "castle", "48": "cat", "49": "cell phone", "50": "chair", "51": "chandelier", "52": "church", "53": "cigarette", "54": "cloud", "55": "comb", "56": "computer monitor", "57": "computer-mouse", "58": "couch", "59": "cow", "60": "crab", "61": "crane (machine)", "62": "crocodile", "63": "crown", "64": "cup", "65": "diamond", "66": "dog", "67": "dolphin", "68": "donut", "69": "door", "70": "door handle", "71": "dragon", "72": "duck", "73": "ear", "74": "elephant", "75": "envelope", "76": "eye", "77": "eyeglasses", "78": "face", "79": "fan", "80": "feather", "81": "fire hydrant", "82": "fish", "83": "flashlight", "84": "floor lamp", "85": "flower with stem", "86": "flying bird", "87": "flying saucer", "88": "foot", "89": "fork", "90": "frog", "91": "frying-pan", "92": "giraffe", "93": "grapes", "94": "grenade", "95": "guitar", "96": "hamburger", "97": "hammer", "98": "hand", "99": "harp", "100": "hat", "101": "head", "102": "head-phones", "103": "hedgehog", "104": "helicopter", "105": "helmet", "106": "horse", "107": "hot air balloon", "108": "hot-dog", "109": "hourglass", "110": "house", "111": "human-skeleton", "112": "ice-cream-cone", "113": "ipod", "114": "kangaroo", "115": "key", "116": "keyboard", "117": "knife", "118": "ladder", "119": "laptop", "120": "leaf", "121": "lightbulb", "122": "lighter", "123": "lion", "124": "lobster", "125": "loudspeaker", "126": "mailbox", "127": "megaphone", "128": "mermaid", "129": "microphone", "130": "microscope", "131": "monkey", "132": "moon", "133": "mosquito", "134": "motorbike", "135": "mouse (animal)", "136": "mouth", "137": "mug", "138": "mushroom", "139": "nose", "140": "octopus", "141": "owl", "142": "palm tree", "143": "panda", "144": "paper clip", "145": "parachute", "146": "parking meter", "147": "parrot", "148": "pear", "149": "pen", "150": "penguin", "151": "person sitting", "152": "person walking", "153": "piano", "154": "pickup truck", "155": "pig", "156": "pigeon", "157": "pineapple", "158": "pipe (for smoking)", "159": "pizza", "160": "potted plant", "161": "power outlet", "162": "present", "163": "pretzel", "164": "pumpkin", "165": "purse", "166": "rabbit", "167": "race car", "168": "radio", "169": "rainbow", "170": "revolver", "171": "rifle", "172": "rollerblades", "173": "rooster", "174": "sailboat", "175": "santa claus", "176": "satellite", "177": "satellite dish", "178": "saxophone", "179": "scissors", "180": "scorpion", "181": "screwdriver", "182": "sea turtle", "183": "seagull", "184": "shark", "185": "sheep", "186": "ship", "187": "shoe", "188": "shovel", "189": "skateboard", "190": "skull", "191": "skyscraper", "192": "snail", "193": "snake", "194": 
"snowboard", "195": "snowman", "196": "socks", "197": "space shuttle", "198": "speed-boat", "199": "spider", "200": "sponge bob", "201": "spoon", "202": "squirrel", "203": "standing bird", "204": "stapler", "205": "strawberry", "206": "streetlight", "207": "submarine", "208": "suitcase", "209": "sun", "210": "suv", "211": "swan", "212": "sword", "213": "syringe", "214": "t-shirt", "215": "table", "216": "tablelamp", "217": "teacup", "218": "teapot", "219": "teddy-bear", "220": "telephone", "221": "tennis-racket", "222": "tent", "223": "tiger", "224": "tire", "225": "toilet", "226": "tomato", "227": "tooth", "228": "toothbrush", "229": "tractor", "230": "traffic light", "231": "train", "232": "tree", "233": "trombone", "234": "trousers", "235": "truck", "236": "trumpet", "237": "tv", "238": "umbrella", "239": "van", "240": "vase", "241": "violin", "242": "walkie talkie", "243": "wheel", "244": "wheelbarrow", "245": "windmill", "246": "wine-bottle", "247": "wineglass", "248": "wrist-watch", "249": "zebra"}}}}, {"name": "svg", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 82640829.32506625, "num_examples": 15999}, {"name": "test", "num_bytes": 20661498.674933746, "num_examples": 4000}], "download_size": 65748314, "dataset_size": 103302328.0}} | 2023-01-10T19:20:44+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "tu-berlin-svgs"
More Information needed | [
"# Dataset Card for \"tu-berlin-svgs\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"tu-berlin-svgs\"\n\nMore Information needed"
] |
d3ee81f581595ba9a1b08989000c0a4240ac6892 | # Dataset Card for "OxfordPets_facebook_opt_350m_LLM_Description_gpt3_downstream_tasks_ViT_L_14"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Multimodal-Fatima/OxfordPets_facebook_opt_350m_LLM_Description_gpt3_downstream_tasks_ViT_L_14 | [
"region:us"
] | 2023-01-04T17:07:09+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "true_label", "dtype": "string"}, {"name": "prediction", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 119984114.375, "num_examples": 3669}], "download_size": 119029045, "dataset_size": 119984114.375}} | 2023-01-04T17:07:27+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "OxfordPets_facebook_opt_350m_LLM_Description_gpt3_downstream_tasks_ViT_L_14"
More Information needed | [
"# Dataset Card for \"OxfordPets_facebook_opt_350m_LLM_Description_gpt3_downstream_tasks_ViT_L_14\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"OxfordPets_facebook_opt_350m_LLM_Description_gpt3_downstream_tasks_ViT_L_14\"\n\nMore Information needed"
] |
ba4684a1a6f7d00b82a58925777269bd7ff7f2c5 | # Dataset Card for Zinc20
## Dataset Description
- **Homepage:** https://zinc20.docking.org/
- **Paper:** https://pubs.acs.org/doi/10.1021/acs.jcim.0c00675
### Dataset Summary
ZINC is a publicly available database that aggregates commercially available and annotated compounds.
ZINC provides downloadable 2D and 3D versions as well as a website that enables rapid molecule lookup and analog search.
ZINC has grown from fewer than 1 million compounds in 2005 to nearly 2 billion now.
This dataset includes ~1B molecules in total. We have filtered out any compounds that could not be converted from `smiles` to `selfies` representations.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
The dataset is split into an 80/10/10 train/valid/test random split across files (which roughly corresponds to the same percentages)
### Source Data
#### Initial Data Collection and Normalization
Initial data was released at https://zinc20.docking.org/. We have downloaded the data, added a `selfies` field, and filtered out all molecules that could not be converted to `selfies` representations.
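As a rough illustration of this filtering step (not the curators' exact pipeline; their `selfies` conversion settings are not documented here), the corpus can be streamed and each molecule round-tripped:

```python
import selfies as sf
from datasets import load_dataset

# streaming avoids downloading the full corpus up front
ds = load_dataset("zpn/zinc20", split="train", streaming=True)

for record in ds.take(5):
    try:
        encoded = sf.encoder(record["smiles"])  # SMILES -> SELFIES
        decoded = sf.decoder(encoded)           # SELFIES -> SMILES round trip
    except Exception:
        continue  # molecules failing the conversion were dropped from this corpus
    print(record["id"], record["smiles"], record["selfies"])
```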
### Citation Information
@article{Irwin2020,
doi = {10.1021/acs.jcim.0c00675},
url = {https://doi.org/10.1021/acs.jcim.0c00675},
year = {2020},
month = oct,
publisher = {American Chemical Society ({ACS})},
volume = {60},
number = {12},
pages = {6065--6073},
author = {John J. Irwin and Khanh G. Tang and Jennifer Young and Chinzorig Dandarchuluun and Benjamin R. Wong and Munkhzul Khurelbaatar and Yurii S. Moroz and John Mayfield and Roger A. Sayle},
title = {{ZINC}20{\textemdash}A Free Ultralarge-Scale Chemical Database for Ligand Discovery},
journal = {Journal of Chemical Information and Modeling}
}
### Contributions
This dataset was curated and added by [@zanussbaum](https://github.com/zanussbaum).
| zpn/zinc20 | [
"size_categories:1B<n<10B",
"license:mit",
"bio",
"selfies",
"smiles",
"small_molecules",
"region:us"
] | 2023-01-04T17:32:47+00:00 | {"license": "mit", "size_categories": ["1B<n<10B"], "pretty_name": "zinc20", "dataset_info": {"features": [{"name": "selfies", "dtype": "string"}, {"name": "smiles", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 238295712864, "num_examples": 804925861}, {"name": "validation", "num_bytes": 26983481360, "num_examples": 100642661}, {"name": "test", "num_bytes": 29158755632, "num_examples": 101082073}], "download_size": 40061255073, "dataset_size": 294437949856}, "tags": ["bio", "selfies", "smiles", "small_molecules"]} | 2023-01-06T02:03:46+00:00 | [] | [] | TAGS
#size_categories-1B<n<10B #license-mit #bio #selfies #smiles #small_molecules #region-us
| # Dataset Card for Zinc20
## Dataset Description
- Homepage: URL
- Paper: URL
### Dataset Summary
ZINC is a publicly available database that aggregates commercially available and annotated compounds.
ZINC provides downloadable 2D and 3D versions as well as a website that enables rapid molecule lookup and analog search.
ZINC has grown from fewer than 1 million compounds in 2005 to nearly 2 billion now.
This dataset includes ~1B molecules in total. We have filtered out any compounds that were not avaible to be converted from 'smiles' to 'seflies' representations.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
The dataset is split into an 80/10/10 train/valid/test random split across files (which roughly corresponds to the same percentages)
### Source Data
#### Initial Data Collection and Normalization
Initial data was released at URL We have downloaded and added a 'selfies' field and filtered out all molecules that did not contain molecules that could be converted to 'selfies' representations.
@article{Irwin2020,
doi = {10.1021/URL.0c00675},
url = {URL
year = {2020},
month = oct,
publisher = {American Chemical Society ({ACS})},
volume = {60},
number = {12},
pages = {6065--6073},
author = {John J. Irwin and Khanh G. Tang and Jennifer Young and Chinzorig Dandarchuluun and Benjamin R. Wong and Munkhzul Khurelbaatar and Yurii S. Moroz and John Mayfield and Roger A. Sayle},
title = {{ZINC}20{\textemdash}A Free Ultralarge-Scale Chemical Database for Ligand Discovery},
journal = {Journal of Chemical Information and Modeling}
}
### Contributions
This dataset was curated and added by @zanussbaum.
| [
"# Dataset Card for Zinc20",
"## Dataset Description\n\n- Homepage: URL\n- Paper: URL",
"### Dataset Summary\n\nZINC is a publicly available database that aggregates commercially available and annotated compounds. \nZINC provides downloadable 2D and 3D versions as well as a website that enables rapid molecule lookup and analog search.\nZINC has grown from fewer than 1 million compounds in 2005 to nearly 2 billion now.\nThis dataset includes ~1B molecules in total. We have filtered out any compounds that were not avaible to be converted from 'smiles' to 'seflies' representations.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits\n\nThe dataset is split into an 80/10/10 train/valid/test random split across files (which roughly corresponds to the same percentages)",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nInitial data was released at URL We have downloaded and added a 'selfies' field and filtered out all molecules that did not contain molecules that could be converted to 'selfies' representations.\n\n\n\n@article{Irwin2020,\n doi = {10.1021/URL.0c00675},\n url = {URL\n year = {2020},\n month = oct,\n publisher = {American Chemical Society ({ACS})},\n volume = {60},\n number = {12},\n pages = {6065--6073},\n author = {John J. Irwin and Khanh G. Tang and Jennifer Young and Chinzorig Dandarchuluun and Benjamin R. Wong and Munkhzul Khurelbaatar and Yurii S. Moroz and John Mayfield and Roger A. Sayle},\n title = {{ZINC}20{\\textemdash}A Free Ultralarge-Scale Chemical Database for Ligand Discovery},\n journal = {Journal of Chemical Information and Modeling}\n}",
"### Contributions\n\nThis dataset was curated and added by @zanussbaum."
] | [
"TAGS\n#size_categories-1B<n<10B #license-mit #bio #selfies #smiles #small_molecules #region-us \n",
"# Dataset Card for Zinc20",
"## Dataset Description\n\n- Homepage: URL\n- Paper: URL",
"### Dataset Summary\n\nZINC is a publicly available database that aggregates commercially available and annotated compounds. \nZINC provides downloadable 2D and 3D versions as well as a website that enables rapid molecule lookup and analog search.\nZINC has grown from fewer than 1 million compounds in 2005 to nearly 2 billion now.\nThis dataset includes ~1B molecules in total. We have filtered out any compounds that were not avaible to be converted from 'smiles' to 'seflies' representations.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits\n\nThe dataset is split into an 80/10/10 train/valid/test random split across files (which roughly corresponds to the same percentages)",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nInitial data was released at URL We have downloaded and added a 'selfies' field and filtered out all molecules that did not contain molecules that could be converted to 'selfies' representations.\n\n\n\n@article{Irwin2020,\n doi = {10.1021/URL.0c00675},\n url = {URL\n year = {2020},\n month = oct,\n publisher = {American Chemical Society ({ACS})},\n volume = {60},\n number = {12},\n pages = {6065--6073},\n author = {John J. Irwin and Khanh G. Tang and Jennifer Young and Chinzorig Dandarchuluun and Benjamin R. Wong and Munkhzul Khurelbaatar and Yurii S. Moroz and John Mayfield and Roger A. Sayle},\n title = {{ZINC}20{\\textemdash}A Free Ultralarge-Scale Chemical Database for Ligand Discovery},\n journal = {Journal of Chemical Information and Modeling}\n}",
"### Contributions\n\nThis dataset was curated and added by @zanussbaum."
] |
ccfc48a7e02b349c04c506937c014b85945130ee |
### Roboflow Dataset Page
https://universe.roboflow.com/smoke-detection/smoke100-uwe4t/dataset/4
### Dataset Labels
```
['smoke']
```
### Citation
```
@misc{ smoke100-uwe4t_dataset,
title = { Smoke100 Dataset },
type = { Open Source Dataset },
author = { Smoke Detection },
howpublished = { \\url{ https://universe.roboflow.com/smoke-detection/smoke100-uwe4t } },
url = { https://universe.roboflow.com/smoke-detection/smoke100-uwe4t },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { dec },
note = { visited on 2023-01-02 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.ai on March 17, 2022 at 3:42 PM GMT
It includes 21578 images.
Smoke instances are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 640x640 (Stretch)
No image augmentation techniques were applied.
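These steps were applied by Roboflow at export time. If new images need to match the same preprocessing, a rough Pillow equivalent could look like the sketch below; the resampling filter is an assumption, since the export settings do not specify one.

```python
# Approximate re-implementation of the export-time preprocessing, assuming Pillow.
from PIL import Image, ImageOps

def preprocess(path: str) -> Image.Image:
    img = Image.open(path)
    img = ImageOps.exif_transpose(img)  # auto-orient the pixels and drop the EXIF orientation flag
    img = img.convert("RGB")
    return img.resize((640, 640))       # stretch to 640x640; resampling filter left at Pillow's default

# preprocess("frame.jpg").save("frame_640.jpg")
```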
| keremberke/smoke-object-detection | [
"task_categories:object-detection",
"roboflow",
"region:us"
] | 2023-01-04T20:41:37+00:00 | {"task_categories": ["object-detection"], "tags": ["roboflow"]} | 2023-01-04T20:54:45+00:00 | [] | [] | TAGS
#task_categories-object-detection #roboflow #region-us
|
### Roboflow Dataset Page
URL
### Dataset Labels
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via URL on March 17, 2022 at 3:42 PM GMT
It includes 21578 images.
Smoke are annotated in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 640x640 (Stretch)
No image augmentation techniques were applied.
| [
"### Roboflow Dataset Page\nURL",
"### Dataset Labels",
"### License\nCC BY 4.0",
"### Dataset Summary\nThis dataset was exported via URL on March 17, 2022 at 3:42 PM GMT\n\nIt includes 21578 images.\nSmoke are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n* Resize to 640x640 (Stretch)\n\nNo image augmentation techniques were applied."
] | [
"TAGS\n#task_categories-object-detection #roboflow #region-us \n",
"### Roboflow Dataset Page\nURL",
"### Dataset Labels",
"### License\nCC BY 4.0",
"### Dataset Summary\nThis dataset was exported via URL on March 17, 2022 at 3:42 PM GMT\n\nIt includes 21578 images.\nSmoke are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n* Resize to 640x640 (Stretch)\n\nNo image augmentation techniques were applied."
] |
b526fa706ed98817ebf35bf66bf5c27f5174dffc | # Dataset Card for "septuagint"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | epaolinos/septuagint | [
"region:us"
] | 2023-01-04T21:31:08+00:00 | {"dataset_info": {"features": [{"name": "Book", "dtype": "string"}, {"name": "Chapter", "dtype": "int64"}, {"name": "Verse Number", "dtype": "int64"}, {"name": "Verse Text", "dtype": "string"}, {"name": "Genre", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9101054, "num_examples": 30568}], "download_size": 3421032, "dataset_size": 9101054}} | 2023-01-04T21:31:19+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "septuagint"
More Information needed | [
"# Dataset Card for \"septuagint\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"septuagint\"\n\nMore Information needed"
] |
e78e05770d11783d5a49429b17f2dc157730a7f3 | # Dataset Card for "dreambooth-hackathon-images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Amy12zz/dreambooth-hackathon-images | [
"region:us"
] | 2023-01-04T22:05:18+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1047395.0, "num_examples": 4}], "download_size": 1047434, "dataset_size": 1047395.0}} | 2023-01-04T22:05:25+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "dreambooth-hackathon-images"
More Information needed | [
"# Dataset Card for \"dreambooth-hackathon-images\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"dreambooth-hackathon-images\"\n\nMore Information needed"
] |
ea57ec2a2257d517f92775b6ce1083df76837ee0 | # Dataset Card for "processed_sroie_donut_dataset_json2token"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ivelin/processed_sroie_donut_dataset_json2token | [
"region:us"
] | 2023-01-05T00:19:04+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 586245601.0, "num_examples": 626}], "download_size": 577293738, "dataset_size": 586245601.0}} | 2023-01-05T00:19:38+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "processed_sroie_donut_dataset_json2token"
More Information needed | [
"# Dataset Card for \"processed_sroie_donut_dataset_json2token\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"processed_sroie_donut_dataset_json2token\"\n\nMore Information needed"
] |
b19eec145643c02045f384962d69ab4cb98ed6fb | # Dataset Card for "processed_sroie_donut_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ivelin/processed_sroie_donut_dataset | [
"region:us"
] | 2023-01-05T00:28:11+00:00 | {"dataset_info": {"features": [{"name": "pixel_values", "sequence": {"sequence": {"sequence": "float32"}}}, {"name": "labels", "sequence": "int64"}, {"name": "target_sequence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9243064809, "num_examples": 626}], "download_size": 919646545, "dataset_size": 9243064809}} | 2023-01-05T01:01:56+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "processed_sroie_donut_dataset"
More Information needed | [
"# Dataset Card for \"processed_sroie_donut_dataset\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"processed_sroie_donut_dataset\"\n\nMore Information needed"
] |
eda2739be485cc048f79950aa94fa84a62ed4d61 | # Dataset Card for "processed_sroie_donut_dataset_train_test_split"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ivelin/processed_sroie_donut_dataset_train_test_split | [
"region:us"
] | 2023-01-05T00:36:08+00:00 | {"dataset_info": {"features": [{"name": "pixel_values", "sequence": {"sequence": {"sequence": "float32"}}}, {"name": "labels", "sequence": "int64"}, {"name": "target_sequence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8312852216.400958, "num_examples": 563}, {"name": "test", "num_bytes": 930212592.5990416, "num_examples": 63}], "download_size": 919833989, "dataset_size": 9243064809.0}} | 2023-01-05T01:05:53+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "processed_sroie_donut_dataset_train_test_split"
More Information needed | [
"# Dataset Card for \"processed_sroie_donut_dataset_train_test_split\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"processed_sroie_donut_dataset_train_test_split\"\n\nMore Information needed"
] |
4f585f6a98085f8b05ef4df964f6e93de1ced0c8 | Images | ariciano/images | [
"region:us"
] | 2023-01-05T00:51:29+00:00 | {} | 2023-01-05T01:06:49+00:00 | [] | [] | TAGS
#region-us
| Images | [] | [
"TAGS\n#region-us \n"
] |
e2824b6afcb102d19833d33712b1b6d56c712a9e |
# Dataset Card for `antique`
The `antique` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/antique#antique).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=403,666
This dataset is used by: [`antique_test`](https://huggingface.co/datasets/irds/antique_test), [`antique_test_non-offensive`](https://huggingface.co/datasets/irds/antique_test_non-offensive), [`antique_train`](https://huggingface.co/datasets/irds/antique_train), [`antique_train_split200-train`](https://huggingface.co/datasets/irds/antique_train_split200-train), [`antique_train_split200-valid`](https://huggingface.co/datasets/irds/antique_train_split200-valid)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/antique', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Hashemi2020Antique,
title={ANTIQUE: A Non-Factoid Question Answering Benchmark},
author={Helia Hashemi and Mohammad Aliannejadi and Hamed Zamani and Bruce Croft},
booktitle={ECIR},
year={2020}
}
```
| irds/antique | [
"task_categories:text-retrieval",
"region:us"
] | 2023-01-05T01:47:04+00:00 | {"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`antique`", "viewer": false} | 2023-01-05T02:43:08+00:00 | [] | [] | TAGS
#task_categories-text-retrieval #region-us
|
# Dataset Card for 'antique'
The 'antique' dataset, provided by the ir-datasets package.
For more information about the dataset, see the documentation.
# Data
This dataset provides:
- 'docs' (documents, i.e., the corpus); count=403,666
This dataset is used by: 'antique_test', 'antique_test_non-offensive', 'antique_train', 'antique_train_split200-train', 'antique_train_split200-valid'
## Usage
Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in Dataset format.
| [
"# Dataset Card for 'antique'\n\nThe 'antique' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.",
"# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=403,666\n\n\nThis dataset is used by: 'antique_test', 'antique_test_non-offensive', 'antique_train', 'antique_train_split200-train', 'antique_train_split200-valid'",
"## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format."
] | [
"TAGS\n#task_categories-text-retrieval #region-us \n",
"# Dataset Card for 'antique'\n\nThe 'antique' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.",
"# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=403,666\n\n\nThis dataset is used by: 'antique_test', 'antique_test_non-offensive', 'antique_train', 'antique_train_split200-train', 'antique_train_split200-valid'",
"## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format."
] |
62a770a02ce76920e093391a25806e14cd3bfd82 |
# Dataset Card for `antique/test`
The `antique/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/antique#antique/test).
# Data
This dataset provides:
- `queries` (i.e., topics); count=200
- `qrels`: (relevance assessments); count=6,589
- For `docs`, use [`irds/antique`](https://huggingface.co/datasets/irds/antique)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/antique_test', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/antique_test', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
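For instance, to attach document text to the relevance judgments, the corpus from `irds/antique` can be joined against the `qrels` by `doc_id`. The sketch below keeps the whole corpus in an in-memory dictionary, which is feasible at this corpus size.

```python
from datasets import load_dataset

docs = load_dataset('irds/antique', 'docs')
qrels = load_dataset('irds/antique_test', 'qrels')

# the ~403k documents fit comfortably in memory as a doc_id -> text lookup
doc_text = {doc['doc_id']: doc['text'] for doc in docs}

for judgment in qrels:
    passage = doc_text.get(judgment['doc_id'], '')
    # judgment['relevance'] grades how well `passage` answers query judgment['query_id']
```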
## Citation Information
```
@inproceedings{Hashemi2020Antique,
title={ANTIQUE: A Non-Factoid Question Answering Benchmark},
author={Helia Hashemi and Mohammad Aliannejadi and Hamed Zamani and Bruce Croft},
booktitle={ECIR},
year={2020}
}
```
| irds/antique_test | [
"task_categories:text-retrieval",
"source_datasets:irds/antique",
"region:us"
] | 2023-01-05T02:18:42+00:00 | {"source_datasets": ["irds/antique"], "task_categories": ["text-retrieval"], "pretty_name": "`antique/test`", "viewer": false} | 2023-01-05T02:43:12+00:00 | [] | [] | TAGS
#task_categories-text-retrieval #source_datasets-irds/antique #region-us
|
# Dataset Card for 'antique/test'
The 'antique/test' dataset, provided by the ir-datasets package.
For more information about the dataset, see the documentation.
# Data
This dataset provides:
- 'queries' (i.e., topics); count=200
- 'qrels': (relevance assessments); count=6,589
- For 'docs', use 'irds/antique'
## Usage
Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in Dataset format.
| [
"# Dataset Card for 'antique/test'\n\nThe 'antique/test' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.",
"# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=200\n - 'qrels': (relevance assessments); count=6,589\n\n - For 'docs', use 'irds/antique'",
"## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format."
] | [
"TAGS\n#task_categories-text-retrieval #source_datasets-irds/antique #region-us \n",
"# Dataset Card for 'antique/test'\n\nThe 'antique/test' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.",
"# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=200\n - 'qrels': (relevance assessments); count=6,589\n\n - For 'docs', use 'irds/antique'",
"## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format."
] |
25fec169c670281089ac223682bd521eb0f005fe |
# Dataset Card for `antique/test/non-offensive`
The `antique/test/non-offensive` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/antique#antique/test/non-offensive).
# Data
This dataset provides:
- `queries` (i.e., topics); count=176
- `qrels`: (relevance assessments); count=5,752
- For `docs`, use [`irds/antique`](https://huggingface.co/datasets/irds/antique)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/antique_test_non-offensive', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/antique_test_non-offensive', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Hashemi2020Antique,
title={ANTIQUE: A Non-Factoid Question Answering Benchmark},
author={Helia Hashemi and Mohammad Aliannejadi and Hamed Zamani and Bruce Croft},
booktitle={ECIR},
year={2020}
}
```
| irds/antique_test_non-offensive | [
"task_categories:text-retrieval",
"source_datasets:irds/antique",
"region:us"
] | 2023-01-05T02:18:53+00:00 | {"source_datasets": ["irds/antique"], "task_categories": ["text-retrieval"], "pretty_name": "`antique/test/non-offensive`", "viewer": false} | 2023-01-05T02:43:17+00:00 | [] | [] | TAGS
#task_categories-text-retrieval #source_datasets-irds/antique #region-us
|
# Dataset Card for 'antique/test/non-offensive'
The 'antique/test/non-offensive' dataset, provided by the ir-datasets package.
For more information about the dataset, see the documentation.
# Data
This dataset provides:
- 'queries' (i.e., topics); count=176
- 'qrels': (relevance assessments); count=5,752
- For 'docs', use 'irds/antique'
## Usage
Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in Dataset format.
| [
"# Dataset Card for 'antique/test/non-offensive'\n\nThe 'antique/test/non-offensive' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.",
"# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=176\n - 'qrels': (relevance assessments); count=5,752\n\n - For 'docs', use 'irds/antique'",
"## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format."
] | [
"TAGS\n#task_categories-text-retrieval #source_datasets-irds/antique #region-us \n",
"# Dataset Card for 'antique/test/non-offensive'\n\nThe 'antique/test/non-offensive' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.",
"# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=176\n - 'qrels': (relevance assessments); count=5,752\n\n - For 'docs', use 'irds/antique'",
"## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format."
] |
1d244f247e15d199b38ee5b410a63958809fbd02 |
# Dataset Card for `antique/train`
The `antique/train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/antique#antique/train).
# Data
This dataset provides:
- `queries` (i.e., topics); count=2,426
- `qrels`: (relevance assessments); count=27,422
- For `docs`, use [`irds/antique`](https://huggingface.co/datasets/irds/antique)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/antique_train', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/antique_train', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Hashemi2020Antique,
title={ANTIQUE: A Non-Factoid Question Answering Benchmark},
author={Helia Hashemi and Mohammad Aliannejadi and Hamed Zamani and Bruce Croft},
booktitle={ECIR},
year={2020}
}
```
| irds/antique_train | [
"task_categories:text-retrieval",
"source_datasets:irds/antique",
"region:us"
] | 2023-01-05T02:19:05+00:00 | {"source_datasets": ["irds/antique"], "task_categories": ["text-retrieval"], "pretty_name": "`antique/train`", "viewer": false} | 2023-01-05T02:43:21+00:00 | [] | [] | TAGS
#task_categories-text-retrieval #source_datasets-irds/antique #region-us
|
# Dataset Card for 'antique/train'
The 'antique/train' dataset, provided by the ir-datasets package.
For more information about the dataset, see the documentation.
# Data
This dataset provides:
- 'queries' (i.e., topics); count=2,426
- 'qrels': (relevance assessments); count=27,422
- For 'docs', use 'irds/antique'
## Usage
Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in Dataset format.
| [
"# Dataset Card for 'antique/train'\n\nThe 'antique/train' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.",
"# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=2,426\n - 'qrels': (relevance assessments); count=27,422\n\n - For 'docs', use 'irds/antique'",
"## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format."
] | [
"TAGS\n#task_categories-text-retrieval #source_datasets-irds/antique #region-us \n",
"# Dataset Card for 'antique/train'\n\nThe 'antique/train' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.",
"# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=2,426\n - 'qrels': (relevance assessments); count=27,422\n\n - For 'docs', use 'irds/antique'",
"## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format."
] |
dee4e966f8d8f0de94b7d7b627d6a1b83bc5aeec |
# Dataset Card for `antique/train/split200-train`
The `antique/train/split200-train` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/antique#antique/train/split200-train).
# Data
This dataset provides:
- `queries` (i.e., topics); count=2,226
- `qrels`: (relevance assessments); count=25,229
- For `docs`, use [`irds/antique`](https://huggingface.co/datasets/irds/antique)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/antique_train_split200-train', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/antique_train_split200-train', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Hashemi2020Antique,
title={ANTIQUE: A Non-Factoid Question Answering Benchmark},
author={Helia Hashemi and Mohammad Aliannejadi and Hamed Zamani and Bruce Croft},
booktitle={ECIR},
year={2020}
}
```
| irds/antique_train_split200-train | [
"task_categories:text-retrieval",
"source_datasets:irds/antique",
"region:us"
] | 2023-01-05T02:19:16+00:00 | {"source_datasets": ["irds/antique"], "task_categories": ["text-retrieval"], "pretty_name": "`antique/train/split200-train`", "viewer": false} | 2023-01-05T02:43:26+00:00 | [] | [] | TAGS
#task_categories-text-retrieval #source_datasets-irds/antique #region-us
|
# Dataset Card for 'antique/train/split200-train'
The 'antique/train/split200-train' dataset, provided by the ir-datasets package.
For more information about the dataset, see the documentation.
# Data
This dataset provides:
- 'queries' (i.e., topics); count=2,226
- 'qrels': (relevance assessments); count=25,229
- For 'docs', use 'irds/antique'
## Usage
Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in Dataset format.
| [
"# Dataset Card for 'antique/train/split200-train'\n\nThe 'antique/train/split200-train' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.",
"# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=2,226\n - 'qrels': (relevance assessments); count=25,229\n\n - For 'docs', use 'irds/antique'",
"## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format."
] | [
"TAGS\n#task_categories-text-retrieval #source_datasets-irds/antique #region-us \n",
"# Dataset Card for 'antique/train/split200-train'\n\nThe 'antique/train/split200-train' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.",
"# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=2,226\n - 'qrels': (relevance assessments); count=25,229\n\n - For 'docs', use 'irds/antique'",
"## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format."
] |
34cb5fbfb733863d08bc185708ce45b66cc3f088 |
# Dataset Card for `antique/train/split200-valid`
The `antique/train/split200-valid` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/antique#antique/train/split200-valid).
# Data
This dataset provides:
- `queries` (i.e., topics); count=200
- `qrels`: (relevance assessments); count=2,193
- For `docs`, use [`irds/antique`](https://huggingface.co/datasets/irds/antique)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/antique_train_split200-valid', 'queries')
for record in queries:
record # {'query_id': ..., 'text': ...}
qrels = load_dataset('irds/antique_train_split200-valid', 'qrels')
for record in qrels:
record # {'query_id': ..., 'doc_id': ..., 'relevance': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@inproceedings{Hashemi2020Antique,
title={ANTIQUE: A Non-Factoid Question Answering Benchmark},
author={Helia Hashemi and Mohammad Aliannejadi and Hamed Zamani and Bruce Croft},
booktitle={ECIR},
year={2020}
}
```
| irds/antique_train_split200-valid | [
"task_categories:text-retrieval",
"source_datasets:irds/antique",
"region:us"
] | 2023-01-05T02:19:27+00:00 | {"source_datasets": ["irds/antique"], "task_categories": ["text-retrieval"], "pretty_name": "`antique/train/split200-valid`", "viewer": false} | 2023-01-05T02:43:31+00:00 | [] | [] | TAGS
#task_categories-text-retrieval #source_datasets-irds/antique #region-us
|
# Dataset Card for 'antique/train/split200-valid'
The 'antique/train/split200-valid' dataset, provided by the ir-datasets package.
For more information about the dataset, see the documentation.
# Data
This dataset provides:
- 'queries' (i.e., topics); count=200
- 'qrels': (relevance assessments); count=2,193
- For 'docs', use 'irds/antique'
## Usage
Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in Dataset format.
| [
"# Dataset Card for 'antique/train/split200-valid'\n\nThe 'antique/train/split200-valid' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.",
"# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=200\n - 'qrels': (relevance assessments); count=2,193\n\n - For 'docs', use 'irds/antique'",
"## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format."
] | [
"TAGS\n#task_categories-text-retrieval #source_datasets-irds/antique #region-us \n",
"# Dataset Card for 'antique/train/split200-valid'\n\nThe 'antique/train/split200-valid' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.",
"# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=200\n - 'qrels': (relevance assessments); count=2,193\n\n - For 'docs', use 'irds/antique'",
"## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format."
] |
68fc2bf4d093ac0f849236e0c32df90df2489a39 |
# Dataset Card for `aquaint`
The `aquaint` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/aquaint#aquaint).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=1,033,461
This dataset is used by: [`aquaint_trec-robust-2005`](https://huggingface.co/datasets/irds/aquaint_trec-robust-2005)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/aquaint', 'docs')
for record in docs:
record # {'doc_id': ..., 'text': ..., 'marked_up_doc': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@misc{Graff2002Aquaint,
title={The AQUAINT Corpus of English News Text},
author={David Graff},
year={2002},
url={https://catalog.ldc.upenn.edu/LDC2002T31},
publisher={Linguistic Data Consortium}
}
```
| irds/aquaint | [
"task_categories:text-retrieval",
"region:us"
] | 2023-01-05T02:19:38+00:00 | {"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`aquaint`", "viewer": false} | 2023-01-05T02:44:06+00:00 | [] | [] | TAGS
#task_categories-text-retrieval #region-us
|
# Dataset Card for 'aquaint'
The 'aquaint' dataset, provided by the ir-datasets package.
For more information about the dataset, see the documentation.
# Data
This dataset provides:
- 'docs' (documents, i.e., the corpus); count=1,033,461
This dataset is used by: 'aquaint_trec-robust-2005'
## Usage
Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in Dataset format.
| [
"# Dataset Card for 'aquaint'\n\nThe 'aquaint' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.",
"# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=1,033,461\n\n\nThis dataset is used by: 'aquaint_trec-robust-2005'",
"## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format."
] | [
"TAGS\n#task_categories-text-retrieval #region-us \n",
"# Dataset Card for 'aquaint'\n\nThe 'aquaint' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.",
"# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=1,033,461\n\n\nThis dataset is used by: 'aquaint_trec-robust-2005'",
"## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format."
] |
0156f662bec09957647bddc5faf1f63170f912ab |
# Dataset Card for `aquaint/trec-robust-2005`
The `aquaint/trec-robust-2005` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/aquaint#aquaint/trec-robust-2005).
# Data
This dataset provides:
- `queries` (i.e., topics); count=50
- `qrels`: (relevance assessments); count=37,798
- For `docs`, use [`irds/aquaint`](https://huggingface.co/datasets/irds/aquaint)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/aquaint_trec-robust-2005', 'queries')
for record in queries:
    record # {'query_id': ..., 'title': ..., 'description': ..., 'narrative': ...}

qrels = load_dataset('irds/aquaint_trec-robust-2005', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
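Since the document collection itself is not part of this dataset, a minimal sketch for pairing these topics and qrels with the corpus (assuming the same access pattern as above) is:
```python
from datasets import load_dataset

# Hypothetical sketch: the docs live in the linked corpus dataset 'irds/aquaint'.
docs = load_dataset('irds/aquaint', 'docs')
queries = load_dataset('irds/aquaint_trec-robust-2005', 'queries')
qrels = load_dataset('irds/aquaint_trec-robust-2005', 'qrels')
```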
## Citation Information
```
@inproceedings{Voorhees2005Robust,
title={Overview of the TREC 2005 Robust Retrieval Track},
author={Ellen M. Voorhees},
booktitle={TREC},
year={2005}
}
@misc{Graff2002Aquaint,
title={The AQUAINT Corpus of English News Text},
author={David Graff},
year={2002},
url={https://catalog.ldc.upenn.edu/LDC2002T31},
publisher={Linguistic Data Consortium}
}
```
| irds/aquaint_trec-robust-2005 | [
"task_categories:text-retrieval",
"source_datasets:irds/aquaint",
"region:us"
] | 2023-01-05T02:19:49+00:00 | {"source_datasets": ["irds/aquaint"], "task_categories": ["text-retrieval"], "pretty_name": "`aquaint/trec-robust-2005`", "viewer": false} | 2023-01-05T02:44:10+00:00 | [] | [] | TAGS
#task_categories-text-retrieval #source_datasets-irds/aquaint #region-us
|
# Dataset Card for 'aquaint/trec-robust-2005'
The 'aquaint/trec-robust-2005' dataset, provided by the ir-datasets package.
For more information about the dataset, see the documentation.
# Data
This dataset provides:
- 'queries' (i.e., topics); count=50
- 'qrels': (relevance assessments); count=37,798
- For 'docs', use 'irds/aquaint'
## Usage
Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in Dataset format.
| [
"# Dataset Card for 'aquaint/trec-robust-2005'\n\nThe 'aquaint/trec-robust-2005' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.",
"# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=37,798\n\n - For 'docs', use 'irds/aquaint'",
"## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format."
] | [
"TAGS\n#task_categories-text-retrieval #source_datasets-irds/aquaint #region-us \n",
"# Dataset Card for 'aquaint/trec-robust-2005'\n\nThe 'aquaint/trec-robust-2005' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.",
"# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=50\n - 'qrels': (relevance assessments); count=37,798\n\n - For 'docs', use 'irds/aquaint'",
"## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format."
] |
e8c6115186533dab575310ae5bd22e45246183a0 |
# Dataset Card for `beir/arguana`
The `beir/arguana` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/arguana).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=8,674
- `queries` (i.e., topics); count=1,406
- `qrels`: (relevance assessments); count=1,406
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/beir_arguana', 'docs')
for record in docs:
    record # {'doc_id': ..., 'text': ..., 'title': ...}

queries = load_dataset('irds/beir_arguana', 'queries')
for record in queries:
    record # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/beir_arguana', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
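A small follow-on sketch (not part of the original card) that groups the relevance assessments by query, a shape many evaluation utilities expect:
```python
from collections import defaultdict

from datasets import load_dataset

# Hypothetical sketch: build query_id -> {doc_id: relevance} from the qrels records.
qrels = load_dataset('irds/beir_arguana', 'qrels')
qrels_by_query = defaultdict(dict)
for record in qrels:
    qrels_by_query[record['query_id']][record['doc_id']] = record['relevance']
```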
## Citation Information
```
@inproceedings{Wachsmuth2018Arguana,
author = "Wachsmuth, Henning and Syed, Shahbaz and Stein, Benno",
title = "Retrieval of the Best Counterargument without Prior Topic Knowledge",
booktitle = "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
year = "2018",
publisher = "Association for Computational Linguistics",
location = "Melbourne, Australia",
pages = "241--251",
url = "http://aclweb.org/anthology/P18-1023"
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
| irds/beir_arguana | [
"task_categories:text-retrieval",
"arxiv:2104.08663",
"region:us"
] | 2023-01-05T02:20:01+00:00 | {"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`beir/arguana`", "viewer": false} | 2023-01-05T02:44:15+00:00 | [
"2104.08663"
] | [] | TAGS
#task_categories-text-retrieval #arxiv-2104.08663 #region-us
|
# Dataset Card for 'beir/arguana'
The 'beir/arguana' dataset, provided by the ir-datasets package.
For more information about the dataset, see the documentation.
# Data
This dataset provides:
- 'docs' (documents, i.e., the corpus); count=8,674
- 'queries' (i.e., topics); count=1,406
- 'qrels': (relevance assessments); count=1,406
## Usage
Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in Dataset format.
| [
"# Dataset Card for 'beir/arguana'\n\nThe 'beir/arguana' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.",
"# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=8,674\n - 'queries' (i.e., topics); count=1,406\n - 'qrels': (relevance assessments); count=1,406",
"## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format."
] | [
"TAGS\n#task_categories-text-retrieval #arxiv-2104.08663 #region-us \n",
"# Dataset Card for 'beir/arguana'\n\nThe 'beir/arguana' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.",
"# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=8,674\n - 'queries' (i.e., topics); count=1,406\n - 'qrels': (relevance assessments); count=1,406",
"## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format."
] |
1003a6b347a616074e510c56b5efb92c2a5003d8 |
# Dataset Card for `beir/climate-fever`
The `beir/climate-fever` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/climate-fever).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=5,416,593
- `queries` (i.e., topics); count=1,535
- `qrels`: (relevance assessments); count=4,681
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/beir_climate-fever', 'docs')
for record in docs:
    record # {'doc_id': ..., 'text': ..., 'title': ...}

queries = load_dataset('irds/beir_climate-fever', 'queries')
for record in queries:
    record # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/beir_climate-fever', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@article{Diggelmann2020CLIMATEFEVERAD,
title={CLIMATE-FEVER: A Dataset for Verification of Real-World Climate Claims},
author={T. Diggelmann and Jordan L. Boyd-Graber and Jannis Bulian and Massimiliano Ciaramita and Markus Leippold},
journal={ArXiv},
year={2020},
volume={abs/2012.00614}
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
| irds/beir_climate-fever | [
"task_categories:text-retrieval",
"arxiv:2104.08663",
"region:us"
] | 2023-01-05T02:20:12+00:00 | {"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`beir/climate-fever`", "viewer": false} | 2023-01-05T02:44:20+00:00 | [
"2104.08663"
] | [] | TAGS
#task_categories-text-retrieval #arxiv-2104.08663 #region-us
|
# Dataset Card for 'beir/climate-fever'
The 'beir/climate-fever' dataset, provided by the ir-datasets package.
For more information about the dataset, see the documentation.
# Data
This dataset provides:
- 'docs' (documents, i.e., the corpus); count=5,416,593
- 'queries' (i.e., topics); count=1,535
- 'qrels': (relevance assessments); count=4,681
## Usage
Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in Dataset format.
| [
"# Dataset Card for 'beir/climate-fever'\n\nThe 'beir/climate-fever' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.",
"# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=5,416,593\n - 'queries' (i.e., topics); count=1,535\n - 'qrels': (relevance assessments); count=4,681",
"## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format."
] | [
"TAGS\n#task_categories-text-retrieval #arxiv-2104.08663 #region-us \n",
"# Dataset Card for 'beir/climate-fever'\n\nThe 'beir/climate-fever' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.",
"# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=5,416,593\n - 'queries' (i.e., topics); count=1,535\n - 'qrels': (relevance assessments); count=4,681",
"## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format."
] |
17727173215cb2034ee1943eea2cd6125b88f7f6 |
# Dataset Card for `beir/dbpedia-entity`
The `beir/dbpedia-entity` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/dbpedia-entity).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=4,635,922
- `queries` (i.e., topics); count=467
This dataset is used by: [`beir_dbpedia-entity_dev`](https://huggingface.co/datasets/irds/beir_dbpedia-entity_dev), [`beir_dbpedia-entity_test`](https://huggingface.co/datasets/irds/beir_dbpedia-entity_test)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/beir_dbpedia-entity', 'docs')
for record in docs:
    record # {'doc_id': ..., 'text': ..., 'title': ..., 'url': ...}

queries = load_dataset('irds/beir_dbpedia-entity', 'queries')
for record in queries:
    record # {'query_id': ..., 'text': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
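Because the relevance assessments live in the dev/test split datasets listed above, a hedged sketch that prepares a query lookup to pair with them later:
```python
from datasets import load_dataset

# Hypothetical sketch: map query_id -> query text; the qrels come from
# 'irds/beir_dbpedia-entity_dev' or 'irds/beir_dbpedia-entity_test'.
queries = load_dataset('irds/beir_dbpedia-entity', 'queries')
query_text = {record['query_id']: record['text'] for record in queries}
```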
## Citation Information
```
@article{Hasibi2017DBpediaEntityVA,
title={DBpedia-Entity v2: A Test Collection for Entity Search},
author={Faegheh Hasibi and Fedor Nikolaev and Chenyan Xiong and K. Balog and S. E. Bratsberg and Alexander Kotov and J. Callan},
journal={Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval},
year={2017}
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
| irds/beir_dbpedia-entity | [
"task_categories:text-retrieval",
"arxiv:2104.08663",
"region:us"
] | 2023-01-05T02:20:23+00:00 | {"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`beir/dbpedia-entity`", "viewer": false} | 2023-01-05T02:44:24+00:00 | [
"2104.08663"
] | [] | TAGS
#task_categories-text-retrieval #arxiv-2104.08663 #region-us
|
# Dataset Card for 'beir/dbpedia-entity'
The 'beir/dbpedia-entity' dataset, provided by the ir-datasets package.
For more information about the dataset, see the documentation.
# Data
This dataset provides:
- 'docs' (documents, i.e., the corpus); count=4,635,922
- 'queries' (i.e., topics); count=467
This dataset is used by: 'beir_dbpedia-entity_dev', 'beir_dbpedia-entity_test'
## Usage
Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in Dataset format.
| [
"# Dataset Card for 'beir/dbpedia-entity'\n\nThe 'beir/dbpedia-entity' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.",
"# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=4,635,922\n - 'queries' (i.e., topics); count=467\n\n\nThis dataset is used by: 'beir_dbpedia-entity_dev', 'beir_dbpedia-entity_test'",
"## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format."
] | [
"TAGS\n#task_categories-text-retrieval #arxiv-2104.08663 #region-us \n",
"# Dataset Card for 'beir/dbpedia-entity'\n\nThe 'beir/dbpedia-entity' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.",
"# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=4,635,922\n - 'queries' (i.e., topics); count=467\n\n\nThis dataset is used by: 'beir_dbpedia-entity_dev', 'beir_dbpedia-entity_test'",
"## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format."
] |
4f7dd1e62b688e00d665ddee9aef5935eb7d8568 |
# Dataset Card for `beir/dbpedia-entity/dev`
The `beir/dbpedia-entity/dev` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/dbpedia-entity/dev).
# Data
This dataset provides:
- `queries` (i.e., topics); count=67
- `qrels`: (relevance assessments); count=5,673
- For `docs`, use [`irds/beir_dbpedia-entity`](https://huggingface.co/datasets/irds/beir_dbpedia-entity)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/beir_dbpedia-entity_dev', 'queries')
for record in queries:
    record # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/beir_dbpedia-entity_dev', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
## Citation Information
```
@article{Hasibi2017DBpediaEntityVA,
title={DBpedia-Entity v2: A Test Collection for Entity Search},
author={Faegheh Hasibi and Fedor Nikolaev and Chenyan Xiong and K. Balog and S. E. Bratsberg and Alexander Kotov and J. Callan},
journal={Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval},
year={2017}
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
| irds/beir_dbpedia-entity_dev | [
"task_categories:text-retrieval",
"source_datasets:irds/beir_dbpedia-entity",
"arxiv:2104.08663",
"region:us"
] | 2023-01-05T02:20:34+00:00 | {"source_datasets": ["irds/beir_dbpedia-entity"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/dbpedia-entity/dev`", "viewer": false} | 2023-01-05T02:44:29+00:00 | [
"2104.08663"
] | [] | TAGS
#task_categories-text-retrieval #source_datasets-irds/beir_dbpedia-entity #arxiv-2104.08663 #region-us
|
# Dataset Card for 'beir/dbpedia-entity/dev'
The 'beir/dbpedia-entity/dev' dataset, provided by the ir-datasets package.
For more information about the dataset, see the documentation.
# Data
This dataset provides:
- 'queries' (i.e., topics); count=67
- 'qrels': (relevance assessments); count=5,673
- For 'docs', use 'irds/beir_dbpedia-entity'
## Usage
Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in Dataset format.
| [
"# Dataset Card for 'beir/dbpedia-entity/dev'\n\nThe 'beir/dbpedia-entity/dev' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.",
"# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=67\n - 'qrels': (relevance assessments); count=5,673\n\n - For 'docs', use 'irds/beir_dbpedia-entity'",
"## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format."
] | [
"TAGS\n#task_categories-text-retrieval #source_datasets-irds/beir_dbpedia-entity #arxiv-2104.08663 #region-us \n",
"# Dataset Card for 'beir/dbpedia-entity/dev'\n\nThe 'beir/dbpedia-entity/dev' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.",
"# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=67\n - 'qrels': (relevance assessments); count=5,673\n\n - For 'docs', use 'irds/beir_dbpedia-entity'",
"## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format."
] |
46e0094ded9f08ae0454de048c60f70ddf77eb52 |
# Dataset Card for `beir/dbpedia-entity/test`
The `beir/dbpedia-entity/test` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/dbpedia-entity/test).
# Data
This dataset provides:
- `queries` (i.e., topics); count=400
- `qrels`: (relevance assessments); count=43,515
- For `docs`, use [`irds/beir_dbpedia-entity`](https://huggingface.co/datasets/irds/beir_dbpedia-entity)
## Usage
```python
from datasets import load_dataset
queries = load_dataset('irds/beir_dbpedia-entity_test', 'queries')
for record in queries:
    record # {'query_id': ..., 'text': ...}

qrels = load_dataset('irds/beir_dbpedia-entity_test', 'qrels')
for record in qrels:
    record # {'query_id': ..., 'doc_id': ..., 'relevance': ..., 'iteration': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
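As a quick follow-up (not from the original card), counting how many judged documents each of the 400 test queries has:
```python
from collections import Counter

from datasets import load_dataset

# Hypothetical sketch: number of judged (query, doc) pairs per test query.
qrels = load_dataset('irds/beir_dbpedia-entity_test', 'qrels')
judged_per_query = Counter(record['query_id'] for record in qrels)
```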
## Citation Information
```
@article{Hasibi2017DBpediaEntityVA,
title={DBpedia-Entity v2: A Test Collection for Entity Search},
author={Faegheh Hasibi and Fedor Nikolaev and Chenyan Xiong and K. Balog and S. E. Bratsberg and Alexander Kotov and J. Callan},
journal={Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval},
year={2017}
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
| irds/beir_dbpedia-entity_test | [
"task_categories:text-retrieval",
"source_datasets:irds/beir_dbpedia-entity",
"arxiv:2104.08663",
"region:us"
] | 2023-01-05T02:44:34+00:00 | {"source_datasets": ["irds/beir_dbpedia-entity"], "task_categories": ["text-retrieval"], "pretty_name": "`beir/dbpedia-entity/test`", "viewer": false} | 2023-01-05T02:44:40+00:00 | [
"2104.08663"
] | [] | TAGS
#task_categories-text-retrieval #source_datasets-irds/beir_dbpedia-entity #arxiv-2104.08663 #region-us
|
# Dataset Card for 'beir/dbpedia-entity/test'
The 'beir/dbpedia-entity/test' dataset, provided by the ir-datasets package.
For more information about the dataset, see the documentation.
# Data
This dataset provides:
- 'queries' (i.e., topics); count=400
- 'qrels': (relevance assessments); count=43,515
- For 'docs', use 'irds/beir_dbpedia-entity'
## Usage
Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in Dataset format.
| [
"# Dataset Card for 'beir/dbpedia-entity/test'\n\nThe 'beir/dbpedia-entity/test' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.",
"# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=400\n - 'qrels': (relevance assessments); count=43,515\n\n - For 'docs', use 'irds/beir_dbpedia-entity'",
"## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format."
] | [
"TAGS\n#task_categories-text-retrieval #source_datasets-irds/beir_dbpedia-entity #arxiv-2104.08663 #region-us \n",
"# Dataset Card for 'beir/dbpedia-entity/test'\n\nThe 'beir/dbpedia-entity/test' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.",
"# Data\n\nThis dataset provides:\n - 'queries' (i.e., topics); count=400\n - 'qrels': (relevance assessments); count=43,515\n\n - For 'docs', use 'irds/beir_dbpedia-entity'",
"## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format."
] |
be5b8c519e4a654fcb4061f99585f37e4bd650e6 |
# Dataset Card for `beir/fever`
The `beir/fever` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/beir#beir/fever).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=5,416,568
- `queries` (i.e., topics); count=123,142
This dataset is used by: [`beir_fever_dev`](https://huggingface.co/datasets/irds/beir_fever_dev), [`beir_fever_test`](https://huggingface.co/datasets/irds/beir_fever_test), [`beir_fever_train`](https://huggingface.co/datasets/irds/beir_fever_train)
## Usage
```python
from datasets import load_dataset
docs = load_dataset('irds/beir_fever', 'docs')
for record in docs:
    record # {'doc_id': ..., 'text': ..., 'title': ...}

queries = load_dataset('irds/beir_fever', 'queries')
for record in queries:
    record # {'query_id': ..., 'text': ...}
```
Note that calling `load_dataset` will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in 🤗 Dataset format.
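A hedged sketch (not part of the original card) for building a doc_id -> title lookup, which is often convenient when inspecting retrieval output; note that materializing all ~5.4M entries needs a fair amount of memory:
```python
from datasets import load_dataset

# Hypothetical sketch: doc_id -> title lookup over the full corpus (memory-heavy).
docs = load_dataset('irds/beir_fever', 'docs')
doc_title = {record['doc_id']: record['title'] for record in docs}
```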
## Citation Information
```
@inproceedings{Thorne2018Fever,
title = "{FEVER}: a Large-scale Dataset for Fact Extraction and {VER}ification",
author = "Thorne, James and
Vlachos, Andreas and
Christodoulopoulos, Christos and
Mittal, Arpit",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)",
month = jun,
year = "2018",
address = "New Orleans, Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/N18-1074",
doi = "10.18653/v1/N18-1074",
pages = "809--819"
}
@article{Thakur2021Beir,
title = "BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models",
author = "Thakur, Nandan and Reimers, Nils and Rücklé, Andreas and Srivastava, Abhishek and Gurevych, Iryna",
journal= "arXiv preprint arXiv:2104.08663",
month = "4",
year = "2021",
url = "https://arxiv.org/abs/2104.08663",
}
```
| irds/beir_fever | [
"task_categories:text-retrieval",
"arxiv:2104.08663",
"region:us"
] | 2023-01-05T02:44:45+00:00 | {"source_datasets": [], "task_categories": ["text-retrieval"], "pretty_name": "`beir/fever`", "viewer": false} | 2023-01-05T02:44:51+00:00 | [
"2104.08663"
] | [] | TAGS
#task_categories-text-retrieval #arxiv-2104.08663 #region-us
|
# Dataset Card for 'beir/fever'
The 'beir/fever' dataset, provided by the ir-datasets package.
For more information about the dataset, see the documentation.
# Data
This dataset provides:
- 'docs' (documents, i.e., the corpus); count=5,416,568
- 'queries' (i.e., topics); count=123,142
This dataset is used by: 'beir_fever_dev', 'beir_fever_test', 'beir_fever_train'
## Usage
Note that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the
data in Dataset format.
| [
"# Dataset Card for 'beir/fever'\n\nThe 'beir/fever' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.",
"# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=5,416,568\n - 'queries' (i.e., topics); count=123,142\n\n\nThis dataset is used by: 'beir_fever_dev', 'beir_fever_test', 'beir_fever_train'",
"## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format."
] | [
"TAGS\n#task_categories-text-retrieval #arxiv-2104.08663 #region-us \n",
"# Dataset Card for 'beir/fever'\n\nThe 'beir/fever' dataset, provided by the ir-datasets package.\nFor more information about the dataset, see the documentation.",
"# Data\n\nThis dataset provides:\n - 'docs' (documents, i.e., the corpus); count=5,416,568\n - 'queries' (i.e., topics); count=123,142\n\n\nThis dataset is used by: 'beir_fever_dev', 'beir_fever_test', 'beir_fever_train'",
"## Usage\n\n\n\nNote that calling 'load_dataset' will download the dataset (or provide access instructions when it's not public) and make a copy of the\ndata in Dataset format."
] |