sha | text | id | tags | created_at | metadata | last_modified | arxiv | languages
---|---|---|---|---|---|---|---|---
9a1d9de442ec35d7058dca8b6f4ee18d1434a1f8 | # AutoTrain Dataset for project: hannah-jpg-test
## Dataset Description
This dataset has been automatically processed by AutoTrain for project hannah-jpg-test.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<256x256 RGB PIL image>",
"target": 0
},
{
"image": "<256x256 RGB PIL image>",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['hannah'], id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 7 |
| valid | 7 |
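A minimal loading sketch (an assumption, not part of the generated card: it uses the repository id shown in this row, `slushily/autotrain-data-hannah-jpg-test`, and the standard 🤗 Datasets API; AutoTrain may expose the splits as `train`/`validation` rather than `valid`):
```python
from datasets import load_dataset

# Hypothetical usage: repository id taken from this row's metadata.
ds = load_dataset("slushily/autotrain-data-hannah-jpg-test")
sample = ds["train"][0]
sample["image"]                       # 256x256 RGB PIL image
ds["train"].features["target"].names  # ['hannah']
```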
| slushily/autotrain-data-hannah-jpg-test | [
"task_categories:image-classification",
"region:us"
] | 2023-01-11T12:27:58+00:00 | {"task_categories": ["image-classification"]} | 2023-01-11T12:30:06+00:00 | [] | [] |
e17505aed638ae4195205e002f821bbecfe685cd | # Dataset Card for "dreambooth-hackathon-images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | GeorgeBredis/dreambooth-hackathon-images | [
"region:us"
] | 2023-01-11T13:28:22+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 3118843.0, "num_examples": 42}], "download_size": 3118955, "dataset_size": 3118843.0}} | 2023-01-11T13:28:35+00:00 | [] | [] |
fd17def157bfb3f1dcb4a372dc0d49489c5c00e1 |
# Dataset Card for E3C
## Dataset Description
- **Homepage:** https://github.com/hltfbk/E3C-Corpus
- **PubMed:** False
- **Public:** True
- **Tasks:** NER, RE
The European Clinical Case Corpus (E3C) project aims at collecting and annotating a large corpus of clinical documents in five European languages (Spanish, Basque, English, French and Italian), which will be freely distributed. Annotations include temporal information, to allow temporal reasoning on chronologies, and information about clinical entities based on medical taxonomies, to be used for semantic reasoning.
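A minimal loading sketch (assumptions: the config name `e3c_source` and split names such as `en.layer1` are taken from this repository's metadata; the repository ships a loading script, so recent 🤗 Datasets versions may additionally require `trust_remote_code=True`):
```python
from datasets import load_dataset

# Hypothetical usage: config and split names taken from the repository metadata.
e3c = load_dataset("bio-datasets/e3c", "e3c_source", split="en.layer1")
doc = e3c[0]
doc["text"]      # raw clinical case text
doc["entities"]  # clinical entity annotations with character offsets
```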
## Citation Information
```
@report{Magnini2021,
author = {Bernardo Magnini and Begoña Altuna and Alberto Lavelli and Manuela Speranza
and Roberto Zanoli and Fondazione Bruno Kessler},
keywords = {Clinical data, clinical entities, corpus, multilingual, temporal information},
title = {The E3C Project:
European Clinical Case Corpus El proyecto E3C: European Clinical Case Corpus},
url = {https://uts.nlm.nih.gov/uts/umls/home},
year = {2021},
}
```
| bio-datasets/e3c | [
"region:us"
] | 2023-01-11T15:13:39+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "document_id", "dtype": "int32"}, {"name": "text", "dtype": "string"}, {"name": "passages", "list": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "offsets", "list": "int32"}]}, {"name": "entities", "list": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "offsets", "list": "int32"}, {"name": "semantic_type_id", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "relations", "list": [{"name": "id", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "contextualAspect", "dtype": "string"}, {"name": "contextualModality", "dtype": "string"}, {"name": "degree", "dtype": "string"}, {"name": "docTimeRel", "dtype": "string"}, {"name": "eventType", "dtype": "string"}, {"name": "permanence", "dtype": "string"}, {"name": "polarity", "dtype": "string"}, {"name": "functionInDocument", "dtype": "string"}, {"name": "timex3Class", "dtype": "string"}, {"name": "value", "dtype": "string"}, {"name": "concept_1", "dtype": "string"}, {"name": "concept_2", "dtype": "string"}]}], "config_name": "e3c_source", "splits": [{"name": "en.layer1", "num_bytes": 1645819, "num_examples": 84}, {"name": "en.layer2", "num_bytes": 881290, "num_examples": 171}, {"name": "en.layer2.validation", "num_bytes": 101379, "num_examples": 19}, {"name": "en.layer3", "num_bytes": 7672589, "num_examples": 9779}, {"name": "es.layer1", "num_bytes": 1398186, "num_examples": 81}, {"name": "es.layer2", "num_bytes": 907515, "num_examples": 162}, {"name": "es.layer2.validation", "num_bytes": 103936, "num_examples": 18}, {"name": "es.layer3", "num_bytes": 6656630, "num_examples": 1876}, {"name": "eu.layer1", "num_bytes": 2217479, "num_examples": 90}, {"name": "eu.layer2", "num_bytes": 306291, "num_examples": 111}, {"name": "eu.layer2.validation", "num_bytes": 95276, "num_examples": 10}, {"name": "eu.layer3", "num_bytes": 4656179, "num_examples": 1232}, {"name": "fr.layer1", "num_bytes": 1474138, "num_examples": 81}, {"name": "fr.layer2", "num_bytes": 905084, "num_examples": 168}, {"name": "fr.layer2.validation", "num_bytes": 101701, "num_examples": 18}, {"name": "fr.layer3", "num_bytes": 457927491, "num_examples": 25740}, {"name": "it.layer1", "num_bytes": 1036560, "num_examples": 86}, {"name": "it.layer2", "num_bytes": 888138, "num_examples": 174}, {"name": "it.layer2.validation", "num_bytes": 99549, "num_examples": 18}, {"name": "it.layer3", "num_bytes": 86243680, "num_examples": 10213}], "download_size": 230213492, "dataset_size": 575318910}} | 2023-08-16T07:56:50+00:00 | [] | [] |
6f7fdea52534b2bc1c83435305ca705f67aed30b | ```python
from datasets import load_dataset

dataset = load_dataset("argilla/banking_sentiment_zs_gpt3")
``` | Mohamed-Ibrahim/Banking | [
"region:us"
] | 2023-01-11T15:22:13+00:00 | {} | 2023-01-11T15:22:53+00:00 | [] | [] |
db6213b37e27be48479ec72d0909328a7c3f515b | # Dataset Card for "mystery_box"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | bakhuisdennis/mystery_box | [
"region:us"
] | 2023-01-11T16:30:19+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 454701.0, "num_examples": 249}], "download_size": 253139, "dataset_size": 454701.0}} | 2023-01-11T16:30:32+00:00 | [] | [] |
d2a17fba6e401fcf957a28a7706a63e3df6e4806 | # AutoTrain Dataset for project: improved-pidgin-model
## Dataset Description
This dataset has been automatically processed by AutoTrain for project improved-pidgin-model.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"source": "My people, good evening!",
"target": "My people, good evening o!"
},
{
"source": "Uh... my name is Kabiru Sule.",
"target": "Ehm my name be Kabiru Sule."
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"source": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 8591 |
| valid | 648 |
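A minimal loading sketch (an assumption, not part of the generated card: repository id taken from this row, `jamm55/freePidginDataset`, using the standard 🤗 Datasets API; AutoTrain may expose the splits as `train`/`validation` rather than `valid`):
```python
from datasets import load_dataset

# Hypothetical usage: repository id taken from this row's metadata.
ds = load_dataset("jamm55/freePidginDataset")
pair = ds["train"][0]
pair["source"]  # English side of the pair
pair["target"]  # Nigerian Pidgin side of the pair
```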
| jamm55/freePidginDataset | [
"task_categories:translation",
"region:us"
] | 2023-01-11T17:39:46+00:00 | {"task_categories": ["translation"]} | 2023-01-11T17:42:05+00:00 | [] | [] |
0d5cbee5220a13b3c0701d3a165d70d006a35339 | https://github.com/Alicia-Parrish/ling_in_loop/
```bib
@inproceedings{parrish-etal-2021-putting-linguist,
title = "Does Putting a Linguist in the Loop Improve {NLU} Data Collection?",
author = "Parrish, Alicia and
Huang, William and
Agha, Omar and
Lee, Soo-Hwan and
Nangia, Nikita and
Warstadt, Alexia and
Aggarwal, Karmanya and
Allaway, Emily and
Linzen, Tal and
Bowman, Samuel R.",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
month = nov,
year = "2021",
address = "Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-emnlp.421",
doi = "10.18653/v1/2021.findings-emnlp.421",
pages = "4886--4901",
}
``` | tasksource/lingnli | [
"task_categories:text-classification",
"language:en",
"license:unknown",
"region:us"
] | 2023-01-11T20:59:56+00:00 | {"language": ["en"], "license": "unknown", "task_categories": ["text-classification"]} | 2023-05-31T07:40:53+00:00 | [] | [
"en"
] |
cefdd2300894bf2329428d0262f60e2dd9e59a25 |
# Dataset Card for NeuCLIR1
## Dataset Description
- **Website:** https://neuclir.github.io/
- **Repository:** https://github.com/NeuCLIR/download-collection
### Dataset Summary
This is the dataset created for the TREC 2022 NeuCLIR Track. The collection is designed to be similar to HC4, and a large portion of the documents from HC4 are ported to this collection.
The documents are Web pages from Common Crawl in Chinese, Persian, and Russian.
### Languages
- Chinese
- Persian
- Russian
## Dataset Structure
### Data Instances
| Split | Documents |
|-----------------|----------:|
| `fas` (Persian) | 2.2M |
| `rus` (Russian) | 4.6M |
| `zho` (Chinese) | 3.2M |
### Data Fields
- `id`: unique identifier for this document
- `cc_file`: source file from Common Crawl
- `time`: extracted date/time from article
- `title`: title extracted from article
- `text`: extracted article body
- `url`: source URL
## Dataset Usage
Using 🤗 Datasets:
```python
from datasets import load_dataset
dataset = load_dataset('neuclir/neuclir1')
dataset['fas'] # Persian documents
dataset['rus'] # Russian documents
dataset['zho'] # Chinese documents
```
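Given the split sizes (2.2M–4.6M documents), streaming avoids downloading a whole split before iterating; a minimal sketch under that assumption:
```python
from datasets import load_dataset

# Stream the Persian split instead of materializing all ~2.2M documents.
fas = load_dataset('neuclir/neuclir1', split='fas', streaming=True)
for doc in fas:
    print(doc['id'], doc['title'], doc['url'])
    break
```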
| neuclir/neuclir1 | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"source_datasets:extended|c4",
"language:fa",
"language:ru",
"language:zh",
"license:odc-by",
"region:us"
] | 2023-01-11T21:08:24+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["fa", "ru", "zh"], "license": ["odc-by"], "multilinguality": ["multilingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["extended|c4"], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "pretty_name": "NeuCLIR1", "tags": []} | 2023-01-12T18:43:52+00:00 | [] | [
"fa",
"ru",
"zh"
] |
6e7c86a2ae58a6ee84be59c71ef6ddf30904d95b |
# Dataset Card for HC4
## Dataset Description
- **Repository:** https://github.com/hltcoe/HC4
- **Paper:** https://arxiv.org/abs/2201.09992
### Dataset Summary
HC4 is a suite of test collections for ad hoc Cross-Language Information Retrieval (CLIR). The documents are Common Crawl News web pages in Chinese, Persian, and Russian.
### Languages
- Chinese
- Persian
- Russian
## Dataset Structure
### Data Instances
| Split | Documents |
|-----------------|----------:|
| `fas` (Persian) | 486K |
| `rus` (Russian) | 4.7M |
| `zho` (Chinese) | 646K |
### Data Fields
- `id`: unique identifier for this document
- `cc_file`: source file from Common Crawl
- `time`: extracted date/time from article
- `title`: title extracted from article
- `text`: extracted article body
- `url`: source URL
## Dataset Usage
Using 🤗 Datasets:
```python
from datasets import load_dataset
dataset = load_dataset('neuclir/hc4')
dataset['fas'] # Persian documents
dataset['rus'] # Russian documents
dataset['zho'] # Chinese documents
```
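For retrieval experiments it can help to index a split by document id; a minimal sketch (the in-memory index below is an illustration, not part of the collection's tooling; the Persian split is the smallest at 486K documents):
```python
from datasets import load_dataset

# Build an in-memory id -> document index over the (smallest) Persian split.
fas = load_dataset('neuclir/hc4', split='fas')
doc_by_id = {doc['id']: doc for doc in fas}
```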
## Citation Information
```
@article{Lawrie2022HC4,
author = {Dawn Lawrie and James Mayfield and Douglas W. Oard and Eugene Yang},
title = {HC4: A New Suite of Test Collections for Ad Hoc CLIR},
booktitle = {Advances in Information Retrieval. 44th European Conference on IR Research (ECIR 2022)},
year = {2022},
month = apr,
publisher = {Springer},
series = {Lecture Notes in Computer Science},
site = {Stavanger, Norway},
url = {https://arxiv.org/abs/2201.09992}
}
```
| neuclir/hc4 | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:1M<n<10M",
"source_datasets:extended|c4",
"language:fa",
"language:ru",
"language:zh",
"license:odc-by",
"arxiv:2201.09992",
"region:us"
] | 2023-01-11T21:10:06+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["fa", "ru", "zh"], "license": ["odc-by"], "multilinguality": ["multilingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["extended|c4"], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "pretty_name": "HC4", "tags": []} | 2023-01-17T09:38:31+00:00 | [
"2201.09992"
] | [
"fa",
"ru",
"zh"
] |
1d97f2ef58fef5d412f78d6e43e303f1ab4da4f3 | # AutoTrain Dataset for project: yempp
## Dataset Description
This dataset has been automatically processed by AutoTrain for project yempp.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"context": "kaputt nord stream 2 spezialschiff schaden untersuch bnn badische neueste nachrichten laut kreml-herrscher wladimir putin r\u00f6hre pipeline nord stream 2 explosionan intakt geblieben bundesregierung sieht spezialschiff betreiberfirma sache grund gehen nutzung angebot gelten agb widerrufsbelehrung information ze verarbeitung personenbezogener daten find unserer datenschutzerkl\u00e4rung \u00e4hnliche artikelkurzfristige maqnahm ze losung europ\u00e4ischen energiekrise solarify energie zukunft solarify ver\u00f6ffentlicht am1 november 20221 november 2022autorgh gerard reidangst blackout st\u00e4dte kreis g\u00fctersloh bereiten neue westf\u00e4lische stadt verl investiert 500.000 euro 72-st\u00fcndigen stromausfall grundversorg aufrechthalten knnne schlo\u00df holte-stukenbrock schafft w\u00e4rmeinseln sowie zufluchtsorte kauft feldbetten weiterlesen jahr rund h\u00e4lfte sparensusi partner geht partnerschaft sastech energate messenger schweiz bereits eina zugang einloggengasspeicher fast voll woher importierte erdgas gerade kommt stern.de deutschland hochgradig abh\u00e4ngig rund 94 prozent de hierzulande ben\u00f6tigten gas stammt importen forschungszentrum j\u00fclich faktenblatt auff\u00fchrt no kleiner teil stammt inlandsf\u00f6rderung fast verschwindend kleiner wiederum bioga mehr 53 prozent stammte gro\u00dfteil erdgasimporte j\u00fcngeren vergangenheit russland nochstgr\u00f6ceren lieferanten waran norwegen rund 38 prozent niederlande knapp neun prozent gasspeicherf\u00fcllst\u00e4nde deutschland europa bestellt zeigen beiden unten stehenden infografiken laut gesetz sollen speicher hierzulande be 1 november 90 prozent gef\u00fcllt per verordnung wurde gert ende juli 95 prozent angehoben ziel wurde \u00fcbertroffen allerdings gibt immer gro\u00dfe unterschiede einzeln anlagen hauptgasverbraucher hierzulande vergangenan jahren industrie knapp h\u00e4lfte de gas ben\u00f6tigte gefolgt haushalten mehr 30 prozent rest entf\u00e4llt fernw\u00e4rme stromversorgung stetiger gasnachschub dringend gebraucht land be laufen warm halten gerade herkommt zeigen untenstehend grafikenausblick 2023 gaspreis b\u00f6rse gefallan wirkt ewe-kunden nwzonline r\u00fcdiger klampen gashandelspl\u00e4tzen waran preise vergangen woch teil stark r\u00fcckl\u00e4ufig bedeutet schon trendwende endverbraucher wirl fragten eweenergiekrise gaspreisbremse vorgezogen strompreis 40 cent gedeckelt handelsblatt bundesregierung strompreisbremse jahresanfang angek\u00fcndigt bisher offen gelassen preisen bremse greift berlin gaspreisbremse eina monat fr\u00fcher bisher geplant greif geht beschlussvorschlag de kanzleramt ministerpr\u00e4sidentenkonferenz be mittwoch hervorenergiekrise richtig sparen b\u00f6rse anlegen sharedeal deenergiekrise richtig sparen b\u00f6rse anlegen sharedeal deneuheit fuel rettung verbrenner motorzeit de fuel pkw immer diskussion verbrennungsmotor bewahren weiterhin unwahrscheinlich verbrennungsmotor steht europa",
"question": "Wird es einen Anstieg der Energiekosten in der n\u00e4heren Zukunft geben?",
"answers.text": [
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']"
],
"answers.answer_start": [
2718,
… (the same offset, 2718, is repeated verbatim for every remaining entry in this column — duplicates elided) …
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718
]
},
{
"context": "kaputt nord stream 2 spezialschiff schaden untersuch bnn badische neueste nachrichten laut kreml-herrscher wladimir putin r\u00f6hre pipeline nord stream 2 explosionan intakt geblieben bundesregierung sieht spezialschiff betreiberfirma sache grund gehen nutzung angebot gelten agb widerrufsbelehrung information ze verarbeitung personenbezogener daten find unserer datenschutzerkl\u00e4rung \u00e4hnliche artikelkurzfristige maqnahm ze losung europ\u00e4ischen energiekrise solarify energie zukunft solarify ver\u00f6ffentlicht am1 november 20221 november 2022autorgh gerard reidangst blackout st\u00e4dte kreis g\u00fctersloh bereiten neue westf\u00e4lische stadt verl investiert 500.000 euro 72-st\u00fcndigen stromausfall grundversorg aufrechthalten knnne schlo\u00df holte-stukenbrock schafft w\u00e4rmeinseln sowie zufluchtsorte kauft feldbetten weiterlesen jahr rund h\u00e4lfte sparensusi partner geht partnerschaft sastech energate messenger schweiz bereits eina zugang einloggengasspeicher fast voll woher importierte erdgas gerade kommt stern.de deutschland hochgradig abh\u00e4ngig rund 94 prozent de hierzulande ben\u00f6tigten gas stammt importen forschungszentrum j\u00fclich faktenblatt auff\u00fchrt no kleiner teil stammt inlandsf\u00f6rderung fast verschwindend kleiner wiederum bioga mehr 53 prozent stammte gro\u00dfteil erdgasimporte j\u00fcngeren vergangenheit russland nochstgr\u00f6ceren lieferanten waran norwegen rund 38 prozent niederlande knapp neun prozent gasspeicherf\u00fcllst\u00e4nde deutschland europa bestellt zeigen beiden unten stehenden infografiken laut gesetz sollen speicher hierzulande be 1 november 90 prozent gef\u00fcllt per verordnung wurde gert ende juli 95 prozent angehoben ziel wurde \u00fcbertroffen allerdings gibt immer gro\u00dfe unterschiede einzeln anlagen hauptgasverbraucher hierzulande vergangenan jahren industrie knapp h\u00e4lfte de gas ben\u00f6tigte gefolgt haushalten mehr 30 prozent rest entf\u00e4llt fernw\u00e4rme stromversorgung stetiger gasnachschub dringend gebraucht land be laufen warm halten gerade herkommt zeigen untenstehend grafikenausblick 2023 gaspreis b\u00f6rse gefallan wirkt ewe-kunden nwzonline r\u00fcdiger klampen gashandelspl\u00e4tzen waran preise vergangen woch teil stark r\u00fcckl\u00e4ufig bedeutet schon trendwende endverbraucher wirl fragten eweenergiekrise gaspreisbremse vorgezogen strompreis 40 cent gedeckelt handelsblatt bundesregierung strompreisbremse jahresanfang angek\u00fcndigt bisher offen gelassen preisen bremse greift berlin gaspreisbremse eina monat fr\u00fcher bisher geplant greif geht beschlussvorschlag de kanzleramt ministerpr\u00e4sidentenkonferenz be mittwoch hervorenergiekrise richtig sparen b\u00f6rse anlegen sharedeal deenergiekrise richtig sparen b\u00f6rse anlegen sharedeal deneuheit fuel rettung verbrenner motorzeit de fuel pkw immer diskussion verbrennungsmotor bewahren weiterhin unwahrscheinlich verbrennungsmotor steht europa",
"question": "Wird es einen Anstieg der Energiekosten in der n\u00e4heren Zukunft geben?",
"answers.text": [
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']",
"['unwahrscheinlich']"
],
"answers.answer_start": [
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718,
2718
]
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"context": "Value(dtype='string', id=None)",
"question": "Value(dtype='string', id=None)",
"answers.text": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"answers.answer_start": "Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 2 |
| valid | 1 |
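
For reference, a minimal loading sketch with the Hugging Face `datasets` library (the repository ID `Prajvi/autotrain-data-yempp` and the split names are taken from this card; treat the rest as an illustrative assumption, not part of the original card):

```python
from datasets import load_dataset

# Load the AutoTrain-processed QA dataset from the Hub.
# Repository ID and split names follow this card's metadata and table.
dataset = load_dataset("Prajvi/autotrain-data-yempp")

train = dataset["train"]
print(train.column_names)    # context, question, answers.text, answers.answer_start
print(train[0]["question"])  # inspect one sample
```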
| Prajvi/autotrain-data-yempp | [
"region:us"
] | 2023-01-11T22:00:50+00:00 | {} | 2023-01-11T22:05:44+00:00 | [] | [] | TAGS
#region-us
| AutoTrain Dataset for project: yempp
====================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project yempp.
### Languages
The BCP-47 code for the dataset's language is unk.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
665744d561f00355bd4b88b52bcce98deeba99ef | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: Aalaa/opt-125m-wikitext2
* Dataset: mathemakitten/winobias_antistereotype_dev
* Config: mathemakitten--winobias_antistereotype_dev
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@gmcather](https://huggingface.co/gmcather) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_dev-mathemakitte-c87316-2844283322 | [
"autotrain",
"evaluation",
"region:us"
] | 2023-01-12T01:07:38+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_dev"], "eval_info": {"task": "text_zero_shot_classification", "model": "Aalaa/opt-125m-wikitext2", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_dev", "dataset_config": "mathemakitten--winobias_antistereotype_dev", "dataset_split": "validation", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2023-01-12T01:08:25+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: Aalaa/opt-125m-wikitext2
* Dataset: mathemakitten/winobias_antistereotype_dev
* Config: mathemakitten--winobias_antistereotype_dev
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @gmcather for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: Aalaa/opt-125m-wikitext2\n* Dataset: mathemakitten/winobias_antistereotype_dev\n* Config: mathemakitten--winobias_antistereotype_dev\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @gmcather for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: Aalaa/opt-125m-wikitext2\n* Dataset: mathemakitten/winobias_antistereotype_dev\n* Config: mathemakitten--winobias_antistereotype_dev\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @gmcather for evaluating this model."
] |
0e78936d0863db202c267520f8ce9a535df59240 | # Dataset Card for "bookcorpus_compact_1024_shard0_meta"
132 hours to finish
num_examples: 61605
size: 1.5GB
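
A minimal loading sketch (an illustrative assumption, not part of the original card): streaming avoids pulling the full ~1.5 GB shard up front.

```python
from datasets import load_dataset

# Stream the shard rather than downloading it whole (~1.5 GB).
dataset = load_dataset(
    "saibo/bookcorpus_compact_1024_shard0_of_10_meta",
    split="train",
    streaming=True,
)

example = next(iter(dataset))
# Fields per the card metadata: text, concept_with_offset, cid_arrangement,
# schema_lengths, topic_entity_mask, text_lengths.
print(example["text"][:200])
```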
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | saibo/bookcorpus_compact_1024_shard0_of_10_meta | [
"region:us"
] | 2023-01-12T01:45:55+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "concept_with_offset", "dtype": "string"}, {"name": "cid_arrangement", "sequence": "int32"}, {"name": "schema_lengths", "sequence": "int64"}, {"name": "topic_entity_mask", "sequence": "int64"}, {"name": "text_lengths", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 7429184891, "num_examples": 61605}], "download_size": 1631318898, "dataset_size": 7429184891}} | 2023-01-12T20:59:01+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "bookcorpus_compact_1024_shard0_meta"
132 hours to finish
num_examples: 61605
size: 1.5GB
More Information needed | [
"# Dataset Card for \"bookcorpus_compact_1024_shard0_meta\"\n\n132 hours to finish\nnum_examples: 61605\nsize: 1.5GB\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"bookcorpus_compact_1024_shard0_meta\"\n\n132 hours to finish\nnum_examples: 61605\nsize: 1.5GB\n\nMore Information needed"
] |
3fb7212c389d7818b8e6179e2cdac762f2e081d9 |
# Dataset Card for WRIME
[](https://github.com/shunk031/huggingface-datasets_wrime/actions/workflows/ci.yaml)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- Homepage: https://github.com/ids-cv/wrime
- Repository: https://github.com/shunk031/huggingface-datasets_wrime
- Paper: https://aclanthology.org/2021.naacl-main.169/
### Dataset Summary
In this study, we introduce a new dataset, WRIME, for emotional intensity estimation. We collect both the subjective emotional intensity of the writers themselves and the objective one annotated by the readers, and explore the differences between them. In our data collection, we hired 50 participants via a crowdsourcing service. They annotated their own past posts on a social networking service (SNS) with the subjective emotional intensity. We also hired 3 annotators, who annotated all posts with the objective emotional intensity. Consequently, our Japanese emotion analysis dataset consists of 17,000 posts with both subjective and objective emotional intensities for Plutchik’s eight emotions ([Plutchik, 1980](https://www.sciencedirect.com/science/article/pii/B9780125587013500077)), which are given on a four-point scale (no, weak, medium, and strong).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
- Japanese
## Dataset Structure
### Data Instances
When loading a specific configuration, users have to pass a version-dependent `name`:
```python
from datasets import load_dataset
dataset = load_dataset("shunk031/wrime", name="ver1")
print(dataset)
# DatasetDict({
# train: Dataset({
# features: ['sentence', 'user_id', 'datetime', 'writer', 'reader1', 'reader2', 'reader3', 'avg_readers'],
# num_rows: 40000
# })
# validation: Dataset({
# features: ['sentence', 'user_id', 'datetime', 'writer', 'reader1', 'reader2', 'reader3', 'avg_readers'],
# num_rows: 1200
# })
# test: Dataset({
# features: ['sentence', 'user_id', 'datetime', 'writer', 'reader1', 'reader2', 'reader3', 'avg_readers'],
# num_rows: 2000
# })
# })
```
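
Each record nests one dict of Plutchik emotion intensities per annotator. Continuing from the `dataset` loaded above, a minimal sketch of reading them out:

```python
example = dataset["train"][0]

# Subjective (writer) vs. objective (readers) intensities,
# each on the four-point 0-3 scale described below.
print(example["sentence"])
print("writer joy:", example["writer"]["joy"])
print("avg readers sadness:", example["avg_readers"]["sadness"])
```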
#### Ver. 1
An example looks as follows:
```json
{
"sentence": "ぼけっとしてたらこんな時間。チャリあるから食べにでたいのに…",
"user_id": "1",
"datetime": "2012/07/31 23:48",
"writer": {
"joy": 0,
"sadness": 1,
"anticipation": 2,
"surprise": 1,
"anger": 1,
"fear": 0,
"disgust": 0,
"trust": 1
},
"reader1": {
"joy": 0,
"sadness": 2,
"anticipation": 0,
"surprise": 0,
"anger": 0,
"fear": 0,
"disgust": 0,
"trust": 0
},
"reader2": {
"joy": 0,
"sadness": 2,
"anticipation": 0,
"surprise": 1,
"anger": 0,
"fear": 0,
"disgust": 0,
"trust": 0
},
"reader3": {
"joy": 0,
"sadness": 2,
"anticipation": 0,
"surprise": 0,
"anger": 0,
"fear": 1,
"disgust": 1,
"trust": 0
},
"avg_readers": {
"joy": 0,
"sadness": 2,
"anticipation": 0,
"surprise": 0,
"anger": 0,
"fear": 0,
"disgust": 0,
"trust": 0
}
}
```
#### Ver. 2
An example looks as follows:
```json
{
"sentence": "ぼけっとしてたらこんな時間。チャリあるから食べにでたいのに…",
"user_id": "1",
"datetime": "2012/7/31 23:48",
"writer": {
"joy": 0,
"sadness": 1,
"anticipation": 2,
"surprise": 1,
"anger": 1,
"fear": 0,
"disgust": 0,
"trust": 1,
"sentiment": 0
},
"reader1": {
"joy": 0,
"sadness": 2,
"anticipation": 0,
"surprise": 0,
"anger": 0,
"fear": 0,
"disgust": 0,
"trust": 0,
"sentiment": -2
},
"reader2": {
"joy": 0,
"sadness": 2,
"anticipation": 0,
"surprise": 0,
"anger": 0,
"fear": 1,
"disgust": 1,
"trust": 0,
"sentiment": -1
},
"reader3": {
"joy": 0,
"sadness": 2,
"anticipation": 0,
"surprise": 1,
"anger": 0,
"fear": 0,
"disgust": 0,
"trust": 0,
"sentiment": -1
},
"avg_readers": {
"joy": 0,
"sadness": 2,
"anticipation": 0,
"surprise": 0,
"anger": 0,
"fear": 0,
"disgust": 0,
"trust": 0,
"sentiment": -1
}
}
```
### Data Fields
#### Ver. 1
- `sentence`: post text
- `user_id`: user ID
- `datetime`: posting date and time
- `writer`: subjective annotation (the writer)
  - `joy`: the writer's subjective joy intensity
  - `sadness`: the writer's subjective sadness intensity
  - `anticipation`: the writer's subjective anticipation intensity
  - `surprise`: the writer's subjective surprise intensity
  - `anger`: the writer's subjective anger intensity
  - `fear`: the writer's subjective fear intensity
  - `disgust`: the writer's subjective disgust intensity
  - `trust`: the writer's subjective trust intensity
- `reader1`: objective annotation A (reader A)
  - `joy`: reader A's objective joy intensity
  - `sadness`: reader A's objective sadness intensity
  - `anticipation`: reader A's objective anticipation intensity
  - `surprise`: reader A's objective surprise intensity
  - `anger`: reader A's objective anger intensity
  - `fear`: reader A's objective fear intensity
  - `disgust`: reader A's objective disgust intensity
  - `trust`: reader A's objective trust intensity
- `reader2`: objective annotation B (reader B)
  - `joy`: reader B's objective joy intensity
  - `sadness`: reader B's objective sadness intensity
  - `anticipation`: reader B's objective anticipation intensity
  - `surprise`: reader B's objective surprise intensity
  - `anger`: reader B's objective anger intensity
  - `fear`: reader B's objective fear intensity
  - `disgust`: reader B's objective disgust intensity
  - `trust`: reader B's objective trust intensity
- `reader3`: objective annotation C (reader C)
  - `joy`: reader C's objective joy intensity
  - `sadness`: reader C's objective sadness intensity
  - `anticipation`: reader C's objective anticipation intensity
  - `surprise`: reader C's objective surprise intensity
  - `anger`: reader C's objective anger intensity
  - `fear`: reader C's objective fear intensity
  - `disgust`: reader C's objective disgust intensity
  - `trust`: reader C's objective trust intensity
- `avg_readers`: average over readers A, B, and C
  - `joy`: average joy intensity over readers A, B, and C
  - `sadness`: average sadness intensity over readers A, B, and C
  - `anticipation`: average anticipation intensity over readers A, B, and C
  - `surprise`: average surprise intensity over readers A, B, and C
  - `anger`: average anger intensity over readers A, B, and C
  - `fear`: average fear intensity over readers A, B, and C
  - `disgust`: average disgust intensity over readers A, B, and C
  - `trust`: average trust intensity over readers A, B, and C
#### Ver. 2
- `sentence`: post text
- `user_id`: user ID
- `datetime`: posting date and time
- `writer`: subjective annotation (the writer)
  - `joy`: the writer's subjective joy intensity
  - `sadness`: the writer's subjective sadness intensity
  - `anticipation`: the writer's subjective anticipation intensity
  - `surprise`: the writer's subjective surprise intensity
  - `anger`: the writer's subjective anger intensity
  - `fear`: the writer's subjective fear intensity
  - `disgust`: the writer's subjective disgust intensity
  - `trust`: the writer's subjective trust intensity
  - `sentiment`: the writer's subjective sentiment polarity
- `reader1`: objective annotation A (reader A)
  - `joy`: reader A's objective joy intensity
  - `sadness`: reader A's objective sadness intensity
  - `anticipation`: reader A's objective anticipation intensity
  - `surprise`: reader A's objective surprise intensity
  - `anger`: reader A's objective anger intensity
  - `fear`: reader A's objective fear intensity
  - `disgust`: reader A's objective disgust intensity
  - `trust`: reader A's objective trust intensity
  - `sentiment`: reader A's objective sentiment polarity
- `reader2`: objective annotation B (reader B)
  - `joy`: reader B's objective joy intensity
  - `sadness`: reader B's objective sadness intensity
  - `anticipation`: reader B's objective anticipation intensity
  - `surprise`: reader B's objective surprise intensity
  - `anger`: reader B's objective anger intensity
  - `fear`: reader B's objective fear intensity
  - `disgust`: reader B's objective disgust intensity
  - `trust`: reader B's objective trust intensity
  - `sentiment`: reader B's objective sentiment polarity
- `reader3`: objective annotation C (reader C)
  - `joy`: reader C's objective joy intensity
  - `sadness`: reader C's objective sadness intensity
  - `anticipation`: reader C's objective anticipation intensity
  - `surprise`: reader C's objective surprise intensity
  - `anger`: reader C's objective anger intensity
  - `fear`: reader C's objective fear intensity
  - `disgust`: reader C's objective disgust intensity
  - `trust`: reader C's objective trust intensity
  - `sentiment`: reader C's objective sentiment polarity
- `avg_readers`: average over readers A, B, and C
  - `joy`: average joy intensity over readers A, B, and C
  - `sadness`: average sadness intensity over readers A, B, and C
  - `anticipation`: average anticipation intensity over readers A, B, and C
  - `surprise`: average surprise intensity over readers A, B, and C
  - `anger`: average anger intensity over readers A, B, and C
  - `fear`: average fear intensity over readers A, B, and C
  - `disgust`: average disgust intensity over readers A, B, and C
  - `trust`: average trust intensity over readers A, B, and C
  - `sentiment`: average sentiment polarity over readers A, B, and C
### Data Splits
| name | train | validation | test |
|------|-------:|-----------:|------:|
| ver1 | 40,000 | 1,200 | 2,000 |
| ver2 | 30,000 | 2,500 | 2,500 |
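These counts can be sanity-checked locally. A minimal sketch, assuming the `datasets` library and the configuration names `ver1`/`ver2` shown above:
```python
from datasets import load_dataset

# Compare split sizes against the table above (requires network access).
for name in ("ver1", "ver2"):
    ds = load_dataset("shunk031/wrime", name=name)
    print(name, {split: ds[split].num_rows for split in ds})

# Nested annotations are plain dicts, e.g. the averaged readers' sentiment in ver2:
ver2_train = load_dataset("shunk031/wrime", name="ver2", split="train")
print(ver2_train[0]["avg_readers"]["sentiment"])
```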
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
From [the README](https://github.com/ids-cv/wrime/blob/master/README.en.md#licence) of the GitHub repository:
- The dataset is available for research purposes only.
- Redistribution of the dataset is prohibited.
### Citation Information
```bibtex
@inproceedings{kajiwara-etal-2021-wrime,
title = "{WRIME}: A New Dataset for Emotional Intensity Estimation with Subjective and Objective Annotations",
author = "Kajiwara, Tomoyuki and
Chu, Chenhui and
Takemura, Noriko and
Nakashima, Yuta and
Nagahara, Hajime",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.169",
doi = "10.18653/v1/2021.naacl-main.169",
pages = "2095--2104",
abstract = "We annotate 17,000 SNS posts with both the writer{'}s subjective emotional intensity and the reader{'}s objective one to construct a Japanese emotion analysis dataset. In this study, we explore the difference between the emotional intensity of the writer and that of the readers with this dataset. We found that the reader cannot fully detect the emotions of the writer, especially anger and trust. In addition, experimental results in estimating the emotional intensity show that it is more difficult to estimate the writer{'}s subjective labels than the readers{'}. The large gap between the subjective and objective emotions imply the complexity of the mapping from a post to the subjective emotion intensities, which also leads to a lower performance with machine learning models.",
}
```
```bibtex
@inproceedings{suzuki-etal-2022-japanese,
title = "A {J}apanese Dataset for Subjective and Objective Sentiment Polarity Classification in Micro Blog Domain",
author = "Suzuki, Haruya and
Miyauchi, Yuto and
Akiyama, Kazuki and
Kajiwara, Tomoyuki and
Ninomiya, Takashi and
Takemura, Noriko and
Nakashima, Yuta and
Nagahara, Hajime",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.759",
pages = "7022--7028",
abstract = "We annotate 35,000 SNS posts with both the writer{'}s subjective sentiment polarity labels and the reader{'}s objective ones to construct a Japanese sentiment analysis dataset. Our dataset includes intensity labels (\textit{none}, \textit{weak}, \textit{medium}, and \textit{strong}) for each of the eight basic emotions by Plutchik (\textit{joy}, \textit{sadness}, \textit{anticipation}, \textit{surprise}, \textit{anger}, \textit{fear}, \textit{disgust}, and \textit{trust}) as well as sentiment polarity labels (\textit{strong positive}, \textit{positive}, \textit{neutral}, \textit{negative}, and \textit{strong negative}). Previous studies on emotion analysis have studied the analysis of basic emotions and sentiment polarity independently. In other words, there are few corpora that are annotated with both basic emotions and sentiment polarity. Our dataset is the first large-scale corpus to annotate both of these emotion labels, and from both the writer{'}s and reader{'}s perspectives. In this paper, we analyze the relationship between basic emotion intensity and sentiment polarity on our dataset and report the results of benchmarking sentiment polarity classification.",
}
```
### Contributions
Thanks to [@moguranosenshi](https://github.com/moguranosenshi) for creating this dataset.
| shunk031/wrime | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"language:ja",
"license:unknown",
"sentiment-analysis",
"wrime",
"region:us"
] | 2023-01-12T03:04:20+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["ja"], "license": ["unknown"], "multilinguality": ["monolingual"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "pretty_name": "wrime", "tags": ["sentiment-analysis", "wrime"], "datasets": ["ver1", "ver2"], "metrics": ["accuracy"]} | 2023-01-15T03:39:01+00:00 | [] | [
"ja"
] | TAGS
#task_categories-text-classification #task_ids-sentiment-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #language-Japanese #license-unknown #sentiment-analysis #wrime #region-us
| Dataset Card for WRIME
======================

### Dataset Summary

In this study, we introduce a new dataset, WRIME, for emotional intensity estimation. We collect both the subjective emotional intensity of the writers themselves and the objective one annotated by the readers, and explore the differences between them. In our data collection, we hired 50 participants via a crowdsourcing service. They annotated their own past posts on a social networking service (SNS) with the subjective emotional intensity. We also hired 3 annotators, who annotated all posts with the objective emotional intensity. Consequently, our Japanese emotion analysis dataset consists of 17,000 posts with both subjective and objective emotional intensities for Plutchik’s eight emotions (Plutchik, 1980), which are given on a four-point scale (no, weak, medium, and strong).
### Supported Tasks and Leaderboards
### Languages
* Japanese
Dataset Structure
-----------------
### Data Instances
When loading a specific configuration, users have to specify the version-dependent configuration name:
#### Ver. 1
An example looks as follows:
#### Ver. 2
An example looks as follows:
### Data Fields
#### Ver. 1
* 'sentence': post text
* 'user\_id': user ID
* 'datetime': posting date and time
* 'writer': subjective annotation (the writer)
+ 'joy': the writer's subjective joy intensity
+ 'sadness': the writer's subjective sadness intensity
+ 'anticipation': the writer's subjective anticipation intensity
+ 'surprise': the writer's subjective surprise intensity
+ 'anger': the writer's subjective anger intensity
+ 'fear': the writer's subjective fear intensity
+ 'disgust': the writer's subjective disgust intensity
+ 'trust': the writer's subjective trust intensity
* 'reader1': objective annotation A (reader A)
+ 'joy': reader A's objective joy intensity
+ 'sadness': reader A's objective sadness intensity
+ 'anticipation': reader A's objective anticipation intensity
+ 'surprise': reader A's objective surprise intensity
+ 'anger': reader A's objective anger intensity
+ 'fear': reader A's objective fear intensity
+ 'disgust': reader A's objective disgust intensity
+ 'trust': reader A's objective trust intensity
* 'reader2': objective annotation B (reader B)
+ 'joy': reader B's objective joy intensity
+ 'sadness': reader B's objective sadness intensity
+ 'anticipation': reader B's objective anticipation intensity
+ 'surprise': reader B's objective surprise intensity
+ 'anger': reader B's objective anger intensity
+ 'fear': reader B's objective fear intensity
+ 'disgust': reader B's objective disgust intensity
+ 'trust': reader B's objective trust intensity
* 'reader3': objective annotation C (reader C)
+ 'joy': reader C's objective joy intensity
+ 'sadness': reader C's objective sadness intensity
+ 'anticipation': reader C's objective anticipation intensity
+ 'surprise': reader C's objective surprise intensity
+ 'anger': reader C's objective anger intensity
+ 'fear': reader C's objective fear intensity
+ 'disgust': reader C's objective disgust intensity
+ 'trust': reader C's objective trust intensity
* 'avg\_readers': average over readers A, B, and C
+ 'joy': average joy intensity over readers A, B, and C
+ 'sadness': average sadness intensity over readers A, B, and C
+ 'anticipation': average anticipation intensity over readers A, B, and C
+ 'surprise': average surprise intensity over readers A, B, and C
+ 'anger': average anger intensity over readers A, B, and C
+ 'fear': average fear intensity over readers A, B, and C
+ 'disgust': average disgust intensity over readers A, B, and C
+ 'trust': average trust intensity over readers A, B, and C
#### Ver. 2
* 'sentence': post text
* 'user\_id': user ID
* 'datetime': posting date and time
* 'writer': subjective annotation (the writer)
+ 'joy': the writer's subjective joy intensity
+ 'sadness': the writer's subjective sadness intensity
+ 'anticipation': the writer's subjective anticipation intensity
+ 'surprise': the writer's subjective surprise intensity
+ 'anger': the writer's subjective anger intensity
+ 'fear': the writer's subjective fear intensity
+ 'disgust': the writer's subjective disgust intensity
+ 'trust': the writer's subjective trust intensity
+ 'sentiment': the writer's subjective sentiment polarity
* 'reader1': objective annotation A (reader A)
+ 'joy': reader A's objective joy intensity
+ 'sadness': reader A's objective sadness intensity
+ 'anticipation': reader A's objective anticipation intensity
+ 'surprise': reader A's objective surprise intensity
+ 'anger': reader A's objective anger intensity
+ 'fear': reader A's objective fear intensity
+ 'disgust': reader A's objective disgust intensity
+ 'trust': reader A's objective trust intensity
+ 'sentiment': reader A's objective sentiment polarity
* 'reader2': objective annotation B (reader B)
+ 'joy': reader B's objective joy intensity
+ 'sadness': reader B's objective sadness intensity
+ 'anticipation': reader B's objective anticipation intensity
+ 'surprise': reader B's objective surprise intensity
+ 'anger': reader B's objective anger intensity
+ 'fear': reader B's objective fear intensity
+ 'disgust': reader B's objective disgust intensity
+ 'trust': reader B's objective trust intensity
+ 'sentiment': reader B's objective sentiment polarity
* 'reader3': objective annotation C (reader C)
+ 'joy': reader C's objective joy intensity
+ 'sadness': reader C's objective sadness intensity
+ 'anticipation': reader C's objective anticipation intensity
+ 'surprise': reader C's objective surprise intensity
+ 'anger': reader C's objective anger intensity
+ 'fear': reader C's objective fear intensity
+ 'disgust': reader C's objective disgust intensity
+ 'trust': reader C's objective trust intensity
+ 'sentiment': reader C's objective sentiment polarity
* 'avg\_readers': average over readers A, B, and C
+ 'joy': average joy intensity over readers A, B, and C
+ 'sadness': average sadness intensity over readers A, B, and C
+ 'anticipation': average anticipation intensity over readers A, B, and C
+ 'surprise': average surprise intensity over readers A, B, and C
+ 'anger': average anger intensity over readers A, B, and C
+ 'fear': average fear intensity over readers A, B, and C
+ 'disgust': average disgust intensity over readers A, B, and C
+ 'trust': average trust intensity over readers A, B, and C
+ 'sentiment': average sentiment polarity over readers A, B, and C
### Data Splits
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
From the README of the GitHub repository:
* The dataset is available for research purposes only.
* Redistribution of the dataset is prohibited.
### Contributions
Thanks to @moguranosenshi for creating this dataset.
| [
"### Dataset Summary\n\n\nIn this study, we introduce a new dataset, WRIME, for emotional intensity estimation. We collect both the subjective emotional intensity ofthe writers themselves and the objective one annotated by the readers, and explore the differences between them. In our data collection, we hired 50 participants via crowdsourcing service. They annotated their own past posts on a social networking service (SNS) with the subjective emotional intensity. We also hired 3 annotators, who annotated allposts with the objective emotional intensity. Consequently, our Japanese emotion analysis datasetconsists of 17,000 posts with both subjective andobjective emotional intensities for Plutchik’s eightemotions (Plutchik, 1980), which are given in afour-point scale (no, weak, medium, and strong).",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\n* Japanese\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nWhen loading a specific configuration, users has to append a version dependent suffix:",
"#### Ver. 1\n\n\nAn example of looks as follows:",
"#### Ver. 1\n\n\nAn example of looks as follows:",
"### Data Fields",
"#### Ver. 1\n\n\n* 'sentence': 投稿テキスト\n* 'user\\_id': ユーザー ID\n* 'datetime': 投稿日時\n* 'writer': 主観 (書き手)\n\t+ 'joy': 主観の喜びの感情\n\t+ 'sadness': 主観の悲しみの感情\n\t+ 'anticipation': 主観の期待の感情\n\t+ 'surprise': 主観の驚きの感情\n\t+ 'anger': 主観の怒りの感情\n\t+ 'fear': 主観の恐れの感情\n\t+ 'disgust': 主観の嫌悪の感情\n\t+ 'trust': 主観の信頼の感情\n* 'reader1': 客観 A (読み手 A)\n\t+ 'joy': 客観 A の喜びの感情\n\t+ 'sadness': 客観 A の悲しみの感情\n\t+ 'anticipation': 客観 A の期待の感情\n\t+ 'surprise': 客観 A の驚きの感情\n\t+ 'anger': 客観 A の怒りの感情\n\t+ 'fear': 客観 A の恐れの感情\n\t+ 'disgust': 客観 A の嫌悪の感情\n\t+ 'trust': 客観 A の信頼の感情\n* 'reader2': 客観 B (読み手 B)\n\t+ 'joy': 客観 B の喜びの感情\n\t+ 'sadness': 客観 B の悲しみの感情\n\t+ 'anticipation': 客観 B の期待の感情\n\t+ 'surprise': 客観 B の驚きの感情\n\t+ 'anger': 客観 B の怒りの感情\n\t+ 'fear': 客観 B の恐れの感情\n\t+ 'disgust': 客観 B の嫌悪の感情\n\t+ 'trust': 客観 B の信頼の感情\n* 'reader3': 客観 C (読み手 C)\n\t+ 'joy': 客観 C の喜びの感情\n\t+ 'sadness': 客観 C の悲しみの感情\n\t+ 'anticipation': 客観 C の期待の感情\n\t+ 'surprise': 客観 C の驚きの感情\n\t+ 'anger': 客観 C の怒りの感情\n\t+ 'fear': 客観 C の恐れの感情\n\t+ 'disgust': 客観 C の嫌悪の感情\n\t+ 'trust': 客観 C の信頼の感情\n* 'avg\\_readers'\n\t+ 'joy': 客観 A, B, C 平均の喜びの感情\n\t+ 'sadness': 客観 A, B, C 平均の悲しみの感情\n\t+ 'anticipation': 客観 A, B, C 平均の期待の感情\n\t+ 'surprise': 客観 A, B, C 平均の驚きの感情\n\t+ 'anger': 客観 A, B, C 平均の怒りの感情\n\t+ 'fear': 客観 A, B, C 平均の恐れの感情\n\t+ 'disgust': 客観 A, B, C 平均の嫌悪の感情\n\t+ 'trust': 客観 A, B, C 平均の信頼の感情",
"#### Ver. 2\n\n\n* 'sentence': 投稿テキスト\n* 'user\\_id': ユーザー ID\n* 'datetime': 投稿日時\n* 'writer': 主観 (書き手)\n\t+ 'joy': 主観の喜びの感情\n\t+ 'sadness': 主観の悲しみの感情\n\t+ 'anticipation': 主観の期待の感情\n\t+ 'surprise': 主観の驚きの感情\n\t+ 'anger': 主観の怒りの感情\n\t+ 'fear': 主観の恐れの感情\n\t+ 'disgust': 主観の嫌悪の感情\n\t+ 'trust': 主観の信頼の感情\n\t+ 'sentiment': 主観の感情極性\n* 'reader1': 客観 A (読み手 A)\n\t+ 'joy': 客観 A の喜びの感情\n\t+ 'sadness': 客観 A の悲しみの感情\n\t+ 'anticipation': 客観 A の期待の感情\n\t+ 'surprise': 客観 A の驚きの感情\n\t+ 'anger': 客観 A の怒りの感情\n\t+ 'fear': 客観 A の恐れの感情\n\t+ 'disgust': 客観 A の嫌悪の感情\n\t+ 'trust': 客観 A の信頼の感情\n\t+ 'sentiment': 客観 A の感情極性\n* 'reader2': 客観 B (読み手 B)\n\t+ 'joy': 客観 B の喜びの感情\n\t+ 'sadness': 客観 B の悲しみの感情\n\t+ 'anticipation': 客観 B の期待の感情\n\t+ 'surprise': 客観 B の驚きの感情\n\t+ 'anger': 客観 B の怒りの感情\n\t+ 'fear': 客観 B の恐れの感情\n\t+ 'disgust': 客観 B の嫌悪の感情\n\t+ 'trust': 客観 B の信頼の感情\n\t+ 'sentiment': 客観 B の感情極性\n* 'reader3': 客観 C (読み手 C)\n\t+ 'joy': 客観 C の喜びの感情\n\t+ 'sadness': 客観 C の悲しみの感情\n\t+ 'anticipation': 客観 C の期待の感情\n\t+ 'surprise': 客観 C の驚きの感情\n\t+ 'anger': 客観 C の怒りの感情\n\t+ 'fear': 客観 C の恐れの感情\n\t+ 'disgust': 客観 C の嫌悪の感情\n\t+ 'trust': 客観 C の信頼の感情\n\t+ 'sentiment': 客観 C の感情極性\n* 'avg\\_readers'\n\t+ 'joy': 客観 A, B, C 平均の喜びの感情\n\t+ 'sadness': 客観 A, B, C 平均の悲しみの感情\n\t+ 'anticipation': 客観 A, B, C 平均の期待の感情\n\t+ 'surprise': 客観 A, B, C 平均の驚きの感情\n\t+ 'anger': 客観 A, B, C 平均の怒りの感情\n\t+ 'fear': 客観 A, B, C 平均の恐れの感情\n\t+ 'disgust': 客観 A, B, C 平均の嫌悪の感情\n\t+ 'trust': 客観 A, B, C 平均の信頼の感情\n\t+ 'sentiment': 客観 A, B, C 平均の感情極性",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information\n\n\nFrom the README of the GitHub:\n\n\n* The dataset is available for research purposes only.\n* Redistribution of the dataset is prohibited.",
"### Contributions\n\n\nThanks to @moguranosenshi for creating this dataset."
] | [
"TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #language-Japanese #license-unknown #sentiment-analysis #wrime #region-us \n",
"### Dataset Summary\n\n\nIn this study, we introduce a new dataset, WRIME, for emotional intensity estimation. We collect both the subjective emotional intensity ofthe writers themselves and the objective one annotated by the readers, and explore the differences between them. In our data collection, we hired 50 participants via crowdsourcing service. They annotated their own past posts on a social networking service (SNS) with the subjective emotional intensity. We also hired 3 annotators, who annotated allposts with the objective emotional intensity. Consequently, our Japanese emotion analysis datasetconsists of 17,000 posts with both subjective andobjective emotional intensities for Plutchik’s eightemotions (Plutchik, 1980), which are given in afour-point scale (no, weak, medium, and strong).",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\n* Japanese\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nWhen loading a specific configuration, users has to append a version dependent suffix:",
"#### Ver. 1\n\n\nAn example of looks as follows:",
"#### Ver. 1\n\n\nAn example of looks as follows:",
"### Data Fields",
"#### Ver. 1\n\n\n* 'sentence': 投稿テキスト\n* 'user\\_id': ユーザー ID\n* 'datetime': 投稿日時\n* 'writer': 主観 (書き手)\n\t+ 'joy': 主観の喜びの感情\n\t+ 'sadness': 主観の悲しみの感情\n\t+ 'anticipation': 主観の期待の感情\n\t+ 'surprise': 主観の驚きの感情\n\t+ 'anger': 主観の怒りの感情\n\t+ 'fear': 主観の恐れの感情\n\t+ 'disgust': 主観の嫌悪の感情\n\t+ 'trust': 主観の信頼の感情\n* 'reader1': 客観 A (読み手 A)\n\t+ 'joy': 客観 A の喜びの感情\n\t+ 'sadness': 客観 A の悲しみの感情\n\t+ 'anticipation': 客観 A の期待の感情\n\t+ 'surprise': 客観 A の驚きの感情\n\t+ 'anger': 客観 A の怒りの感情\n\t+ 'fear': 客観 A の恐れの感情\n\t+ 'disgust': 客観 A の嫌悪の感情\n\t+ 'trust': 客観 A の信頼の感情\n* 'reader2': 客観 B (読み手 B)\n\t+ 'joy': 客観 B の喜びの感情\n\t+ 'sadness': 客観 B の悲しみの感情\n\t+ 'anticipation': 客観 B の期待の感情\n\t+ 'surprise': 客観 B の驚きの感情\n\t+ 'anger': 客観 B の怒りの感情\n\t+ 'fear': 客観 B の恐れの感情\n\t+ 'disgust': 客観 B の嫌悪の感情\n\t+ 'trust': 客観 B の信頼の感情\n* 'reader3': 客観 C (読み手 C)\n\t+ 'joy': 客観 C の喜びの感情\n\t+ 'sadness': 客観 C の悲しみの感情\n\t+ 'anticipation': 客観 C の期待の感情\n\t+ 'surprise': 客観 C の驚きの感情\n\t+ 'anger': 客観 C の怒りの感情\n\t+ 'fear': 客観 C の恐れの感情\n\t+ 'disgust': 客観 C の嫌悪の感情\n\t+ 'trust': 客観 C の信頼の感情\n* 'avg\\_readers'\n\t+ 'joy': 客観 A, B, C 平均の喜びの感情\n\t+ 'sadness': 客観 A, B, C 平均の悲しみの感情\n\t+ 'anticipation': 客観 A, B, C 平均の期待の感情\n\t+ 'surprise': 客観 A, B, C 平均の驚きの感情\n\t+ 'anger': 客観 A, B, C 平均の怒りの感情\n\t+ 'fear': 客観 A, B, C 平均の恐れの感情\n\t+ 'disgust': 客観 A, B, C 平均の嫌悪の感情\n\t+ 'trust': 客観 A, B, C 平均の信頼の感情",
"#### Ver. 2\n\n\n* 'sentence': 投稿テキスト\n* 'user\\_id': ユーザー ID\n* 'datetime': 投稿日時\n* 'writer': 主観 (書き手)\n\t+ 'joy': 主観の喜びの感情\n\t+ 'sadness': 主観の悲しみの感情\n\t+ 'anticipation': 主観の期待の感情\n\t+ 'surprise': 主観の驚きの感情\n\t+ 'anger': 主観の怒りの感情\n\t+ 'fear': 主観の恐れの感情\n\t+ 'disgust': 主観の嫌悪の感情\n\t+ 'trust': 主観の信頼の感情\n\t+ 'sentiment': 主観の感情極性\n* 'reader1': 客観 A (読み手 A)\n\t+ 'joy': 客観 A の喜びの感情\n\t+ 'sadness': 客観 A の悲しみの感情\n\t+ 'anticipation': 客観 A の期待の感情\n\t+ 'surprise': 客観 A の驚きの感情\n\t+ 'anger': 客観 A の怒りの感情\n\t+ 'fear': 客観 A の恐れの感情\n\t+ 'disgust': 客観 A の嫌悪の感情\n\t+ 'trust': 客観 A の信頼の感情\n\t+ 'sentiment': 客観 A の感情極性\n* 'reader2': 客観 B (読み手 B)\n\t+ 'joy': 客観 B の喜びの感情\n\t+ 'sadness': 客観 B の悲しみの感情\n\t+ 'anticipation': 客観 B の期待の感情\n\t+ 'surprise': 客観 B の驚きの感情\n\t+ 'anger': 客観 B の怒りの感情\n\t+ 'fear': 客観 B の恐れの感情\n\t+ 'disgust': 客観 B の嫌悪の感情\n\t+ 'trust': 客観 B の信頼の感情\n\t+ 'sentiment': 客観 B の感情極性\n* 'reader3': 客観 C (読み手 C)\n\t+ 'joy': 客観 C の喜びの感情\n\t+ 'sadness': 客観 C の悲しみの感情\n\t+ 'anticipation': 客観 C の期待の感情\n\t+ 'surprise': 客観 C の驚きの感情\n\t+ 'anger': 客観 C の怒りの感情\n\t+ 'fear': 客観 C の恐れの感情\n\t+ 'disgust': 客観 C の嫌悪の感情\n\t+ 'trust': 客観 C の信頼の感情\n\t+ 'sentiment': 客観 C の感情極性\n* 'avg\\_readers'\n\t+ 'joy': 客観 A, B, C 平均の喜びの感情\n\t+ 'sadness': 客観 A, B, C 平均の悲しみの感情\n\t+ 'anticipation': 客観 A, B, C 平均の期待の感情\n\t+ 'surprise': 客観 A, B, C 平均の驚きの感情\n\t+ 'anger': 客観 A, B, C 平均の怒りの感情\n\t+ 'fear': 客観 A, B, C 平均の恐れの感情\n\t+ 'disgust': 客観 A, B, C 平均の嫌悪の感情\n\t+ 'trust': 客観 A, B, C 平均の信頼の感情\n\t+ 'sentiment': 客観 A, B, C 平均の感情極性",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information\n\n\nFrom the README of the GitHub:\n\n\n* The dataset is available for research purposes only.\n* Redistribution of the dataset is prohibited.",
"### Contributions\n\n\nThanks to @moguranosenshi for creating this dataset."
] |
afbd39edb1a160731fb6449bf3e2fb27b26f537b | # Dataset Card for "bookcorpus_compact_1024_shard8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | saibo/bookcorpus_compact_1024_shard8_of_10 | [
"region:us"
] | 2023-01-12T03:24:19+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "concept_with_offset", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 755424246, "num_examples": 61605}], "download_size": 380882733, "dataset_size": 755424246}} | 2023-01-12T03:27:31+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "bookcorpus_compact_1024_shard8"
More Information needed | [
"# Dataset Card for \"bookcorpus_compact_1024_shard8\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"bookcorpus_compact_1024_shard8\"\n\nMore Information needed"
] |
3c19b0488d794d30c36f73d132d8a22e64f42f2e | # Dataset Card for LogiQA
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
LogiQA is constructed from the logical comprehension problems from publicly available questions of the National Civil Servants Examination of China, which are designed to test the civil servant candidates’ critical thinking and problem solving. This dataset includes the English versions only; the Chinese versions are available via the homepage/original source.
## Dataset Structure
### Data Instances
An example from `train` looks as follows:
```
{'context': 'Continuous exposure to indoor fluorescent lights is beneficial to the health of hamsters with heart disease. One group of hamsters exposed to continuous exposure to fluorescent lights has an average lifespan that is 2.5% longer than another one of the same species but living in a black wall.',
'query': 'Which of the following questions was the initial motivation for conducting the above experiment?',
'options': ['Can hospital light therapy be proved to promote patient recovery?',
'Which one lives longer, the hamster living under the light or the hamster living in the dark?',
'What kind of illness does the hamster have?',
'Do some hamsters need a period of darkness?'],
'correct_option': 0}
```
### Data Fields
- `context`: a `string` feature.
- `query`: a `string` feature.
- `options`: a `list` feature containing `string` features.
- `correct_option`: a `string` feature.
### Data Splits
|train|validation|test|
|----:|---------:|---:|
| 7376| 651| 651|
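As a quick usage sketch (assuming the `datasets` library; field names as in the example above), the correct answer text can be recovered by indexing `options` with `correct_option`:
```
from datasets import load_dataset

logiqa = load_dataset("lucasmccabe/logiqa", split="train")
example = logiqa[0]

# `correct_option` indexes into the `options` list; int() also covers the
# case where it is stored as a string rather than an integer.
answer = example["options"][int(example["correct_option"])]
print(example["query"])
print("Correct answer:", answer)
```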
## Additional Information
### Dataset Curators
The original LogiQA was produced by Jian Liu, Leyang Cui, Hanmeng Liu, Dandan Huang, Yile Wang, and Yue Zhang.
### Licensing Information
[More Information Needed]
### Citation Information
```
@article{liu2020logiqa,
title={Logiqa: A challenge dataset for machine reading comprehension with logical reasoning},
author={Liu, Jian and Cui, Leyang and Liu, Hanmeng and Huang, Dandan and Wang, Yile and Zhang, Yue},
journal={arXiv preprint arXiv:2007.08124},
year={2020}
}
```
### Contributions
[@lucasmccabe](https://github.com/lucasmccabe) added this dataset. | lucasmccabe/logiqa | [
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | 2023-01-12T04:14:53+00:00 | {"language": ["en"], "size_categories": ["1K<n<10K"], "task_categories": ["question-answering"], "paperswithcode_id": "logiqa", "pretty_name": "LogiQA", "dataset_info": {"features": [{"name": "context", "dtype": "string"}, {"name": "query", "dtype": "string"}, {"name": "options", "sequence": {"dtype": "string"}}, {"name": "correct_option", "dtype": "string"}], "splits": [{"name": "train", "num_examples": 7376}, {"name": "validation", "num_examples": 651}, {"name": "test", "num_examples": 651}]}} | 2023-02-08T01:51:31+00:00 | [] | [
"en"
] | TAGS
#task_categories-question-answering #size_categories-1K<n<10K #language-English #region-us
| Dataset Card for LogiQA
=======================
Dataset Description
-------------------
* Homepage:
* Repository:
* Paper:
* Leaderboard:
* Point of Contact:
### Dataset Summary
LogiQA is constructed from the logical comprehension problems from publicly available questions of the National Civil Servants Examination of China, which are designed to test the civil servant candidates’ critical thinking and problem solving. This dataset includes the English versions only; the Chinese versions are available via the homepage/original source.
Dataset Structure
-----------------
### Data Instances
An example from 'train' looks as follows:
### Data Fields
* 'context': a 'string' feature.
* 'query': a 'string' feature.
* 'options': a 'list' feature containing 'string' features.
* 'correct\_option': a 'string' feature.
### Data Splits
Additional Information
----------------------
### Dataset Curators
The original LogiQA was produced by Jian Liu, Leyang Cui, Hanmeng Liu, Dandan Huang, Yile Wang, and Yue Zhang.
### Licensing Information
### Contributions
@lucasmccabe added this dataset.
| [
"### Dataset Summary\n\n\nLogiQA is constructed from the logical comprehension problems from publically available questions of the National Civil Servants Examination of China, which are designed to test the civil servant candidates’ critical thinking and problem solving. This dataset includes the English versions only; the Chinese versions are available via the homepage/original source.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example from 'train' looks as follows:",
"### Data Fields\n\n\n* 'context': a 'string' feature.\n* 'query': a 'string' feature.\n* 'answers': a 'list' feature containing 'string' features.\n* 'correct\\_option': a 'string' feature.",
"### Data Splits\n\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nThe original LogiQA was produced by Jian Liu, Leyang Cui , Hanmeng Liu, Dandan Huang, Yile Wang, and Yue Zhang.",
"### Licensing Information",
"### Contributions\n\n\n@lucasmccabe added this dataset."
] | [
"TAGS\n#task_categories-question-answering #size_categories-1K<n<10K #language-English #region-us \n",
"### Dataset Summary\n\n\nLogiQA is constructed from the logical comprehension problems from publically available questions of the National Civil Servants Examination of China, which are designed to test the civil servant candidates’ critical thinking and problem solving. This dataset includes the English versions only; the Chinese versions are available via the homepage/original source.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example from 'train' looks as follows:",
"### Data Fields\n\n\n* 'context': a 'string' feature.\n* 'query': a 'string' feature.\n* 'answers': a 'list' feature containing 'string' features.\n* 'correct\\_option': a 'string' feature.",
"### Data Splits\n\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nThe original LogiQA was produced by Jian Liu, Leyang Cui , Hanmeng Liu, Dandan Huang, Yile Wang, and Yue Zhang.",
"### Licensing Information",
"### Contributions\n\n\n@lucasmccabe added this dataset."
] |
e7bab7c323d5794913b6a4ec1f75ebac70a4d3c6 | # Dataset Card for "bookcorpus_compact_1024_shard2_meta"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | saibo/bookcorpus_compact_1024_shard2_of_10_meta | [
"region:us"
] | 2023-01-12T04:44:35+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "concept_with_offset", "dtype": "string"}, {"name": "cid_arrangement", "sequence": "int32"}, {"name": "schema_lengths", "sequence": "int64"}, {"name": "topic_entity_mask", "sequence": "int64"}, {"name": "text_lengths", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 7742678868, "num_examples": 61605}], "download_size": 1715122126, "dataset_size": 7742678868}} | 2023-01-12T04:49:01+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "bookcorpus_compact_1024_shard2_meta"
More Information needed | [
"# Dataset Card for \"bookcorpus_compact_1024_shard2_meta\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"bookcorpus_compact_1024_shard2_meta\"\n\nMore Information needed"
] |
c2bf86c6d0c1331a6aa950b61b2520dcface8532 |
# Dataset Card for aeroBERT-classification
## Dataset Description
- **Paper:** aeroBERT-Classifier: Classification of Aerospace Requirements using BERT
- **Point of Contact:** [email protected]
### Dataset Summary
This dataset contains requirements from the aerospace domain. The requirements are tagged based on the "type"/category of requirement they belong to.
The creation of this dataset is aimed at - <br>
(1) Making available an **open-source** dataset for aerospace requirements which are often proprietary <br>
(2) Fine-tuning language models for **requirements classification** specific to the aerospace domain <br>
This dataset can be used for training or fine-tuning language models for the identification of the following types of requirements - <br>
<br>
**Design Requirement** - Dictates "how" a system should be designed given certain technical standards and specifications;
**Example:** Trim control systems must be designed to prevent creeping in flight.<br>
<br>
**Functional Requirement** - Defines the functions that need to be performed by a system in order to accomplish the desired system functionality;
**Example:** Each cockpit voice recorder shall record the voice communications of flight crew members on the flight deck.<br>
<br>
**Performance Requirement** - Defines "how well" a system needs to perform a certain function;
**Example:** The airplane must be free from flutter, control reversal, and divergence for any configuration and condition of operation.<br>
## Dataset Structure
The tagging scheme followed: <br>
(1) Design requirements: 0 (Count = 149) <br>
(2) Functional requirements: 1 (Count = 99) <br>
(3) Performance requirements: 2 (Count = 62) <br>
<br>
The dataset is of the format: ``requirements | label`` <br>
| requirements | label |
| :----: | :----: |
| Each cockpit voice recorder shall record voice communications transmitted from or received in the airplane by radio.| 1 |
| Each recorder container must be either bright orange or bright yellow.| 0 |
| Single-engine airplanes, not certified for aerobatics, must not have a tendency to inadvertently depart controlled flight. | 2|
| Each part of the airplane must have adequate provisions for ventilation and drainage. | 0 |
| Each baggage and cargo compartment must have a means to prevent the contents of the compartment from becoming a hazard by impacting occupants or shifting. | 1 |
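When inspecting examples or model predictions, it helps to map the numeric labels back to their category names. A small sketch following the tagging scheme above (the dictionary names are our own, not part of the dataset):
```
# Label mapping per the tagging scheme in this card.
ID2LABEL = {0: "Design", 1: "Functional", 2: "Performance"}
LABEL2ID = {label: idx for idx, label in ID2LABEL.items()}

print(ID2LABEL[1])         # Functional
print(LABEL2ID["Design"])  # 0
```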
## Dataset Creation
### Source Data
A total of 325 aerospace requirements were collected from Parts 23 and 25 of Title 14 of the Code of Federal Regulations (CFRs) and annotated (refer to the paper for more details). <br>
### Importing dataset into Python environment
Use the following code chunk to import the dataset into a Python environment as a DataFrame.
```
from datasets import load_dataset
import pandas as pd
dataset = load_dataset("archanatikayatray/aeroBERT-classification")
#Converting the dataset into a pandas DataFrame
dataset = pd.DataFrame(dataset["train"]["text"])
dataset = dataset[0].str.split('*', expand = True)
#Getting the headers from the first row
header = dataset.iloc[0]
#Excluding the first row since it contains the headers
dataset = dataset[1:]
#Assigning the header to the DataFrame
dataset.columns = header
#Viewing the last 10 rows of the annotated dataset
dataset.tail(10)
```
### Annotations
#### Annotation process
A Subject Matter Expert (SME) was consulted for deciding on the annotation categories for the requirements.
The final classification dataset had 149 Design requirements, 99 Functional requirements, and 62 Performance requirements.
Lastly, the 'labels' attached to the requirements (design requirement, functional requirement, and performance requirement) were converted into numeric values: 0, 1, and 2 respectively.
### Limitations
(1) The dataset is an imbalanced dataset (more Design requirements as compared to the other types). Hence, using ``Accuracy`` as a metric for the model performance is
NOT a good idea. The use of Precision, Recall, and F1 scores is suggested for model performance evaluation.
(2) This dataset does not contain a test set. Hence, it is suggested that the user split the dataset into training/validation/testing after importing the data into a Python environment.
Please refer to the Appendix of the paper for information on the test set.
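Following limitation (2), one reasonable approach is a stratified split, which preserves the 149/99/62 class ratio in every subset. A minimal sketch using scikit-learn, assuming the `dataset` DataFrame produced by the import snippet above (with a `label` column per the ``requirements | label`` format):
```
from sklearn.model_selection import train_test_split

# 70/15/15 stratified split; stratifying on `label` keeps the class
# ratios of this imbalanced dataset the same in every subset.
train_df, temp_df = train_test_split(
    dataset, test_size=0.30, stratify=dataset["label"], random_state=42
)
val_df, test_df = train_test_split(
    temp_df, test_size=0.50, stratify=temp_df["label"], random_state=42
)
print(len(train_df), len(val_df), len(test_df))

# Per limitation (1), report precision/recall/F1 rather than accuracy, e.g.:
# from sklearn.metrics import classification_report
# print(classification_report(y_true, y_pred))
```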
### Citation Information
```
@Article{aeroBERT-Classifier,
AUTHOR = {Tikayat Ray, Archana and Cole, Bjorn F. and Pinon Fischer, Olivia J. and White, Ryan T. and Mavris, Dimitri N.},
TITLE = {aeroBERT-Classifier: Classification of Aerospace Requirements Using BERT},
JOURNAL = {Aerospace},
VOLUME = {10},
YEAR = {2023},
NUMBER = {3},
ARTICLE-NUMBER = {279},
URL = {https://www.mdpi.com/2226-4310/10/3/279},
ISSN = {2226-4310},
DOI = {10.3390/aerospace10030279}
}
@phdthesis{tikayatray_thesis,
author = {Tikayat Ray, Archana},
title = {Standardization of Engineering Requirements Using Large Language Models},
school = {Georgia Institute of Technology},
year = {2023},
doi = {10.13140/RG.2.2.17792.40961},
URL = {https://repository.gatech.edu/items/964c73e3-f0a8-487d-a3fa-a0988c840d04}
}
``` | archanatikayatray/aeroBERT-classification | [
"task_categories:text-classification",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"sentence classification",
"aerospace requirements",
"design",
"functional",
"performance",
"requirements",
"NLP4RE",
"doi:10.57967/hf/0433",
"region:us"
] | 2023-01-12T05:00:31+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["text-classification"], "pretty_name": "requirements_classification_dataset.txt", "tags": ["sentence classification", "aerospace requirements", "design", "functional", "performance", "requirements", "NLP4RE"]} | 2023-05-20T21:40:37+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #size_categories-n<1K #language-English #license-apache-2.0 #sentence classification #aerospace requirements #design #functional #performance #requirements #NLP4RE #doi-10.57967/hf/0433 #region-us
| Dataset Card for aeroBERT-classification
========================================
Dataset Description
-------------------
* Paper: aeroBERT-Classifier: Classification of Aerospace Requirements using BERT
* Point of Contact: archanatikayatray@URL
### Dataset Summary
This dataset contains requirements from the aerospace domain. The requirements are tagged based on the "type"/category of requirement they belong to.
The creation of this dataset is aimed at -
(1) Making available an open-source dataset for aerospace requirements which are often proprietary
(2) Fine-tuning language models for requirements classification specific to the aerospace domain
This dataset can be used for training or fine-tuning language models for the identification of the following types of requirements -
Design Requirement - Dictates "how" a system should be designed given certain technical standards and specifications;
Example: Trim control systems must be designed to prevent creeping in flight.
Functional Requirement - Defines the functions that need to be performed by a system in order to accomplish the desired system functionality;
Example: Each cockpit voice recorder shall record the voice communications of flight crew members on the flight deck.
Performance Requirement - Defines "how well" a system needs to perform a certain function;
Example: The airplane must be free from flutter, control reversal, and divergence for any configuration and condition of operation.
Dataset Structure
-----------------
The tagging scheme followed:
(1) Design requirements: 0 (Count = 149)
(2) Functional requirements: 1 (Count = 99)
(3) Performance requirements: 2 (Count = 62)
The dataset is of the format: ''requirements | label''
Dataset Creation
----------------
### Source Data
A total of 325 aerospace requirements were collected from Parts 23 and 25 of Title 14 of the Code of Federal Regulations (CFRs) and annotated (refer to the paper for more details).
### Importing dataset into Python environment
Use the following code chunk to import the dataset into a Python environment as a DataFrame.
### Annotations
#### Annotation process
A Subject Matter Expert (SME) was consulted for deciding on the annotation categories for the requirements.
The final classification dataset had 149 Design requirements, 99 Functional requirements, and 62 Performance requirements.
Lastly, the 'labels' attached to the requirements (design requirement, functional requirement, and performance requirement) were converted into numeric values: 0, 1, and 2 respectively.
### Limitations
(1) The dataset is an imbalanced dataset (more Design requirements as compared to the other types). Hence, using ''Accuracy'' as a metric for the model performance is
NOT a good idea. The use of Precision, Recall, and F1 scores is suggested for model performance evaluation.
(2) This dataset does not contain a test set. Hence, it is suggested that the user split the dataset into training/validation/testing after importing the data into a Python environment.
Please refer to the Appendix of the paper for information on the test set.
| [
"### Dataset Summary\n\n\nThis dataset contains requirements from the aerospace domain. The requirements are tagged based on the \"type\"/category of requirement they belong to.\nThe creation of this dataset is aimed at - \n\n(1) Making available an open-source dataset for aerospace requirements which are often proprietary \n\n(2) Fine-tuning language models for requirements classification specific to the aerospace domain \n\n\n\nThis dataset can be used for training or fine-tuning language models for the identification of the following types of requirements - \n\n \n\nDesign Requirement - Dictates \"how\" a system should be designed given certain technical standards and specifications;\nExample: Trim control systems must be designed to prevent creeping in flight. \n\n \n\nFunctional Requirement - Defines the functions that need to be performed by a system in order to accomplish the desired system functionality;\nExample: Each cockpit voice recorder shall record the voice communications of flight crew members on the flight deck. \n\n \n\nPerformance Requirement - Defines \"how well\" a system needs to perform a certain function;\nExample: The airplane must be free from flutter, control reversal, and divergence for any configuration and condition of operation. \n\n\n\nDataset Structure\n-----------------\n\n\nThe tagging scheme followed: \n\n(1) Design requirements: 0 (Count = 149) \n\n(2) Functional requirements: 1 (Count = 99) \n\n(3) Performance requirements: 2 (Count = 62) \n\n \n\n\n\nThe dataset is of the format: ''requirements | label'' \n\n\n\n\nDataset Creation\n----------------",
"### Source Data\n\n\nA total of 325 aerospace requirements were collected from Parts 23 and 25 of Title 14 of the Code of Federal Regulations (CFRs) and annotated (refer to the paper for more details).",
"### Importing dataset into Python environment\n\n\nUse the following code chunk to import the dataset into Python environment as a DataFrame.",
"### Annotations",
"#### Annotation process\n\n\nA Subject Matter Expert (SME) was consulted for deciding on the annotation categories for the requirements.\n\n\nThe final classification dataset had 149 Design requirements, 99 Functional requirements, and 62 Performance requirements.\nLastly, the 'labels' attached to the requirements (design requirement, functional requirement, and performance requirement) were converted into numeric values: 0, 1, and 2 respectively.",
"### Limitations\n\n\n(1)The dataset is an imbalanced dataset (more Design requirements as compared to the other types). Hence, using ''Accuracy'' as a metric for the model performance is\nNOT a good idea. The use of Precision, Recall, and F1 scores are suggested for model performance evaluation.\n\n\n(2)This dataset does not contain a test set. Hence, it is suggested that the user split the dataset into training/validation/testing after importing the data into a Python environment.\nPlease refer to the Appendix of the paper for information on the test set."
] | [
"TAGS\n#task_categories-text-classification #size_categories-n<1K #language-English #license-apache-2.0 #sentence classification #aerospace requirements #design #functional #performance #requirements #NLP4RE #doi-10.57967/hf/0433 #region-us \n",
"### Dataset Summary\n\n\nThis dataset contains requirements from the aerospace domain. The requirements are tagged based on the \"type\"/category of requirement they belong to.\nThe creation of this dataset is aimed at - \n\n(1) Making available an open-source dataset for aerospace requirements which are often proprietary \n\n(2) Fine-tuning language models for requirements classification specific to the aerospace domain \n\n\n\nThis dataset can be used for training or fine-tuning language models for the identification of the following types of requirements - \n\n \n\nDesign Requirement - Dictates \"how\" a system should be designed given certain technical standards and specifications;\nExample: Trim control systems must be designed to prevent creeping in flight. \n\n \n\nFunctional Requirement - Defines the functions that need to be performed by a system in order to accomplish the desired system functionality;\nExample: Each cockpit voice recorder shall record the voice communications of flight crew members on the flight deck. \n\n \n\nPerformance Requirement - Defines \"how well\" a system needs to perform a certain function;\nExample: The airplane must be free from flutter, control reversal, and divergence for any configuration and condition of operation. \n\n\n\nDataset Structure\n-----------------\n\n\nThe tagging scheme followed: \n\n(1) Design requirements: 0 (Count = 149) \n\n(2) Functional requirements: 1 (Count = 99) \n\n(3) Performance requirements: 2 (Count = 62) \n\n \n\n\n\nThe dataset is of the format: ''requirements | label'' \n\n\n\n\nDataset Creation\n----------------",
"### Source Data\n\n\nA total of 325 aerospace requirements were collected from Parts 23 and 25 of Title 14 of the Code of Federal Regulations (CFRs) and annotated (refer to the paper for more details).",
"### Importing dataset into Python environment\n\n\nUse the following code chunk to import the dataset into Python environment as a DataFrame.",
"### Annotations",
"#### Annotation process\n\n\nA Subject Matter Expert (SME) was consulted for deciding on the annotation categories for the requirements.\n\n\nThe final classification dataset had 149 Design requirements, 99 Functional requirements, and 62 Performance requirements.\nLastly, the 'labels' attached to the requirements (design requirement, functional requirement, and performance requirement) were converted into numeric values: 0, 1, and 2 respectively.",
"### Limitations\n\n\n(1)The dataset is an imbalanced dataset (more Design requirements as compared to the other types). Hence, using ''Accuracy'' as a metric for the model performance is\nNOT a good idea. The use of Precision, Recall, and F1 scores are suggested for model performance evaluation.\n\n\n(2)This dataset does not contain a test set. Hence, it is suggested that the user split the dataset into training/validation/testing after importing the data into a Python environment.\nPlease refer to the Appendix of the paper for information on the test set."
] |
b718f8840ee0773ef9b96369007b38653085719b |
# Dataset Card for "Emoji_Dataset-Openmoji"
The data consists of 618x618 *.png images paired with text captions (4,083 pairs).
All emojis were designed by OpenMoji (https://openmoji.org/), the open-source emoji and icon project.
License: CC BY-SA 4.0
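A minimal loading sketch (assuming the `datasets` library; the field names `image` and `text` follow this card's dataset info):
```
from datasets import load_dataset

emoji = load_dataset("soypablo/Emoji_Dataset-Openmoji", split="train")
sample = emoji[0]
print(sample["text"])                     # caption for the emoji
sample["image"].save("emoji_sample.png")  # PIL image, 618x618 per this card
```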
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | soypablo/Emoji_Dataset-Openmoji | [
"size_categories:1K<n<10K",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | 2023-01-12T06:28:53+00:00 | {"language": ["en"], "license": "cc-by-sa-4.0", "size_categories": ["1K<n<10K"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 85090151.546, "num_examples": 4083}], "download_size": 101470798, "dataset_size": 85090151.546}} | 2023-01-25T10:04:04+00:00 | [] | [
"en"
] | TAGS
#size_categories-1K<n<10K #language-English #license-cc-by-sa-4.0 #region-us
|
# Dataset Card for "Emoji_Dataset-Openmoji"
The data consists of 618x618 *.png images paired with text captions (4,083 pairs).
All emojis were designed by OpenMoji (URL), the open-source emoji and icon project.
License: CC BY-SA 4.0
More Information needed | [
"# Dataset Card for \"Emoji_Dataset-Openmoji\"\n\nAll data is 618*618 size *.png + text(4083 couple).\n\nAll emojis designed by OpenMoji(URL - the open-source emoji and icon project. \n\nLicense: CC BY-SA 4.0\nMore Information needed"
] | [
"TAGS\n#size_categories-1K<n<10K #language-English #license-cc-by-sa-4.0 #region-us \n",
"# Dataset Card for \"Emoji_Dataset-Openmoji\"\n\nAll data is 618*618 size *.png + text(4083 couple).\n\nAll emojis designed by OpenMoji(URL - the open-source emoji and icon project. \n\nLicense: CC BY-SA 4.0\nMore Information needed"
] |
91ebca14baafcf9dd528c2a9b444a61663b754dc |
A database of Wikipedia pages summarizing certain Natural Language Processing model applications. | SinonTM/Wiki-Scraper | [
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:en",
"license:openrail",
"region:us"
] | 2023-01-12T06:34:35+00:00 | {"language": ["en"], "license": "openrail", "size_categories": ["10K<n<100K"], "task_categories": ["summarization"], "pretty_name": "Wiki Scraper"} | 2023-01-12T22:51:50+00:00 | [] | [
"en"
] | TAGS
#task_categories-summarization #size_categories-10K<n<100K #language-English #license-openrail #region-us
|
A database of Wikipedia pages summarizing certain Natural Language Processing model applications. | [] | [
"TAGS\n#task_categories-summarization #size_categories-10K<n<100K #language-English #license-openrail #region-us \n"
] |
7fdf6211323d9578965652e717d3250883a15e30 |
# Dataset Card for "NER Model Tune"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** None
- **Repository:** https://huggingface.co/datasets/ayuhamaro/nlp-model-tune
- **Paper:** [More Information Needed]
- **Leaderboard:** [If the dataset supports an active leaderboard, add link here]()
- **Point of Contact:** [More Information Needed]
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions | ayuhamaro/ner-model-tune | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:zh",
"license:unknown",
"region:us"
] | 2023-01-12T06:35:26+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["zh"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "paperswithcode_id": "nlp-model-tune", "pretty_name": "NER Model Tune", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O,", "1": "B-CARDINAL,", "2": "B-DATE,", "3": "B-EVENT,", "4": "B-FAC,", "5": "B-GPE,", "6": "B-LANGUAGE,", "7": "B-LAW,", "8": "B-LOC,", "9": "B-MONEY,", "10": "B-NORP,", "11": "B-ORDINAL,", "12": "B-ORG,", "13": "B-PERCENT,", "14": "B-PERSON,", "15": "B-PRODUCT,", "16": "B-QUANTITY,", "17": "B-TIME,", "18": "B-WORK_OF_ART,", "19": "I-CARDINAL,", "20": "I-DATE,", "21": "I-EVENT,", "22": "I-FAC,", "23": "I-GPE,", "24": "I-LANGUAGE,", "25": "I-LAW,", "26": "I-LOC,", "27": "I-MONEY,", "28": "I-NORP,", "29": "I-ORDINAL,", "30": "I-ORG,", "31": "I-PERCENT,", "32": "I-PERSON,", "33": "I-PRODUCT,", "34": "I-QUANTITY,", "35": "I-TIME,", "36": "I-WORK_OF_ART,", "37": "E-CARDINAL,", "38": "E-DATE,", "39": "E-EVENT,", "40": "E-FAC,", "41": "E-GPE,", "42": "E-LANGUAGE,", "43": "E-LAW,", "44": "E-LOC,", "45": "E-MONEY,", "46": "E-NORP,", "47": "E-ORDINAL,", "48": "E-ORG,", "49": "E-PERCENT,", "50": "E-PERSON,", "51": "E-PRODUCT,", "52": "E-QUANTITY,", "53": "E-TIME,", "54": "E-WORK_OF_ART,", "55": "S-CARDINAL,", "56": "S-DATE,", "57": "S-EVENT,", "58": "S-FAC,", "59": "S-GPE,", "60": "S-LANGUAGE,", "61": "S-LAW,", "62": "S-LOC,", "63": "S-MONEY,", "64": "S-NORP,", "65": "S-ORDINAL,", "66": "S-ORG,", "67": "S-PERCENT,", "68": "S-PERSON,", "69": "S-PRODUCT,", "70": "S-QUANTITY,", "71": "S-TIME,", "72": "S-WORK_OF_ART"}}}}], "splits": [{"name": "train", "num_bytes": 568, "num_examples": 1}], "download_size": 568, "dataset_size": 568}, "train-eval-index": [{"config": "default", "task": "token-classification", "task_id": "entity_extraction", "splits": {"train_split": "train", "eval_split": "test"}, "col_mapping": {"tokens": "tokens", "ner_tags": "tags"}, "metrics": [{"type": "seqeval", "name": "seqeval"}]}]} | 2023-01-13T07:53:28+00:00 | [] | [
"zh"
] | TAGS
#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Chinese #license-unknown #region-us
|
# Dataset Card for "NER Model Tune"
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: None
- Repository: URL
- Paper:
- Leaderboard: [If the dataset supports an active leaderboard, add link here]()
- Point of Contact:
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions | [
"# Dataset Card for \"NER Model Tune\"",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: None\n- Repository: URL\n- Paper: \n- Leaderboard: [If the dataset supports an active leaderboard, add link here]()\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Chinese #license-unknown #region-us \n",
"# Dataset Card for \"NER Model Tune\"",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: None\n- Repository: URL\n- Paper: \n- Leaderboard: [If the dataset supports an active leaderboard, add link here]()\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
8214795c511cdc0da792e11058c3bb23fdba8687 | # Dataset Card for "dreambooth-hackathon-images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | akshaypt7/dreambooth-hackathon-images | [
"region:us"
] | 2023-01-12T07:00:10+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 936008.0, "num_examples": 30}], "download_size": 0, "dataset_size": 936008.0}} | 2023-01-12T15:41:07+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "dreambooth-hackathon-images"
More Information needed | [
"# Dataset Card for \"dreambooth-hackathon-images\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"dreambooth-hackathon-images\"\n\nMore Information needed"
] |
2728cd590cbac25cf7231203035c88b2f5e8b5ff | # Dataset Card for "free_marco"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | bhavnicksm/free_marco | [
"region:us"
] | 2023-01-12T08:07:55+00:00 | {"dataset_info": {"features": [{"name": "query", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "validation", "num_bytes": 25790184.920231506, "num_examples": 55578}, {"name": "train", "num_bytes": 238011027.40998867, "num_examples": 502939}, {"name": "test"}], "download_size": 175593615, "dataset_size": 263801212.33022016}} | 2023-01-16T08:38:36+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "free_marco"
More Information needed | [
"# Dataset Card for \"free_marco\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"free_marco\"\n\nMore Information needed"
] |
d47c1f0110dc50b716f3c61e2110dbfcd73b1788 |
<h1>This dataset is used to train AI how to use Python.</h1> | derchr/py | [
"license:bigscience-openrail-m",
"region:us"
] | 2023-01-12T09:15:53+00:00 | {"license": "bigscience-openrail-m"} | 2023-01-12T09:21:37+00:00 | [] | [] | TAGS
#license-bigscience-openrail-m #region-us
|
<h1>This dataset is used to train AI how to use Python.</h1> | [] | [
"TAGS\n#license-bigscience-openrail-m #region-us \n"
] |
2f06f7ce1fdca18a088e689c64a3eac5c3788a78 | # Dataset Card for "analysed-diff-metadata"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | mamiksik/analysed-diff-metadata | [
"region:us"
] | 2023-01-12T09:58:34+00:00 | {"dataset_info": {"features": [{"name": "sha", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "committer", "dtype": "string"}, {"name": "message", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "subject_length", "dtype": "float64"}, {"name": "is_chore", "dtype": "bool"}, {"name": "is_bot", "dtype": "bool"}, {"name": "subject_word_count", "dtype": "float64"}, {"name": "verb_object_spacy", "dtype": "bool"}, {"name": "verb_object_stanza", "dtype": "bool"}, {"name": "fits_requirements", "dtype": "bool"}, {"name": "owner", "dtype": "string"}, {"name": "repo", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 237352522, "num_examples": 742125}], "download_size": 114567812, "dataset_size": 237352522}} | 2023-01-17T14:31:35+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "analysed-diff-metadata"
More Information needed | [
"# Dataset Card for \"analysed-diff-metadata\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"analysed-diff-metadata\"\n\nMore Information needed"
] |
e4cd8ce4cf25ff25443e7b64657665a5735e5eb7 | # Dataset Card for "squad_id"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | rahmanfadhil/squad_v2_id | [
"region:us"
] | 2023-01-12T11:01:07+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "int32"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "train", "num_bytes": 121632833, "num_examples": 130318}, {"name": "validation", "num_bytes": 12218827, "num_examples": 11858}], "download_size": 0, "dataset_size": 133851660}} | 2023-01-12T11:14:51+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_id"
More Information needed | [
"# Dataset Card for \"squad_id\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_id\"\n\nMore Information needed"
] |
4459a61365cbfc407774c2093b966bfdc12a1f06 | # Dataset Card for "chicago_early_childhood_education_centers"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | dmargutierrez/chicago_early_childhood_education_centers | [
"region:us"
] | 2023-01-12T11:04:04+00:00 | {"dataset_info": {"features": [{"name": "Id", "dtype": "int64"}, {"name": "Site name", "dtype": "string"}, {"name": "Address", "dtype": "string"}, {"name": "Zip", "dtype": "float64"}, {"name": "Phone", "dtype": "float64"}, {"name": "Program Name", "dtype": "string"}, {"name": "Length of Day", "dtype": "string"}, {"name": "Neighborhood", "dtype": "string"}, {"name": "Funded Enrollment", "dtype": "string"}, {"name": "Program Option", "dtype": "string"}, {"name": "Eearly Head Start Fund", "dtype": "string"}, {"name": "CC fund", "dtype": "string"}, {"name": "Progmod", "dtype": "string"}, {"name": "Website", "dtype": "string"}, {"name": "Center Director", "dtype": "string"}, {"name": "ECE Available Programs", "dtype": "string"}, {"name": "NAEYC Valid Until", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1", "2": "2", "3": "3", "4": "4", "5": "5", "6": "6", "7": "7", "8": "8", "9": "9", "10": "10", "11": "11", "12": "12", "13": "339", "14": "13", "15": "14", "16": "15", "17": "16", "18": "17", "19": "18", "20": "19", "21": "20", "22": "21", "23": "22", "24": "23", "25": "24", "26": "25", "27": "26", "28": "27", "29": "386", "30": "28", "31": "29", "32": "30", "33": "31", "34": "32", "35": "33", "36": "34", "37": "35", "38": "36", "39": "37", "40": "38", "41": "39", "42": "40", "43": "41", "44": "42", "45": "43", "46": "44", "47": "45", "48": "46", "49": "47", "50": "48", "51": "49", "52": "50", "53": "51", "54": "52", "55": "53", "56": "54", "57": "55", "58": "56", "59": "57", "60": "58", "61": "59", "62": "60", "63": "61", "64": "62", "65": "63", "66": "64", "67": "65", "68": "66", "69": "67", "70": "68", "71": "69", "72": "70", "73": "71", "74": "72", "75": "73", "76": "74", "77": "75", "78": "875", "79": "884", "80": "76", "81": "77", "82": "78", "83": "79", "84": "80", "85": "81", "86": "82", "87": "83", "88": "84", "89": "85", "90": "86", "91": "87", "92": "88", "93": "89", "94": "90", "95": "91", "96": "92", "97": "93", "98": "94", "99": "95", "100": "96", "101": "97", "102": "98", "103": "99", "104": "100", "105": "101", "106": "102", "107": "103", "108": "104", "109": "105", "110": "106", "111": "107", "112": "108", "113": "109", "114": "110", "115": "111", "116": "112", "117": "113", "118": "114", "119": "115", "120": "116", "121": "117", "122": "118", "123": "119", "124": "120", "125": "121", "126": "122", "127": "123", "128": "124", "129": "125", "130": "126", "131": "127", "132": "128", "133": "129", "134": "130", "135": "131", "136": "132", "137": "133", "138": "134", "139": "135", "140": "136", "141": "137", "142": "138", "143": "139", "144": "140", "145": "141", "146": "142", "147": "143", "148": "144", "149": "145", "150": "146", "151": "249", "152": "147", "153": "148", "154": "149", "155": "150", "156": "151", "157": "152", "158": "153", "159": "154", "160": "155", "161": "156", "162": "157", "163": "158", "164": "159", "165": "160", "166": "161", "167": "162", "168": "163", "169": "164", "170": "165", "171": "166", "172": "167", "173": "168", "174": "169", "175": "170", "176": "171", "177": "172", "178": "173", "179": "174", "180": "175", "181": "176", "182": "177", "183": "178", "184": "179", "185": "180", "186": "181", "187": "182", "188": "183", "189": "189", "190": "184", "191": "185", "192": "186", "193": "187", "194": "188", "195": "190", "196": "191", "197": "192", "198": "193", "199": "194", "200": "195", "201": "196", "202": "197", "203": "198", "204": "199", "205": "200", "206": "201", "207": 
"202", "208": "203", "209": "204", "210": "205", "211": "206", "212": "207", "213": "208", "214": "209", "215": "210", "216": "211", "217": "212", "218": "213", "219": "214", "220": "215", "221": "216", "222": "217", "223": "218", "224": "219", "225": "220", "226": "221", "227": "222", "228": "223", "229": "224", "230": "225", "231": "226", "232": "227", "233": "228", "234": "229", "235": "230", "236": "231", "237": "232", "238": "233", "239": "234", "240": "235", "241": "236", "242": "237", "243": "238", "244": "239", "245": "240", "246": "241", "247": "242", "248": "243", "249": "244", "250": "245", "251": "246", "252": "247", "253": "248", "254": "250", "255": "251", "256": "252", "257": "253", "258": "254", "259": "255", "260": "256", "261": "257", "262": "258", "263": "259", "264": "260", "265": "261", "266": "262", "267": "263", "268": "264", "269": "265", "270": "266", "271": "267", "272": "268", "273": "269", "274": "270", "275": "271", "276": "272", "277": "273", "278": "274", "279": "275", "280": "276", "281": "277", "282": "278", "283": "279", "284": "280", "285": "281", "286": "282", "287": "283", "288": "284", "289": "285", "290": "286", "291": "287", "292": "288", "293": "289", "294": "290", "295": "291", "296": "292", "297": "293", "298": "294", "299": "295", "300": "296", "301": "297", "302": "298", "303": "299", "304": "300", "305": "301", "306": "302", "307": "303", "308": "304", "309": "305", "310": "306", "311": "307", "312": "308", "313": "309", "314": "310", "315": "311", "316": "312", "317": "313", "318": "314", "319": "315", "320": "316", "321": "317", "322": "318", "323": "319", "324": "320", "325": "321", "326": "322", "327": "323", "328": "324", "329": "325", "330": "326", "331": "327", "332": "328", "333": "329", "334": "330", "335": "331", "336": "332", "337": "333", "338": "334", "339": "335", "340": "336", "341": "337", "342": "338", "343": "340", "344": "341", "345": "342", "346": "343", "347": "344", "348": "345", "349": "346", "350": "347", "351": "348", "352": "349", "353": "350", "354": "351", "355": "352", "356": "353", "357": "354", "358": "355", "359": "356", "360": "357", "361": "358", "362": "359", "363": "360", "364": "361", "365": "362", "366": "363", "367": "364", "368": "365", "369": "366", "370": "367", "371": "368", "372": "369", "373": "370", "374": "371", "375": "372", "376": "373", "377": "374", "378": "375", "379": "376", "380": "377", "381": "378", "382": "379", "383": "380", "384": "381", "385": "382", "386": "383", "387": "384", "388": "385", "389": "387", "390": "388", "391": "389", "392": "390", "393": "391", "394": "392", "395": "393", "396": "394", "397": "395", "398": "396", "399": "397", "400": "398", "401": "399", "402": "400", "403": "401", "404": "402", "405": "403", "406": "404", "407": "405", "408": "406", "409": "407", "410": "408", "411": "409", "412": "410", "413": "411", "414": "412", "415": "413", "416": "414", "417": "415", "418": "416", "419": "417", "420": "418", "421": "419", "422": "420", "423": "421", "424": "422", "425": "423", "426": "424", "427": "425", "428": "426", "429": "427", "430": "428", "431": "429", "432": "430", "433": "431", "434": "432", "435": "433", "436": "434", "437": "435", "438": "436", "439": "437", "440": "438", "441": "439", "442": "440", "443": "441", "444": "442", "445": "443", "446": "444", "447": "445", "448": "446", "449": "447", "450": "448", "451": "449", "452": "450", "453": "451", "454": "452", "455": "453", "456": "454", "457": "455", "458": "456", "459": "457", "460": "458", 
"461": "459", "462": "460", "463": "461", "464": "462", "465": "463", "466": "464", "467": "465", "468": "466", "469": "467", "470": "468", "471": "469", "472": "470", "473": "471", "474": "472", "475": "473", "476": "474", "477": "475", "478": "476", "479": "477", "480": "478", "481": "479", "482": "480", "483": "481", "484": "482", "485": "483", "486": "484", "487": "485", "488": "486", "489": "487", "490": "488", "491": "489", "492": "490", "493": "491", "494": "492", "495": "493", "496": "494", "497": "495", "498": "496", "499": "497", "500": "498", "501": "499", "502": "500", "503": "501", "504": "502", "505": "503", "506": "504", "507": "505", "508": "506", "509": "507", "510": "508", "511": "509", "512": "510", "513": "511", "514": "512", "515": "513", "516": "514", "517": "515", "518": "516", "519": "517", "520": "518", "521": "519", "522": "520", "523": "521", "524": "522", "525": "523", "526": "524", "527": "525", "528": "526", "529": "527", "530": "528", "531": "529", "532": "530", "533": "531", "534": "532", "535": "533", "536": "534", "537": "535", "538": "536", "539": "537", "540": "538", "541": "539", "542": "540", "543": "541", "544": "542", "545": "543", "546": "544", "547": "545", "548": "546", "549": "547", "550": "548", "551": "549", "552": "550", "553": "551", "554": "552", "555": "553", "556": "554", "557": "555", "558": "556", "559": "557", "560": "558", "561": "559", "562": "560", "563": "561", "564": "562", "565": "563", "566": "564", "567": "565", "568": "566", "569": "567", "570": "568", "571": "569", "572": "570", "573": "571", "574": "572", "575": "573", "576": "574", "577": "575", "578": "576", "579": "577", "580": "578", "581": "579", "582": "580", "583": "581", "584": "582", "585": "583", "586": "584", "587": "585", "588": "586", "589": "587", "590": "588", "591": "589", "592": "590", "593": "591", "594": "592", "595": "593", "596": "594", "597": "595", "598": "596", "599": "597", "600": "598", "601": "599", "602": "600", "603": "601", "604": "602", "605": "603", "606": "604", "607": "605", "608": "606", "609": "607", "610": "608", "611": "609", "612": "610", "613": "611", "614": "612", "615": "613", "616": "614", "617": "615", "618": "616", "619": "617", "620": "618", "621": "619", "622": "620", "623": "621", "624": "622", "625": "623", "626": "624", "627": "625", "628": "626", "629": "627", "630": "628", "631": "629", "632": "630", "633": "631", "634": "632", "635": "633", "636": "634", "637": "635", "638": "636", "639": "637", "640": "638", "641": "639", "642": "640", "643": "641", "644": "642", "645": "643", "646": "644", "647": "645", "648": "646", "649": "647", "650": "648", "651": "649", "652": "650", "653": "651", "654": "652", "655": "653", "656": "654", "657": "655", "658": "656", "659": "657", "660": "658", "661": "659", "662": "660", "663": "661", "664": "662", "665": "663", "666": "664", "667": "665", "668": "666", "669": "667", "670": "668", "671": "669", "672": "670", "673": "671", "674": "683", "675": "672", "676": "673", "677": "674", "678": "675", "679": "676", "680": "677", "681": "678", "682": "679", "683": "680", "684": "681", "685": "682", "686": "684", "687": "685", "688": "686", "689": "687", "690": "688", "691": "689", "692": "690", "693": "691", "694": "692", "695": "693", "696": "694", "697": "695", "698": "696", "699": "697", "700": "698", "701": "699", "702": "700", "703": "701", "704": "702", "705": "703", "706": "704", "707": "705", "708": "706", "709": "707", "710": "708", "711": "709", "712": "710", "713": "711", "714": 
"712", "715": "713", "716": "714", "717": "715", "718": "716", "719": "717", "720": "718", "721": "719", "722": "720", "723": "721", "724": "722", "725": "723", "726": "724", "727": "739", "728": "725", "729": "726", "730": "727", "731": "728", "732": "729", "733": "730", "734": "731", "735": "732", "736": "733", "737": "734", "738": "735", "739": "736", "740": "737", "741": "738", "742": "740", "743": "741", "744": "742", "745": "743", "746": "744", "747": "745", "748": "746", "749": "747", "750": "748", "751": "749", "752": "750", "753": "751", "754": "752", "755": "753", "756": "754", "757": "755", "758": "756", "759": "757", "760": "758", "761": "759", "762": "760", "763": "761", "764": "762", "765": "763", "766": "764", "767": "765", "768": "766", "769": "767", "770": "768", "771": "769", "772": "770", "773": "771", "774": "772", "775": "773", "776": "774", "777": "775", "778": "776", "779": "777", "780": "778", "781": "779", "782": "780", "783": "781", "784": "782", "785": "783", "786": "784", "787": "785", "788": "786", "789": "787", "790": "788", "791": "789", "792": "790", "793": "791", "794": "792", "795": "793", "796": "794", "797": "795", "798": "796", "799": "797", "800": "798", "801": "799", "802": "800", "803": "801", "804": "802", "805": "803", "806": "804", "807": "805", "808": "806", "809": "807", "810": "808", "811": "809", "812": "810", "813": "811", "814": "812", "815": "813", "816": "814", "817": "815", "818": "816", "819": "817", "820": "818", "821": "819", "822": "820", "823": "821", "824": "822", "825": "823", "826": "824", "827": "825", "828": "826", "829": "827", "830": "828", "831": "829", "832": "830", "833": "831", "834": "832", "835": "833", "836": "834", "837": "835", "838": "836", "839": "837", "840": "838", "841": "839", "842": "840", "843": "841", "844": "842", "845": "843", "846": "844", "847": "845", "848": "846", "849": "847", "850": "848", "851": "849", "852": "850", "853": "851", "854": "852", "855": "853", "856": "854", "857": "855", "858": "856", "859": "857", "860": "858", "861": "859", "862": "860", "863": "861", "864": "862", "865": "863", "866": "864", "867": "865", "868": "866", "869": "867", "870": "868", "871": "869", "872": "870", "873": "871", "874": "872", "875": "873", "876": "874", "877": "876", "878": "877", "879": "878", "880": "879", "881": "880", "882": "881", "883": "882", "884": "883", "885": "885", "886": "886", "887": "887", "888": "888", "889": "889", "890": "890", "891": "891", "892": "892", "893": "893", "894": "894", "895": "895", "896": "896", "897": "897", "898": "898", "899": "899", "900": "900", "901": "901", "902": "902", "903": "903", "904": "904", "905": "905", "906": "906", "907": "907", "908": "908", "909": "909", "910": "910", "911": "911", "912": "912", "913": "913", "914": "914", "915": "915", "916": "916", "917": "917", "918": "918", "919": "919", "920": "920", "921": "921", "922": "922", "923": "923", "924": "924", "925": "925", "926": "926", "927": "927", "928": "928", "929": "929", "930": "930", "931": "931", "932": "932", "933": "933", "934": "934", "935": "935", "936": "936", "937": "937", "938": "938", "939": "939", "940": "940", "941": "941", "942": "942", "943": "943", "944": "944", "945": "945", "946": "946", "947": "947", "948": "948", "949": "949", "950": "950", "951": "951", "952": "952", "953": "953", "954": "954", "955": "955", "956": "956", "957": "957", "958": "958", "959": "959", "960": "960", "961": "961", "962": "962", "963": "963", "964": "964", "965": "965", "966": "966", "967": "967", 
"968": "968", "969": "969", "970": "970", "971": "971", "972": "972", "973": "973", "974": "974", "975": "975", "976": "976", "977": "977", "978": "978", "979": "979", "980": "980", "981": "981", "982": "982", "983": "983", "984": "984", "985": "985", "986": "986", "987": "987", "988": "988", "989": "989", "990": "990", "991": "991", "992": "992", "993": "993", "994": "994", "995": "995", "996": "996", "997": "997", "998": "998", "999": "999", "1000": "1000", "1001": "1001", "1002": "1002", "1003": "1003", "1004": "1004", "1005": "1005", "1006": "1006", "1007": "1007", "1008": "1008", "1009": "1009", "1010": "1010", "1011": "1011", "1012": "1012", "1013": "1013", "1014": "1014", "1015": "1015", "1016": "1016", "1017": "1017", "1018": "1018", "1019": "1019", "1020": "1020", "1021": "1021", "1022": "1022", "1023": "1023", "1024": "1024", "1025": "1025", "1026": "1026", "1027": "1027", "1028": "1028", "1029": "1029", "1030": "1030", "1031": "1031", "1032": "1032", "1033": "1033", "1034": "1034", "1035": "1035", "1036": "1036", "1037": "1037", "1038": "1038", "1039": "1039", "1040": "1040", "1041": "1041", "1042": "1042", "1043": "1043", "1044": "1044", "1045": "1045", "1046": "1046", "1047": "1047", "1048": "1048", "1049": "1049", "1050": "1050", "1051": "1051", "1052": "1052", "1053": "1053", "1054": "1054", "1055": "1055", "1056": "1056", "1057": "1057", "1058": "1058", "1059": "1059", "1060": "1060", "1061": "1061", "1062": "1062", "1063": "1063", "1064": "1064", "1065": "1065", "1066": "1066", "1067": "1067", "1068": "1068", "1069": "1069", "1070": "1070", "1071": "1071", "1072": "1072", "1073": "1073", "1074": "1074", "1075": "1075", "1076": "1076", "1077": "1077", "1078": "1078", "1079": "1079", "1080": "1080", "1081": "1081", "1082": "1082", "1083": "1083", "1084": "1084", "1085": "1085", "1086": "1086", "1087": "1087", "1088": "1088", "1089": "1089", "1090": "1090", "1091": "1091", "1092": "1092", "1093": "1093", "1094": "1094", "1095": "1095", "1096": "1096", "1097": "1097", "1098": "1098", "1099": "1099", "1100": "1100", "1101": "1101", "1102": "1102", "1103": "1103", "1104": "1104", "1105": "1105", "1106": "1106", "1107": "1107", "1108": "1108", "1109": "1109", "1110": "1110", "1111": "1111", "1112": "1112", "1113": "1113", "1114": "1114", "1115": "1115", "1116": "1116", "1117": "1117", "1118": "1118", "1119": "1119", "1120": "1120", "1121": "1121", "1122": "1122", "1123": "1123", "1124": "1124", "1125": "1125", "1126": "1126", "1127": "1127", "1128": "1128", "1129": "1129", "1130": "1130", "1131": "1131", "1132": "1132", "1133": "1133", "1134": "1134", "1135": "1135", "1136": "1136", "1137": "1137", "1138": "1138", "1139": "1139", "1140": "1140", "1141": "1141", "1142": "1142", "1143": "1143", "1144": "1144", "1145": "1145", "1146": "1146", "1147": "1147", "1148": "1148", "1149": "1149", "1150": "1150", "1151": "1151", "1152": "1152", "1153": "1153", "1154": "1154", "1155": "1155", "1156": "1156", "1157": "1157", "1158": "1158", "1159": "1159", "1160": "1160", "1161": "1161"}}}}, {"name": "tisix_row_index", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 662438, "num_examples": 3337}], "download_size": 247923, "dataset_size": 662438}} | 2023-01-12T11:04:12+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "chicago_early_childhood_education_centers"
More Information needed | [
"# Dataset Card for \"chicago_early_childhood_education_centers\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"chicago_early_childhood_education_centers\"\n\nMore Information needed"
] |
b9c1e53e23999d2cd6ff4edc2441cc0c5b224c37 | # Dataset Card for "c_corpus_br_finetuning_language_model_bert"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | rosimeirecosta/c_corpus_br_finetuning_language_model_bert | [
"region:us"
] | 2023-01-12T13:40:03+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 36065567, "num_examples": 228736}, {"name": "validation", "num_bytes": 9012563, "num_examples": 57184}], "download_size": 0, "dataset_size": 45078130}} | 2023-01-12T14:37:37+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "c_corpus_br_finetuning_language_model_bert"
More Information needed | [
"# Dataset Card for \"c_corpus_br_finetuning_language_model_bert\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"c_corpus_br_finetuning_language_model_bert\"\n\nMore Information needed"
] |
3c830737fc5e984a6415d45923fa575c711338c7 | # Dataset Card for "danbooru_small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | leemeng/danbooru_small | [
"region:us"
] | 2023-01-12T13:40:20+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 215463820.99, "num_examples": 1953}], "download_size": 207744589, "dataset_size": 215463820.99}} | 2023-01-12T13:40:43+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "danbooru_small"
More Information needed | [
"# Dataset Card for \"danbooru_small\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"danbooru_small\"\n\nMore Information needed"
] |
c4a4cdc67b77fce148b45484a067957bf75ec4c3 | # Flickr30k (1K test set)
Original paper: [From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions](https://aclanthology.org/Q14-1006)
Homepage: https://shannon.cs.illinois.edu/DenotationGraph/
1K test set split from: http://cs.stanford.edu/people/karpathy/deepimagesent/caption_datasets.zip
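
For quick inspection, the split can be pulled with the `datasets` library (a sketch; the repo id is taken from this card, and column names are printed rather than assumed since they are not documented here):

```python
from datasets import load_dataset

# Repo id taken from this dataset card.
ds = load_dataset("nlphuji/flickr_1k_test_image_text_retrieval")

print(ds)  # show the available splits and their sizes
first_split = next(iter(ds.values()))
print(first_split.column_names)  # inspect image/caption fields before using them
```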
Bibtex:
```
@article{young2014image,
title={From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions},
author={Young, Peter and Lai, Alice and Hodosh, Micah and Hockenmaier, Julia},
journal={Transactions of the Association for Computational Linguistics},
volume={2},
pages={67--78},
year={2014},
publisher={MIT Press}
}
``` | nlphuji/flickr_1k_test_image_text_retrieval | [
"region:us"
] | 2023-01-12T14:36:57+00:00 | {} | 2023-01-14T19:54:08+00:00 | [] | [] | TAGS
#region-us
| # Flickr30k (1K test set)
Original paper: From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions
Homepage: URL
1K test set split from: URL
Bibtex:
| [
"# Flickr30k (1K test set)\n\nOriginal paper: From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions\n\nHomepage: URL\n\n1K test set split from: URL\n\nBibtex:"
] | [
"TAGS\n#region-us \n",
"# Flickr30k (1K test set)\n\nOriginal paper: From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions\n\nHomepage: URL\n\n1K test set split from: URL\n\nBibtex:"
] |
551c4f7667f06fa82b4ef0a07617bfc4cf324ac3 | # MSCOCO (5K test set)
Original paper: [Microsoft COCO: Common Objects in Context](https://arxiv.org/abs/1405.0312)
Homepage: https://cocodataset.org/#home
5K test set split from: http://cs.stanford.edu/people/karpathy/deepimagesent/caption_datasets.zip
Bibtex:
```
@inproceedings{lin2014microsoft,
title={Microsoft coco: Common objects in context},
author={Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence},
booktitle={European conference on computer vision},
pages={740--755},
year={2014},
organization={Springer}
}
``` | nlphuji/mscoco_2014_5k_test_image_text_retrieval | [
"arxiv:1405.0312",
"region:us"
] | 2023-01-12T14:37:24+00:00 | {} | 2023-01-18T00:08:42+00:00 | [
"1405.0312"
] | [] | TAGS
#arxiv-1405.0312 #region-us
| # MSCOCO (5K test set)
Original paper: Microsoft COCO: Common Objects in Context
Homepage: URL
5K test set split from: URL
Bibtex:
| [
"# MSCOCO (5K test set)\n\nOriginal paper: Microsoft COCO: Common Objects in Context\n\n\nHomepage: URL\n\n5K test set split from: URL\n\nBibtex:"
] | [
"TAGS\n#arxiv-1405.0312 #region-us \n",
"# MSCOCO (5K test set)\n\nOriginal paper: Microsoft COCO: Common Objects in Context\n\n\nHomepage: URL\n\n5K test set split from: URL\n\nBibtex:"
] |
0dcf8311cae5a45dee0ded3fea676a1551c1cd68 |
# Dataset Card for HumSet
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [http://blog.thedeep.io/humset/](http://blog.thedeep.io/humset/)
- **Repository:** [https://github.com/the-deep/humset](https://github.com/the-deep/humset)
- **Paper:** [EMNLP Findings 2022](https://aclanthology.org/2022.findings-emnlp.321)
- **Leaderboard:**
- **Point of Contact:** [the DEEP NLP team](mailto:[email protected])
### Dataset Summary
HumSet is a novel and rich multilingual dataset of humanitarian response documents annotated by experts in the humanitarian response community. HumSet is curated by humanitarian analysts and covers various disasters around the globe that occurred from 2018 to 2021 in 46 humanitarian response projects. The dataset consists of approximately 17K annotated documents in three languages (English, French, and Spanish), originally taken from publicly-available resources. For each document, analysts have identified informative snippets (entries) with respect to common humanitarian frameworks and assigned one or many classes to each entry. See our paper for details.
### Supported Tasks and Leaderboards
This dataset is intended for multi-label classification
### Languages
This dataset is in English, French and Spanish
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- **entry_id**: unique identification number for a given entry. (string)
- **lead_id**: unique identification number for the document to which the corresponding entry belongs. (string)
- **project_id**: unique identification number for the project to which the corresponding entry belongs. (string)
- **sectors**, **pillars_1d**, **pillars_2d**, **subpillars_1d**, **subpillars_2d**: labels assigned to the corresponding entry. Since this is a multi-label dataset (each entry may have several annotations belonging to the same category), they are reported as arrays of strings. See the paper for a detailed description of these categories. (list)
- **lang**: language. (str)
- **n_tokens**: number of tokens (tokenized using NLTK v3.7 library). (int64)
- **project_title**: the name of the project where the corresponding annotation was created. (str)
- **created_at**: date and time of creation of the annotation in standard ISO 8601 format. (str)
- **document**: document URL source of the excerpt. (str)
- **excerpt**: excerpt text. (str)
### Data Splits
The dataset includes a set of train/validation/test splits, with 117435, 16039 and 15147 examples respectively.
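
A minimal loading sketch in Python (assuming the standard `datasets` loader works for this repository; the repo id is taken from this card):

```python
from datasets import load_dataset

# Repo id taken from this card; split names follow the Data Splits section above.
humset = load_dataset("nlp-thedeep/humset")
print({name: len(split) for name, split in humset.items()})

example = humset["train"][0]
# Multi-label fields are sequences of class-label ids; decode them to strings.
sector_labels = humset["train"].features["sectors"].feature
print(example["excerpt"][:100])
print([sector_labels.int2str(i) for i in example["sectors"]])
```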
## Dataset Creation
The collection originated from a multi-organizational platform called <em>the Data Entry and Exploration Platform (DEEP)</em> developed and maintained by Data Friendly Space (DFS). The platform facilitates classifying primarily qualitative information with respect to analysis frameworks and allows for collaborative classification and annotation of secondary data.
### Curation Rationale
[More Information Needed]
### Source Data
Documents are selected from different sources, ranging from official reports by humanitarian organizations to international and national media articles. See the paper for more information.
#### Initial Data Collection and Normalization
#### Who are the source language producers?
[More Information Needed]
#### Annotation process
HumSet is curated by humanitarian analysts and covers various disasters around the globe that occurred from 2018 to 2021 in 46 humanitarian response projects. The dataset consists of approximately 17K annotated documents in three
languages (English, French, and Spanish), originally taken from publicly-available resources. For
each document, analysts have identified informative snippets (entries, or excerpt in the imported dataset) with respect to common <em>humanitarian frameworks</em> and assigned one or many classes to each entry.
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
NLP team at [Data Friendly Space](https://datafriendlyspace.org/)
### Licensing Information
The GitHub repository which houses this dataset has an Apache License 2.0.
### Citation Information
```
@inproceedings{fekih-etal-2022-humset,
title = "{H}um{S}et: Dataset of Multilingual Information Extraction and Classification for Humanitarian Crises Response",
author = "Fekih, Selim and
Tamagnone, Nicolo{'} and
Minixhofer, Benjamin and
Shrestha, Ranjan and
Contla, Ximena and
Oglethorpe, Ewan and
Rekabsaz, Navid",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-emnlp.321",
pages = "4379--4389",
}
```
| nlp-thedeep/humset | [
"task_categories:text-classification",
"task_categories:text-retrieval",
"task_categories:token-classification",
"task_ids:multi-label-classification",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"language:fr",
"language:es",
"license:apache-2.0",
"humanitarian",
"research",
"analytical-framework",
"multilabel",
"humset",
"humbert",
"region:us"
] | 2023-01-12T16:00:58+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en", "fr", "es"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-classification", "text-retrieval", "token-classification"], "task_ids": ["multi-label-classification"], "pretty_name": "HumSet", "tags": ["humanitarian", "research", "analytical-framework", "multilabel", "humset", "humbert"], "dataset_info": {"features": [{"name": "entry_id", "dtype": "string"}, {"name": "lead_id", "dtype": "string"}, {"name": "project_id", "dtype": "string"}, {"name": "lang", "dtype": "string"}, {"name": "n_tokens", "dtype": "int64"}, {"name": "project_title", "dtype": "string"}, {"name": "created_at", "dtype": "string"}, {"name": "document", "dtype": "string"}, {"name": "excerpt", "dtype": "string"}, {"name": "sectors", "sequence": {"class_label": {"names": {"0": "Agriculture", "1": "Cross", "2": "Education", "3": "Food Security", "4": "Health", "5": "Livelihoods", "6": "Logistics", "7": "Nutrition", "8": "Protection", "9": "Shelter", "10": "WASH"}}}}, {"name": "pillars_1d", "sequence": {"class_label": {"names": {"0": "Casualties", "1": "Context", "2": "Covid-19", "3": "Displacement", "4": "Humanitarian Access", "5": "Information And Communication", "6": "Shock/Event"}}}}, {"name": "pillars_2d", "sequence": {"class_label": {"names": {"0": "At Risk", "1": "Capacities & Response", "2": "Humanitarian Conditions", "3": "Impact", "4": "Priority Interventions", "5": "Priority Needs"}}}}, {"name": "subpillars_1d", "sequence": {"class_label": {"names": {"0": "Casualties->Dead", "1": "Casualties->Injured", "2": "Casualties->Missing", "3": "Context->Demography", "4": "Context->Economy", "5": "Context->Environment", "6": "Context->Legal & Policy", "7": "Context->Politics", "8": "Context->Security & Stability", "9": "Context->Socio Cultural", "10": "Covid-19->Cases", "11": "Covid-19->Contact Tracing", "12": "Covid-19->Deaths", "13": "Covid-19->Hospitalization & Care", "14": "Covid-19->Restriction Measures", "15": "Covid-19->Testing", "16": "Covid-19->Vaccination", "17": "Displacement->Intentions", "18": "Displacement->Local Integration", "19": "Displacement->Pull Factors", "20": "Displacement->Push Factors", "21": "Displacement->Type/Numbers/Movements", "22": "Humanitarian Access->Number Of People Facing Humanitarian Access Constraints/Humanitarian Access Gaps", "23": "Humanitarian Access->Physical Constraints", "24": "Humanitarian Access->Population To Relief", "25": "Humanitarian Access->Relief To Population", "26": "Information And Communication->Communication Means And Preferences", "27": "Information And Communication->Information Challenges And Barriers", "28": "Information And Communication->Knowledge And Info Gaps (Hum)", "29": "Information And Communication->Knowledge And Info Gaps (Pop)", "30": "Shock/Event->Hazard & Threats", "31": "Shock/Event->Type And Characteristics", "32": "Shock/Event->Underlying/Aggravating Factors"}}}}, {"name": "subpillars_2d", "sequence": {"class_label": {"names": {"0": "At Risk->Number Of People At Risk", "1": "At Risk->Risk And Vulnerabilities", "2": "Capacities & Response->International Response", "3": "Capacities & Response->Local Response", "4": "Capacities & Response->National Response", "5": "Capacities & Response->Number Of People Reached/Response Gaps", "6": "Humanitarian Conditions->Coping Mechanisms", "7": "Humanitarian 
Conditions->Living Standards", "8": "Humanitarian Conditions->Number Of People In Need", "9": "Humanitarian Conditions->Physical And Mental Well Being", "10": "Impact->Driver/Aggravating Factors", "11": "Impact->Impact On People", "12": "Impact->Impact On Systems, Services And Networks", "13": "Impact->Number Of People Affected", "14": "Priority Interventions->Expressed By Humanitarian Staff", "15": "Priority Interventions->Expressed By Population", "16": "Priority Needs->Expressed By Humanitarian Staff", "17": "Priority Needs->Expressed By Population"}}}}], "splits": [{"name": "train", "num_examples": 117435}, {"name": "validation", "num_examples": 16039}, {"name": "test", "num_examples": 15147}]}} | 2023-05-25T16:14:31+00:00 | [] | [
"en",
"fr",
"es"
] | TAGS
#task_categories-text-classification #task_categories-text-retrieval #task_categories-token-classification #task_ids-multi-label-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-multilingual #size_categories-100K<n<1M #source_datasets-original #language-English #language-French #language-Spanish #license-apache-2.0 #humanitarian #research #analytical-framework #multilabel #humset #humbert #region-us
|
# Dataset Card for HumSet
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper: EMNLP Findings 2022
- Leaderboard:
- Point of Contact: the DEEP NLP team
### Dataset Summary
HumSet is a novel and rich multilingual dataset of humanitarian response documents annotated by experts in the humanitarian response community. HumSet is curated by humanitarian analysts and covers various disasters around the globe that occurred from 2018 to 2021 in 46 humanitarian response projects. The dataset consists of approximately 17K annotated documents in three languages (English, French, and Spanish), originally taken from publicly-available resources. For each document, analysts have identified informative snippets (entries) with respect to common humanitarian frameworks and assigned one or many classes to each entry. See our paper for details.
### Supported Tasks and Leaderboards
This dataset is intended for multi-label classification
### Languages
This dataset is in English, French and Spanish
## Dataset Structure
### Data Instances
### Data Fields
- entry_id: unique identification number for a given entry. (string)
- lead_id: unique identification number for the document to which the corresponding entry belongs. (string)
- project_id: unique identification number for the project to which the corresponding entry belongs. (string)
- sectors, pillars_1d, pillars_2d, subpillars_1d, subpillars_2d: labels assigned to the corresponding entry. Since this is a multi-label dataset (each entry may have several annotations belonging to the same category), they are reported as arrays of strings. See the paper for a detailed description of these categories. (list)
- lang: language. (str)
- n_tokens: number of tokens (tokenized using NLTK v3.7 library). (int64)
- project_title: the name of the project where the corresponding annotation was created. (str)
- created_at: date and time of creation of the annotation in standard ISO 8601 format. (str)
- document: document URL source of the excerpt. (str)
- excerpt: excerpt text. (str)
### Data Splits
The dataset includes a set of train/validation/test splits, with 117435, 16039 and 15147 examples respectively.
## Dataset Creation
The collection originated from a multi-organizational platform called <em>the Data Entry and Exploration Platform (DEEP)</em> developed and maintained by Data Friendly Space (DFS). The platform facilitates classifying primarily qualitative information with respect to analysis frameworks and allows for collaborative classification and annotation of secondary data.
### Curation Rationale
### Source Data
Documents are selected from different sources, ranging from official reports by humanitarian organizations to international and national media articles. See the paper for more information.
#### Initial Data Collection and Normalization
#### Who are the source language producers?
#### Annotation process
HumSet is curated by humanitarian analysts and covers various disasters around the globe that occurred from 2018 to 2021 in 46 humanitarian response projects. The dataset consists of approximately 17K annotated documents in three
languages (English, French, and Spanish), originally taken from publicly-available resources. For
each document, analysts have identified informative snippets (entries, or excerpt in the imported dataset) with respect to common <em>humanitarian frameworks</em> and assigned one or many classes to each entry.
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
NLP team at Data Friendly Space
### Licensing Information
The GitHub repository which houses this dataset has an Apache License 2.0.
| [
"# Dataset Card for HumSet",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: EMNLP Findings 2022\n- Leaderboard:\n- Point of Contact:the DEEP NLP team",
"### Dataset Summary\n\nHumSet is a novel and rich multilingual dataset of humanitarian response documents annotated by experts in the humanitarian response community. HumSet is curated by humanitarian analysts and covers various disasters around the globe that occurred from 2018 to 2021 in 46 humanitarian response projects. The dataset consists of approximately 17K annotated documents in three languages of English, French, and Spanish, originally taken from publicly-available resources. For each document, analysts have identified informative snippets (entries) in respect to common humanitarian frameworks, and assigned one or many classes to each entry. See the our paper for details.",
"### Supported Tasks and Leaderboards\n\nThis dataset is intended for multi-label classification",
"### Languages\n\nThis dataset is in English, French and Spanish",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- entry_id: unique identification number for a given entry. (string)\n- lead_id: unique identification number for the document to which the corrisponding entry belongs. (string)\n- project_id unique identification number for the project to which the corrisponding entry belongs. (string)\n- sectors, pillars_1d, pillars_2d, subpillars_1d, subpillars_2d: labels assigned to the corresponding entry. Since this is a multi-label dataset (each entry may have several annotations belonging to the same category), they are reported as arrays of strings. See the paper for a detailed description of these categories. (list)\n- lang: language. (str)\n- n_tokens: number of tokens (tokenized using NLTK v3.7 library). (int64)\n- project_title: the name of the project where the corresponding annotation was created. (str)\n- created_at: date and time of creation of the annotation in stardard ISO 8601 format. (str)\n- document: document URL source of the excerpt. (str)\n- excerpt: excerpt text. (str)",
"### Data Splits\n\nThe dataset includes a set of train/validation/test splits, with 117435, 16039 and 15147 examples respectively.",
"## Dataset Creation\n\nThe collection originated from a multi-organizational platform called <em>the Data Entry and Exploration Platform (DEEP)</em> developed and maintained by Data Friendly Space (DFS). The platform facilitates classifying primarily qualitative information with respect to analysis frameworks and allows for collaborative classification and annotation of secondary data.",
"### Curation Rationale",
"### Source Data\n\nDocuments are selected from different sources, ranging from official reports by humanitarian organizations to international and national media articles. See the paper for more informations.",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"#### Annotation process\n\nHumSet is curated by humanitarian analysts and covers various disasters around the globe that occurred from 2018 to 2021 in 46 humanitarian response projects. The dataset consists of approximately 17K annotated documents in three\nlanguages of English, French, and Spanish, originally taken from publicly-available resources. For\neach document, analysts have identified informative snippets (entries, or excerpt in the imported dataset) with respect to common <em>humanitarian frameworks</em> and assigned one or many classes to each entry.",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\nNLP team at Data Friendly Space",
"### Licensing Information\n\nThe GitHub repository which houses this dataset has an Apache License 2.0."
] | [
"TAGS\n#task_categories-text-classification #task_categories-text-retrieval #task_categories-token-classification #task_ids-multi-label-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-multilingual #size_categories-100K<n<1M #source_datasets-original #language-English #language-French #language-Spanish #license-apache-2.0 #humanitarian #research #analytical-framework #multilabel #humset #humbert #region-us \n",
"# Dataset Card for HumSet",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: EMNLP Findings 2022\n- Leaderboard:\n- Point of Contact:the DEEP NLP team",
"### Dataset Summary\n\nHumSet is a novel and rich multilingual dataset of humanitarian response documents annotated by experts in the humanitarian response community. HumSet is curated by humanitarian analysts and covers various disasters around the globe that occurred from 2018 to 2021 in 46 humanitarian response projects. The dataset consists of approximately 17K annotated documents in three languages of English, French, and Spanish, originally taken from publicly-available resources. For each document, analysts have identified informative snippets (entries) in respect to common humanitarian frameworks, and assigned one or many classes to each entry. See the our paper for details.",
"### Supported Tasks and Leaderboards\n\nThis dataset is intended for multi-label classification",
"### Languages\n\nThis dataset is in English, French and Spanish",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- entry_id: unique identification number for a given entry. (string)\n- lead_id: unique identification number for the document to which the corrisponding entry belongs. (string)\n- project_id unique identification number for the project to which the corrisponding entry belongs. (string)\n- sectors, pillars_1d, pillars_2d, subpillars_1d, subpillars_2d: labels assigned to the corresponding entry. Since this is a multi-label dataset (each entry may have several annotations belonging to the same category), they are reported as arrays of strings. See the paper for a detailed description of these categories. (list)\n- lang: language. (str)\n- n_tokens: number of tokens (tokenized using NLTK v3.7 library). (int64)\n- project_title: the name of the project where the corresponding annotation was created. (str)\n- created_at: date and time of creation of the annotation in stardard ISO 8601 format. (str)\n- document: document URL source of the excerpt. (str)\n- excerpt: excerpt text. (str)",
"### Data Splits\n\nThe dataset includes a set of train/validation/test splits, with 117435, 16039 and 15147 examples respectively.",
"## Dataset Creation\n\nThe collection originated from a multi-organizational platform called <em>the Data Entry and Exploration Platform (DEEP)</em> developed and maintained by Data Friendly Space (DFS). The platform facilitates classifying primarily qualitative information with respect to analysis frameworks and allows for collaborative classification and annotation of secondary data.",
"### Curation Rationale",
"### Source Data\n\nDocuments are selected from different sources, ranging from official reports by humanitarian organizations to international and national media articles. See the paper for more informations.",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"#### Annotation process\n\nHumSet is curated by humanitarian analysts and covers various disasters around the globe that occurred from 2018 to 2021 in 46 humanitarian response projects. The dataset consists of approximately 17K annotated documents in three\nlanguages of English, French, and Spanish, originally taken from publicly-available resources. For\neach document, analysts have identified informative snippets (entries, or excerpt in the imported dataset) with respect to common <em>humanitarian frameworks</em> and assigned one or many classes to each entry.",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\nNLP team at Data Friendly Space",
"### Licensing Information\n\nThe GitHub repository which houses this dataset has an Apache License 2.0."
] |
f573fc09166c8a89d17893edb74a4d9b3c6932f5 | # Dataset Card for "dreambooth-hackathon-images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | hedronstone/dreambooth-hackathon-images | [
"region:us"
] | 2023-01-12T19:34:44+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1396946.0, "num_examples": 5}], "download_size": 1323697, "dataset_size": 1396946.0}} | 2023-01-12T19:34:50+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "dreambooth-hackathon-images"
More Information needed | [
"# Dataset Card for \"dreambooth-hackathon-images\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"dreambooth-hackathon-images\"\n\nMore Information needed"
] |
5015082451567355c73261dcf2c9594b09501e41 | # Dataset Card for "Scored-Summarization-datasets"
A collection of text summarization datasets geared towards training a multi-purpose text summarizer.
Each dataset is a parquet file with the following features.
#### default
- `text`: a `string` feature. The `source` document
- `summary`: a `string` feature. The summary of the document
- `provenance`: a `string` feature. Information about the sub-dataset.
- `t5_text_token_count`: an `int64` feature. The number of tokens the text is encoded in.
- `t5_summary_token_count`: an `int64` feature. The number of tokens the summary is encoded in.
- `contriever_cos`: a `float64` feature. The Cosine Similarity of the Contriever text embedding and Contriever summary embedding.
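
As a minimal reading sketch, the score columns can be used to filter entries (the parquet file name below is illustrative, not taken from the repository listing):

```python
import pandas as pd

# Illustrative file name; replace with an actual parquet file from this repo.
df = pd.read_parquet("samsum.parquet")

# Example use of the score columns: keep pairs whose summary is well aligned
# with its source and whose source fits a typical encoder context window.
subset = df[(df["contriever_cos"] > 0.5) & (df["t5_text_token_count"] < 1024)]
print(subset[["provenance", "text", "summary"]].head())
```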
### Sub-datasets
- billsum
- cnn_dailymail/3.0.0
- multixscience
- newsroom
- samsum
- scitldr/AIC
- tldr-challenge
- wikihow
- xsum
Information about the Contriever model can be found here: https://github.com/facebookresearch/contriever.
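
As a rough sketch, a `contriever_cos`-style score could be reproduced as follows (mean pooling over `facebook/contriever` outputs is an assumption based on the Contriever repository, not a detail stated on this card):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("facebook/contriever")
model = AutoModel.from_pretrained("facebook/contriever")

def embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state
    mask = batch["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)  # mean pooling over real tokens

text_emb, summary_emb = embed(["a long source document ...", "its summary ..."])
score = torch.nn.functional.cosine_similarity(text_emb, summary_emb, dim=0)
print(float(score))
```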
| jordiclive/scored_summarization_datasets | [
"region:us"
] | 2023-01-12T20:05:45+00:00 | {} | 2023-02-05T16:14:10+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "Scored-Summarization-datasets"
A collection of text summarization datasets geared towards training a multi-purpose text summarizer.
Each dataset is a parquet file with the following features.
#### default
- 'text': a 'string' feature. The 'source' document
- 'summary': a 'string' feature. The summary of the document
- 'provenance': a 'string' feature. Information about the sub dataset.
- 't5_text_token_count': an 'int64' feature. The number of T5 tokens the text is encoded into.
- 't5_summary_token_count': an 'int64' feature. The number of T5 tokens the summary is encoded into.
- 'contriever_cos': a 'float64' feature. The cosine similarity between the Contriever embedding of the text and the Contriever embedding of the summary.
### Sub-datasets
- billsum
- cnn_dailymail/3.0.0
- multixscience
- newsroom
- samsum
- scitldr/AIC
- tldr-challenge
- wikihow
- xsum
Information about the Contriever model can be found here: URL
| [
"# Dataset Card for \"Scored-Summarization-datasets\"\nA collection of Text summarization datasets geared towards training a multi-purpose text summarizer.\n\nEach dataset is a parquet file with the following features.",
"#### default\n- 'text': a 'string' feature. The 'source' document\n- 'summary': a 'string' feature. The summary of the document\n- 'provenance': a 'string' feature. Information about the sub dataset.\n- 't5_text_token_count': a 'int64' feature. The number of tokens the text is encoded in.\n- 't5_summary_token_count ': a 'int64' feature. The number of tokens the summary is encoded in.\n- 'contriever_cos': a 'float64' feature. The Cosine Similarity of the Contriever text embedding and Contriever summary embedding.",
"### Sub-datasets\n- billsum\n- cnn_dailymail/3.0.0\n- multixscience\n- newsroom\n- samsum\n- scitldr/AIC\n- tldr-challenge\n- wikihow\n- xsum\n\nInformation about the Contriever model can be found here: URL"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"Scored-Summarization-datasets\"\nA collection of Text summarization datasets geared towards training a multi-purpose text summarizer.\n\nEach dataset is a parquet file with the following features.",
"#### default\n- 'text': a 'string' feature. The 'source' document\n- 'summary': a 'string' feature. The summary of the document\n- 'provenance': a 'string' feature. Information about the sub dataset.\n- 't5_text_token_count': a 'int64' feature. The number of tokens the text is encoded in.\n- 't5_summary_token_count ': a 'int64' feature. The number of tokens the summary is encoded in.\n- 'contriever_cos': a 'float64' feature. The Cosine Similarity of the Contriever text embedding and Contriever summary embedding.",
"### Sub-datasets\n- billsum\n- cnn_dailymail/3.0.0\n- multixscience\n- newsroom\n- samsum\n- scitldr/AIC\n- tldr-challenge\n- wikihow\n- xsum\n\nInformation about the Contriever model can be found here: URL"
] |
5f9e666e90d0ddfd6413089f074019da08cdad52 |
## Dataset Description
- **Repository:** https://github.com/tscheepers/Wikipedia-Summary-Dataset
### Dataset Summary
This is a dataset that can be used for research into machine learning and natural language processing. It contains all titles and summaries (or introductions) of English Wikipedia articles, extracted in September of 2017.
The dataset is different from the regular Wikipedia dump and different from the datasets that can be created by gensim because ours contains the extracted summaries and not the entire unprocessed page body. This could be useful if one wants to use the smaller, more concise, and more definitional summaries in their research. Or if one just wants to use a smaller but still diverse dataset for efficient training with resource constraints.
A summary or introduction of an article is everything starting from the page title up to the content outline.
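As a hedged loading sketch, the title/summary pairs can be pulled with the standard `datasets` API; the repo id matches this card, but the split and column names below are assumptions, since they are not documented here.

```python
# Assumed usage: load the Wikipedia title/summary pairs with `datasets`.
from datasets import load_dataset

ds = load_dataset("jordiclive/wikipedia-summary-dataset", split="train")
print(ds[0])  # expected: an article title together with its summary/introduction
```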
### Citation Information
```
@mastersthesis{scheepers2017compositionality,
author = {Scheepers, Thijs},
title = {Improving the Compositionality of Word Embeddings},
school = {Universiteit van Amsterdam},
year = {2017},
month = {11},
address = {Science Park 904, Amsterdam, Netherlands}
}
``` | jordiclive/wikipedia-summary-dataset | [
"region:us"
] | 2023-01-12T20:53:47+00:00 | {} | 2023-02-05T16:15:04+00:00 | [] | [] | TAGS
#region-us
|
## Dataset Description
- Repository: URL
### Dataset Summary
This is a dataset that can be used for research into machine learning and natural language processing. It contains all titles and summaries (or introductions) of English Wikipedia articles, extracted in September of 2017.
The dataset is different from the regular Wikipedia dump and different from the datasets that can be created by gensim because ours contains the extracted summaries and not the entire unprocessed page body. This could be useful if one wants to use the smaller, more concise, and more definitional summaries in their research. Or if one just wants to use a smaller but still diverse dataset for efficient training with resource constraints.
A summary or introduction of an article is everything starting from the page title up to the content outline.
| [
"## Dataset Description\n\n- Repository: URL",
"### Dataset Summary\n\nThis is a dataset that can be used for research into machine learning and natural language processing. It contains all titles and summaries (or introductions) of English Wikipedia articles, extracted in September of 2017.\n\nThe dataset is different from the regular Wikipedia dump and different from the datasets that can be created by gensim because ours contains the extracted summaries and not the entire unprocessed page body. This could be useful if one wants to use the smaller, more concise, and more definitional summaries in their research. Or if one just wants to use a smaller but still diverse dataset for efficient training with resource constraints.\n\nA summary or introduction of an article is everything starting from the page title up to the content outline."
] | [
"TAGS\n#region-us \n",
"## Dataset Description\n\n- Repository: URL",
"### Dataset Summary\n\nThis is a dataset that can be used for research into machine learning and natural language processing. It contains all titles and summaries (or introductions) of English Wikipedia articles, extracted in September of 2017.\n\nThe dataset is different from the regular Wikipedia dump and different from the datasets that can be created by gensim because ours contains the extracted summaries and not the entire unprocessed page body. This could be useful if one wants to use the smaller, more concise, and more definitional summaries in their research. Or if one just wants to use a smaller but still diverse dataset for efficient training with resource constraints.\n\nA summary or introduction of an article is everything starting from the page title up to the content outline."
] |
57eb2a126dfb7d6637b724d9e4930873af99463b | # Dataset Card for "bookcorpus_compact_1024_shard6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | saibo/bookcorpus_compact_1024_shard6_of_10 | [
"region:us"
] | 2023-01-12T21:22:28+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "concept_with_offset", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 769286180, "num_examples": 61605}], "download_size": 387348752, "dataset_size": 769286180}} | 2023-01-12T21:23:04+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "bookcorpus_compact_1024_shard6"
More Information needed | [
"# Dataset Card for \"bookcorpus_compact_1024_shard6\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"bookcorpus_compact_1024_shard6\"\n\nMore Information needed"
] |
d8d4d9c0515278f4847e1081ea6571b4ec8ae317 | # Dataset Card for "magic_cards"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | andrewljohnson/magic_cards | [
"region:us"
] | 2023-01-12T21:54:54+00:00 | {"dataset_info": {"features": [{"name": "pixel_values", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 137488238.0, "num_examples": 102}], "download_size": 133768507, "dataset_size": 137488238.0}} | 2023-01-17T23:00:18+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "magic_cards"
More Information needed | [
"# Dataset Card for \"magic_cards\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"magic_cards\"\n\nMore Information needed"
] |
0f163d5662816ecef645ebc251a309ff8e1b79f5 | # Dataset Card for "banking77_MiniLM_embeddings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | argilla/banking77_MiniLM_embeddings | [
"region:us"
] | 2023-01-12T22:54:57+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "activate_my_card", "1": "age_limit", "2": "apple_pay_or_google_pay", "3": "atm_support", "4": "automatic_top_up", "5": "balance_not_updated_after_bank_transfer", "6": "balance_not_updated_after_cheque_or_cash_deposit", "7": "beneficiary_not_allowed", "8": "cancel_transfer", "9": "card_about_to_expire", "10": "card_acceptance", "11": "card_arrival", "12": "card_delivery_estimate", "13": "card_linking", "14": "card_not_working", "15": "card_payment_fee_charged", "16": "card_payment_not_recognised", "17": "card_payment_wrong_exchange_rate", "18": "card_swallowed", "19": "cash_withdrawal_charge", "20": "cash_withdrawal_not_recognised", "21": "change_pin", "22": "compromised_card", "23": "contactless_not_working", "24": "country_support", "25": "declined_card_payment", "26": "declined_cash_withdrawal", "27": "declined_transfer", "28": "direct_debit_payment_not_recognised", "29": "disposable_card_limits", "30": "edit_personal_details", "31": "exchange_charge", "32": "exchange_rate", "33": "exchange_via_app", "34": "extra_charge_on_statement", "35": "failed_transfer", "36": "fiat_currency_support", "37": "get_disposable_virtual_card", "38": "get_physical_card", "39": "getting_spare_card", "40": "getting_virtual_card", "41": "lost_or_stolen_card", "42": "lost_or_stolen_phone", "43": "order_physical_card", "44": "passcode_forgotten", "45": "pending_card_payment", "46": "pending_cash_withdrawal", "47": "pending_top_up", "48": "pending_transfer", "49": "pin_blocked", "50": "receiving_money", "51": "Refund_not_showing_up", "52": "request_refund", "53": "reverted_card_payment?", "54": "supported_cards_and_currencies", "55": "terminate_account", "56": "top_up_by_bank_transfer_charge", "57": "top_up_by_card_charge", "58": "top_up_by_cash_or_cheque", "59": "top_up_failed", "60": "top_up_limits", "61": "top_up_reverted", "62": "topping_up_by_card", "63": "transaction_charged_twice", "64": "transfer_fee_charged", "65": "transfer_into_account", "66": "transfer_not_received_by_recipient", "67": "transfer_timing", "68": "unable_to_verify_identity", "69": "verify_my_identity", "70": "verify_source_of_funds", "71": "verify_top_up", "72": "virtual_card_not_working", "73": "visa_or_mastercard", "74": "why_verify_identity", "75": "wrong_amount_of_cash_received", "76": "wrong_exchange_rate_for_cash_withdrawal"}}}}, {"name": "vectors", "struct": [{"name": "mini-lm-sentence-transformers", "sequence": "float64"}]}], "splits": [{"name": "test", "num_bytes": 9678090, "num_examples": 3080}], "download_size": 8319885, "dataset_size": 9678090}} | 2023-01-12T22:55:15+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "banking77_MiniLM_embeddings"
More Information needed | [
"# Dataset Card for \"banking77_MiniLM_embeddings\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"banking77_MiniLM_embeddings\"\n\nMore Information needed"
] |
3af6cf2597934f8cf5d798e7a473b69ba454e18d | # Dataset Card for "Jan2023Abstracts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Corran/Jan2023Abstracts | [
"region:us"
] | 2023-01-13T01:45:31+00:00 | {"dataset_info": {"features": [{"name": "corpusid", "dtype": "int64"}, {"name": "openaccessinfo", "struct": [{"name": "externalids", "struct": [{"name": "ACL", "dtype": "string"}, {"name": "ArXiv", "dtype": "string"}, {"name": "DOI", "dtype": "string"}, {"name": "MAG", "dtype": "string"}, {"name": "PubMedCentral", "dtype": "string"}]}, {"name": "license", "dtype": "string"}, {"name": "status", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "abstract", "dtype": "string"}, {"name": "updated", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 72173232090, "num_examples": 55324451}], "download_size": 43689807417, "dataset_size": 72173232090}} | 2023-01-13T02:11:23+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "Jan2023Abstracts"
More Information needed | [
"# Dataset Card for \"Jan2023Abstracts\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"Jan2023Abstracts\"\n\nMore Information needed"
] |
4b5332a8771b3f6388f8ecc51c2b8ade4be31c73 | SMS Spam Multilingual Collection Dataset
Collection of Multilingual SMS messages tagged as spam or legitimate
About Dataset
Context
The SMS Spam Collection is a set of SMS messages tagged for SMS spam research. It originally contained a single set of 5,574 English messages, each tagged as ham (legitimate) or spam, which were later machine-translated into Hindi, German and French.
The text has been further translated into Spanish, Chinese, Arabic, Bengali, Russian, Portuguese, Indonesian, Urdu, Japanese, Punjabi, Javanese, Turkish, Korean, Marathi, Ukrainian, Swedish, and Norwegian using M2M100_418M, a multilingual encoder-decoder (seq-to-seq) model trained for many-to-many multilingual translation, created by Facebook AI. A translation sketch is shown below.
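A minimal sketch of this translation step with the public `facebook/m2m100_418M` checkpoint; the exact generation settings used to build the dataset are assumptions here.

```python
# Hedged sketch: machine-translating an English SMS with M2M100-418M.
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")

def translate(text, src_lang="en", tgt_lang="es"):
    tokenizer.src_lang = src_lang
    encoded = tokenizer(text, return_tensors="pt")
    generated = model.generate(
        **encoded, forced_bos_token_id=tokenizer.get_lang_id(tgt_lang)
    )
    return tokenizer.batch_decode(generated, skip_special_tokens=True)[0]

print(translate("Congratulations! You have won a free prize. Reply WIN to claim."))
```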
Content
The augmented dataset contains the multilingual text and the corresponding labels:
- ham: non-spam text
- spam: spam text
Acknowledgments
The original English text was taken from https://www.kaggle.com/uciml/sms-spam-collection-dataset
Hindi, German and French were taken from https://www.kaggle.com/datasets/rajnathpatel/multilingual-spam-data | dbarbedillo/SMS_Spam_Multilingual_Collection_Dataset | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"language:zh",
"language:es",
"language:hi",
"language:fr",
"language:de",
"language:ar",
"language:bn",
"language:ru",
"language:pt",
"language:id",
"language:ur",
"language:ja",
"language:pa",
"language:jv",
"language:tr",
"language:ko",
"language:mr",
"language:uk",
"language:sv",
"language:no",
"license:gpl",
"region:us"
] | 2023-01-13T02:13:03+00:00 | {"language": ["en", "zh", "es", "hi", "fr", "de", "ar", "bn", "ru", "pt", "id", "ur", "ja", "pa", "jv", "tr", "ko", "mr", "uk", "sv", "no"], "license": "gpl", "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"]} | 2023-01-13T03:07:17+00:00 | [] | [
"en",
"zh",
"es",
"hi",
"fr",
"de",
"ar",
"bn",
"ru",
"pt",
"id",
"ur",
"ja",
"pa",
"jv",
"tr",
"ko",
"mr",
"uk",
"sv",
"no"
] | TAGS
#task_categories-text-classification #size_categories-1K<n<10K #language-English #language-Chinese #language-Spanish #language-Hindi #language-French #language-German #language-Arabic #language-Bengali #language-Russian #language-Portuguese #language-Indonesian #language-Urdu #language-Japanese #language-Panjabi #language-Javanese #language-Turkish #language-Korean #language-Marathi #language-Ukrainian #language-Swedish #language-Norwegian #license-gpl #region-us
| SMS Spam Multilingual Collection Dataset
Collection of Multilingual SMS messages tagged as spam or legitimate
About Dataset
Context
The SMS Spam Collection is a set of SMS messages tagged for SMS spam research. It originally contained a single set of 5,574 English messages, each tagged as ham (legitimate) or spam, which were later machine-translated into Hindi, German and French.
The text has been further translated into Spanish, Chinese, Arabic, Bengali, Russian, Portuguese, Indonesian, Urdu, Japanese, Punjabi, Javanese, Turkish, Korean, Marathi, Ukrainian, Swedish, and Norwegian using M2M100_418M, a multilingual encoder-decoder (seq-to-seq) model trained for many-to-many multilingual translation, created by Facebook AI.
Content
The augmented dataset contains the multilingual text and the corresponding labels:
- ham: non-spam text
- spam: spam text
Acknowledgments
The original English text was taken from URL
Hindi, German and French were taken from URL | [] | [
"TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #language-English #language-Chinese #language-Spanish #language-Hindi #language-French #language-German #language-Arabic #language-Bengali #language-Russian #language-Portuguese #language-Indonesian #language-Urdu #language-Japanese #language-Panjabi #language-Javanese #language-Turkish #language-Korean #language-Marathi #language-Ukrainian #language-Swedish #language-Norwegian #license-gpl #region-us \n"
] |
dabc787146a866488b1df9d0493bbff2169875d7 |
# Dataset Card for Wikipedia
This repo is a fork of the original Hugging Face Wikipedia repo [here](https://huggingface.co/datasets/wikipedia).
The difference is that this fork does away with the need for `apache-beam`, and this fork is very fast if you have a lot of CPUs on your machine.
It will use all CPUs available to create a clean Wikipedia pretraining dataset. It takes less than an hour to process all of English wikipedia on a GCP n1-standard-96.
This fork is also used in the [OLM Project](https://github.com/huggingface/olm-datasets) to pull and process up-to-date wikipedia snapshots.
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://dumps.wikimedia.org](https://dumps.wikimedia.org)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
Wikipedia dataset containing cleaned articles of all languages.
The datasets are built from the Wikipedia dump
(https://dumps.wikimedia.org/) with one split per language. Each example
contains the content of one full Wikipedia article with cleaning to strip
markdown and unwanted sections (references, etc.).
The articles are parsed using the ``mwparserfromhell`` tool, and we use ``multiprocess`` for parallelization.
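As a hedged illustration of what that parsing step does (this is not the repo's full cleaning code, which also strips references and other unwanted sections), `mwparserfromhell` turns wiki markup into plain text like this:

```python
# Minimal sketch: stripping wiki markup with mwparserfromhell.
import mwparserfromhell

raw = "'''April''' is the [[month|fourth]] month of the year."
wikicode = mwparserfromhell.parse(raw)
print(wikicode.strip_code())  # -> "April is the fourth month of the year."
```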
To load this dataset you need to install these first:
```
pip install mwparserfromhell==0.6.4 multiprocess==0.70.13
```
Then, you can load any subset of Wikipedia per language and per date this way:
```python
from datasets import load_dataset
load_dataset("olm/wikipedia", language="en", date="20220920")
```
You can find the full list of languages and dates [here](https://dumps.wikimedia.org/backup-index.html).
### Supported Tasks and Leaderboards
The dataset is generally used for Language Modeling.
### Languages
You can find the list of languages [here](https://meta.wikimedia.org/wiki/List_of_Wikipedias).
## Dataset Structure
### Data Instances
An example looks as follows:
```
{'id': '1',
'url': 'https://simple.wikipedia.org/wiki/April',
'title': 'April',
'text': 'April is the fourth month...'
}
```
### Data Fields
The data fields are the same among all configurations:
- `id` (`str`): ID of the article.
- `url` (`str`): URL of the article.
- `title` (`str`): Title of the article.
- `text` (`str`): Text content of the article.
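A minimal usage sketch of these fields, assuming the standard single `train` split produced by the loader:

```python
# Hypothetical inspection of the fields above on a small language subset.
from datasets import load_dataset

ds = load_dataset("olm/wikipedia", language="simple", date="20220920")["train"]
article = ds[0]
print(article["id"], article["url"], article["title"])
print(article["text"][:200])  # first 200 characters of the article body
```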
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
Most of Wikipedia's text and many of its images are co-licensed under the
[Creative Commons Attribution-ShareAlike 3.0 Unported License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_Creative_Commons_Attribution-ShareAlike_3.0_Unported_License)
(CC BY-SA) and the [GNU Free Documentation License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_the_GNU_Free_Documentation_License)
(GFDL) (unversioned, with no invariant sections, front-cover texts, or back-cover texts).
Some text has been imported only under CC BY-SA and CC BY-SA-compatible license and cannot be reused under GFDL; such
text will be identified on the page footer, in the page history, or on the discussion page of the article that utilizes
the text.
### Citation Information
```
@ONLINE{wikidump,
author = "Wikimedia Foundation",
title = "Wikimedia Downloads",
url = "https://dumps.wikimedia.org"
}
```
| reyoung/wikipedia | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:n<1K",
"size_categories:1K<n<10K",
"size_categories:10K<n<100K",
"size_categories:100K<n<1M",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:aa",
"language:ab",
"language:ace",
"language:af",
"language:ak",
"language:als",
"language:am",
"language:an",
"language:ang",
"language:ar",
"language:arc",
"language:arz",
"language:as",
"language:ast",
"language:atj",
"language:av",
"language:ay",
"language:az",
"language:azb",
"language:ba",
"language:bar",
"language:bcl",
"language:be",
"language:bg",
"language:bh",
"language:bi",
"language:bjn",
"language:bm",
"language:bn",
"language:bo",
"language:bpy",
"language:br",
"language:bs",
"language:bug",
"language:bxr",
"language:ca",
"language:cbk",
"language:cdo",
"language:ce",
"language:ceb",
"language:ch",
"language:cho",
"language:chr",
"language:chy",
"language:ckb",
"language:co",
"language:cr",
"language:crh",
"language:cs",
"language:csb",
"language:cu",
"language:cv",
"language:cy",
"language:da",
"language:de",
"language:din",
"language:diq",
"language:dsb",
"language:dty",
"language:dv",
"language:dz",
"language:ee",
"language:el",
"language:eml",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:ext",
"language:fa",
"language:ff",
"language:fi",
"language:fj",
"language:fo",
"language:fr",
"language:frp",
"language:frr",
"language:fur",
"language:fy",
"language:ga",
"language:gag",
"language:gan",
"language:gd",
"language:gl",
"language:glk",
"language:gn",
"language:gom",
"language:gor",
"language:got",
"language:gu",
"language:gv",
"language:ha",
"language:hak",
"language:haw",
"language:he",
"language:hi",
"language:hif",
"language:ho",
"language:hr",
"language:hsb",
"language:ht",
"language:hu",
"language:hy",
"language:ia",
"language:id",
"language:ie",
"language:ig",
"language:ii",
"language:ik",
"language:ilo",
"language:inh",
"language:io",
"language:is",
"language:it",
"language:iu",
"language:ja",
"language:jam",
"language:jbo",
"language:jv",
"language:ka",
"language:kaa",
"language:kab",
"language:kbd",
"language:kbp",
"language:kg",
"language:ki",
"language:kj",
"language:kk",
"language:kl",
"language:km",
"language:kn",
"language:ko",
"language:koi",
"language:krc",
"language:ks",
"language:ksh",
"language:ku",
"language:kv",
"language:kw",
"language:ky",
"language:la",
"language:lad",
"language:lb",
"language:lbe",
"language:lez",
"language:lfn",
"language:lg",
"language:li",
"language:lij",
"language:lmo",
"language:ln",
"language:lo",
"language:lrc",
"language:lt",
"language:ltg",
"language:lv",
"language:lzh",
"language:mai",
"language:mdf",
"language:mg",
"language:mh",
"language:mhr",
"language:mi",
"language:min",
"language:mk",
"language:ml",
"language:mn",
"language:mr",
"language:mrj",
"language:ms",
"language:mt",
"language:mus",
"language:mwl",
"language:my",
"language:myv",
"language:mzn",
"language:na",
"language:nah",
"language:nan",
"language:nap",
"language:nds",
"language:ne",
"language:new",
"language:ng",
"language:nl",
"language:nn",
"language:no",
"language:nov",
"language:nrf",
"language:nso",
"language:nv",
"language:ny",
"language:oc",
"language:olo",
"language:om",
"language:or",
"language:os",
"language:pa",
"language:pag",
"language:pam",
"language:pap",
"language:pcd",
"language:pdc",
"language:pfl",
"language:pi",
"language:pih",
"language:pl",
"language:pms",
"language:pnb",
"language:pnt",
"language:ps",
"language:pt",
"language:qu",
"language:rm",
"language:rmy",
"language:rn",
"language:ro",
"language:ru",
"language:rue",
"language:rup",
"language:rw",
"language:sa",
"language:sah",
"language:sat",
"language:sc",
"language:scn",
"language:sco",
"language:sd",
"language:se",
"language:sg",
"language:sgs",
"language:sh",
"language:si",
"language:sk",
"language:sl",
"language:sm",
"language:sn",
"language:so",
"language:sq",
"language:sr",
"language:srn",
"language:ss",
"language:st",
"language:stq",
"language:su",
"language:sv",
"language:sw",
"language:szl",
"language:ta",
"language:tcy",
"language:tdt",
"language:te",
"language:tg",
"language:th",
"language:ti",
"language:tk",
"language:tl",
"language:tn",
"language:to",
"language:tpi",
"language:tr",
"language:ts",
"language:tt",
"language:tum",
"language:tw",
"language:ty",
"language:tyv",
"language:udm",
"language:ug",
"language:uk",
"language:ur",
"language:uz",
"language:ve",
"language:vec",
"language:vep",
"language:vi",
"language:vls",
"language:vo",
"language:vro",
"language:wa",
"language:war",
"language:wo",
"language:wuu",
"language:xal",
"language:xh",
"language:xmf",
"language:yi",
"language:yo",
"language:yue",
"language:za",
"language:zea",
"language:zh",
"language:zu",
"license:cc-by-sa-3.0",
"license:gfdl",
"region:us"
] | 2023-01-13T03:38:06+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["crowdsourced"], "language": ["aa", "ab", "ace", "af", "ak", "als", "am", "an", "ang", "ar", "arc", "arz", "as", "ast", "atj", "av", "ay", "az", "azb", "ba", "bar", "bcl", "be", "bg", "bh", "bi", "bjn", "bm", "bn", "bo", "bpy", "br", "bs", "bug", "bxr", "ca", "cbk", "cdo", "ce", "ceb", "ch", "cho", "chr", "chy", "ckb", "co", "cr", "crh", "cs", "csb", "cu", "cv", "cy", "da", "de", "din", "diq", "dsb", "dty", "dv", "dz", "ee", "el", "eml", "en", "eo", "es", "et", "eu", "ext", "fa", "ff", "fi", "fj", "fo", "fr", "frp", "frr", "fur", "fy", "ga", "gag", "gan", "gd", "gl", "glk", "gn", "gom", "gor", "got", "gu", "gv", "ha", "hak", "haw", "he", "hi", "hif", "ho", "hr", "hsb", "ht", "hu", "hy", "ia", "id", "ie", "ig", "ii", "ik", "ilo", "inh", "io", "is", "it", "iu", "ja", "jam", "jbo", "jv", "ka", "kaa", "kab", "kbd", "kbp", "kg", "ki", "kj", "kk", "kl", "km", "kn", "ko", "koi", "krc", "ks", "ksh", "ku", "kv", "kw", "ky", "la", "lad", "lb", "lbe", "lez", "lfn", "lg", "li", "lij", "lmo", "ln", "lo", "lrc", "lt", "ltg", "lv", "lzh", "mai", "mdf", "mg", "mh", "mhr", "mi", "min", "mk", "ml", "mn", "mr", "mrj", "ms", "mt", "mus", "mwl", "my", "myv", "mzn", "na", "nah", "nan", "nap", "nds", "ne", "new", "ng", "nl", "nn", "no", "nov", "nrf", "nso", "nv", "ny", "oc", "olo", "om", "or", "os", "pa", "pag", "pam", "pap", "pcd", "pdc", "pfl", "pi", "pih", "pl", "pms", "pnb", "pnt", "ps", "pt", "qu", "rm", "rmy", "rn", "ro", "ru", "rue", "rup", "rw", "sa", "sah", "sat", "sc", "scn", "sco", "sd", "se", "sg", "sgs", "sh", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "srn", "ss", "st", "stq", "su", "sv", "sw", "szl", "ta", "tcy", "tdt", "te", "tg", "th", "ti", "tk", "tl", "tn", "to", "tpi", "tr", "ts", "tt", "tum", "tw", "ty", "tyv", "udm", "ug", "uk", "ur", "uz", "ve", "vec", "vep", "vi", "vls", "vo", "vro", "wa", "war", "wo", "wuu", "xal", "xh", "xmf", "yi", "yo", "yue", "za", "zea", "zh", "zu"], "license": ["cc-by-sa-3.0", "gfdl"], "multilinguality": ["multilingual"], "size_categories": ["n<1K", "1K<n<10K", "10K<n<100K", "100K<n<1M", "1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "pretty_name": "Wikipedia", "language_bcp47": ["nds-nl"], "configs": ["20220301.aa", "20220301.ab", "20220301.ace", "20220301.ady", "20220301.af", "20220301.ak", "20220301.als", "20220301.am", "20220301.an", "20220301.ang", "20220301.ar", "20220301.arc", "20220301.arz", "20220301.as", "20220301.ast", "20220301.atj", "20220301.av", "20220301.ay", "20220301.az", "20220301.azb", "20220301.ba", "20220301.bar", "20220301.bat-smg", "20220301.bcl", "20220301.be", "20220301.be-x-old", "20220301.bg", "20220301.bh", "20220301.bi", "20220301.bjn", "20220301.bm", "20220301.bn", "20220301.bo", "20220301.bpy", "20220301.br", "20220301.bs", "20220301.bug", "20220301.bxr", "20220301.ca", "20220301.cbk-zam", "20220301.cdo", "20220301.ce", "20220301.ceb", "20220301.ch", "20220301.cho", "20220301.chr", "20220301.chy", "20220301.ckb", "20220301.co", "20220301.cr", "20220301.crh", "20220301.cs", "20220301.csb", "20220301.cu", "20220301.cv", "20220301.cy", "20220301.da", "20220301.de", "20220301.din", "20220301.diq", "20220301.dsb", "20220301.dty", "20220301.dv", "20220301.dz", "20220301.ee", "20220301.el", "20220301.eml", "20220301.en", "20220301.eo", "20220301.es", "20220301.et", "20220301.eu", "20220301.ext", "20220301.fa", "20220301.ff", 
"20220301.fi", "20220301.fiu-vro", "20220301.fj", "20220301.fo", "20220301.fr", "20220301.frp", "20220301.frr", "20220301.fur", "20220301.fy", "20220301.ga", "20220301.gag", "20220301.gan", "20220301.gd", "20220301.gl", "20220301.glk", "20220301.gn", "20220301.gom", "20220301.gor", "20220301.got", "20220301.gu", "20220301.gv", "20220301.ha", "20220301.hak", "20220301.haw", "20220301.he", "20220301.hi", "20220301.hif", "20220301.ho", "20220301.hr", "20220301.hsb", "20220301.ht", "20220301.hu", "20220301.hy", "20220301.ia", "20220301.id", "20220301.ie", "20220301.ig", "20220301.ii", "20220301.ik", "20220301.ilo", "20220301.inh", "20220301.io", "20220301.is", "20220301.it", "20220301.iu", "20220301.ja", "20220301.jam", "20220301.jbo", "20220301.jv", "20220301.ka", "20220301.kaa", "20220301.kab", "20220301.kbd", "20220301.kbp", "20220301.kg", "20220301.ki", "20220301.kj", "20220301.kk", "20220301.kl", "20220301.km", "20220301.kn", "20220301.ko", "20220301.koi", "20220301.krc", "20220301.ks", "20220301.ksh", "20220301.ku", "20220301.kv", "20220301.kw", "20220301.ky", "20220301.la", "20220301.lad", "20220301.lb", "20220301.lbe", "20220301.lez", "20220301.lfn", "20220301.lg", "20220301.li", "20220301.lij", "20220301.lmo", "20220301.ln", "20220301.lo", "20220301.lrc", "20220301.lt", "20220301.ltg", "20220301.lv", "20220301.mai", "20220301.map-bms", "20220301.mdf", "20220301.mg", "20220301.mh", "20220301.mhr", "20220301.mi", "20220301.min", "20220301.mk", "20220301.ml", "20220301.mn", "20220301.mr", "20220301.mrj", "20220301.ms", "20220301.mt", "20220301.mus", "20220301.mwl", "20220301.my", "20220301.myv", "20220301.mzn", "20220301.na", "20220301.nah", "20220301.nap", "20220301.nds", "20220301.nds-nl", "20220301.ne", "20220301.new", "20220301.ng", "20220301.nl", "20220301.nn", "20220301.no", "20220301.nov", "20220301.nrm", "20220301.nso", "20220301.nv", "20220301.ny", "20220301.oc", "20220301.olo", "20220301.om", "20220301.or", "20220301.os", "20220301.pa", "20220301.pag", "20220301.pam", "20220301.pap", "20220301.pcd", "20220301.pdc", "20220301.pfl", "20220301.pi", "20220301.pih", "20220301.pl", "20220301.pms", "20220301.pnb", "20220301.pnt", "20220301.ps", "20220301.pt", "20220301.qu", "20220301.rm", "20220301.rmy", "20220301.rn", "20220301.ro", "20220301.roa-rup", "20220301.roa-tara", "20220301.ru", "20220301.rue", "20220301.rw", "20220301.sa", "20220301.sah", "20220301.sat", "20220301.sc", "20220301.scn", "20220301.sco", "20220301.sd", "20220301.se", "20220301.sg", "20220301.sh", "20220301.si", "20220301.simple", "20220301.sk", "20220301.sl", "20220301.sm", "20220301.sn", "20220301.so", "20220301.sq", "20220301.sr", "20220301.srn", "20220301.ss", "20220301.st", "20220301.stq", "20220301.su", "20220301.sv", "20220301.sw", "20220301.szl", "20220301.ta", "20220301.tcy", "20220301.te", "20220301.tet", "20220301.tg", "20220301.th", "20220301.ti", "20220301.tk", "20220301.tl", "20220301.tn", "20220301.to", "20220301.tpi", "20220301.tr", "20220301.ts", "20220301.tt", "20220301.tum", "20220301.tw", "20220301.ty", "20220301.tyv", "20220301.udm", "20220301.ug", "20220301.uk", "20220301.ur", "20220301.uz", "20220301.ve", "20220301.vec", "20220301.vep", "20220301.vi", "20220301.vls", "20220301.vo", "20220301.wa", "20220301.war", "20220301.wo", "20220301.wuu", "20220301.xal", "20220301.xh", "20220301.xmf", "20220301.yi", "20220301.yo", "20220301.za", "20220301.zea", "20220301.zh", "20220301.zh-classical", "20220301.zh-min-nan", "20220301.zh-yue", "20220301.zu"]} | 2023-01-13T08:42:26+00:00 | [] | [
"aa",
"ab",
"ace",
"af",
"ak",
"als",
"am",
"an",
"ang",
"ar",
"arc",
"arz",
"as",
"ast",
"atj",
"av",
"ay",
"az",
"azb",
"ba",
"bar",
"bcl",
"be",
"bg",
"bh",
"bi",
"bjn",
"bm",
"bn",
"bo",
"bpy",
"br",
"bs",
"bug",
"bxr",
"ca",
"cbk",
"cdo",
"ce",
"ceb",
"ch",
"cho",
"chr",
"chy",
"ckb",
"co",
"cr",
"crh",
"cs",
"csb",
"cu",
"cv",
"cy",
"da",
"de",
"din",
"diq",
"dsb",
"dty",
"dv",
"dz",
"ee",
"el",
"eml",
"en",
"eo",
"es",
"et",
"eu",
"ext",
"fa",
"ff",
"fi",
"fj",
"fo",
"fr",
"frp",
"frr",
"fur",
"fy",
"ga",
"gag",
"gan",
"gd",
"gl",
"glk",
"gn",
"gom",
"gor",
"got",
"gu",
"gv",
"ha",
"hak",
"haw",
"he",
"hi",
"hif",
"ho",
"hr",
"hsb",
"ht",
"hu",
"hy",
"ia",
"id",
"ie",
"ig",
"ii",
"ik",
"ilo",
"inh",
"io",
"is",
"it",
"iu",
"ja",
"jam",
"jbo",
"jv",
"ka",
"kaa",
"kab",
"kbd",
"kbp",
"kg",
"ki",
"kj",
"kk",
"kl",
"km",
"kn",
"ko",
"koi",
"krc",
"ks",
"ksh",
"ku",
"kv",
"kw",
"ky",
"la",
"lad",
"lb",
"lbe",
"lez",
"lfn",
"lg",
"li",
"lij",
"lmo",
"ln",
"lo",
"lrc",
"lt",
"ltg",
"lv",
"lzh",
"mai",
"mdf",
"mg",
"mh",
"mhr",
"mi",
"min",
"mk",
"ml",
"mn",
"mr",
"mrj",
"ms",
"mt",
"mus",
"mwl",
"my",
"myv",
"mzn",
"na",
"nah",
"nan",
"nap",
"nds",
"ne",
"new",
"ng",
"nl",
"nn",
"no",
"nov",
"nrf",
"nso",
"nv",
"ny",
"oc",
"olo",
"om",
"or",
"os",
"pa",
"pag",
"pam",
"pap",
"pcd",
"pdc",
"pfl",
"pi",
"pih",
"pl",
"pms",
"pnb",
"pnt",
"ps",
"pt",
"qu",
"rm",
"rmy",
"rn",
"ro",
"ru",
"rue",
"rup",
"rw",
"sa",
"sah",
"sat",
"sc",
"scn",
"sco",
"sd",
"se",
"sg",
"sgs",
"sh",
"si",
"sk",
"sl",
"sm",
"sn",
"so",
"sq",
"sr",
"srn",
"ss",
"st",
"stq",
"su",
"sv",
"sw",
"szl",
"ta",
"tcy",
"tdt",
"te",
"tg",
"th",
"ti",
"tk",
"tl",
"tn",
"to",
"tpi",
"tr",
"ts",
"tt",
"tum",
"tw",
"ty",
"tyv",
"udm",
"ug",
"uk",
"ur",
"uz",
"ve",
"vec",
"vep",
"vi",
"vls",
"vo",
"vro",
"wa",
"war",
"wo",
"wuu",
"xal",
"xh",
"xmf",
"yi",
"yo",
"yue",
"za",
"zea",
"zh",
"zu"
] | TAGS
#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-crowdsourced #multilinguality-multilingual #size_categories-n<1K #size_categories-1K<n<10K #size_categories-10K<n<100K #size_categories-100K<n<1M #size_categories-1M<n<10M #source_datasets-original #language-Afar #language-Abkhazian #language-Achinese #language-Afrikaans #language-Akan #language-Tosk Albanian #language-Amharic #language-Aragonese #language-Old English (ca. 450-1100) #language-Arabic #language-Official Aramaic (700-300 BCE) #language-Egyptian Arabic #language-Assamese #language-Asturian #language-Atikamekw #language-Avaric #language-Aymara #language-Azerbaijani #language-South Azerbaijani #language-Bashkir #language-Bavarian #language-Central Bikol #language-Belarusian #language-Bulgarian #language-bh #language-Bislama #language-Banjar #language-Bambara #language-Bengali #language-Tibetan #language-Bishnupriya #language-Breton #language-Bosnian #language-Buginese #language-Russia Buriat #language-Catalan #language-Chavacano #language-Min Dong Chinese #language-Chechen #language-Cebuano #language-Chamorro #language-Choctaw #language-Cherokee #language-Cheyenne #language-Central Kurdish #language-Corsican #language-Cree #language-Crimean Tatar #language-Czech #language-Kashubian #language-Church Slavic #language-Chuvash #language-Welsh #language-Danish #language-German #language-Dinka #language-Dimli (individual language) #language-Lower Sorbian #language-Dotyali #language-Dhivehi #language-Dzongkha #language-Ewe #language-Modern Greek (1453-) #language-Emiliano-Romagnolo #language-English #language-Esperanto #language-Spanish #language-Estonian #language-Basque #language-Extremaduran #language-Persian #language-Fulah #language-Finnish #language-Fijian #language-Faroese #language-French #language-Arpitan #language-Northern Frisian #language-Friulian #language-Western Frisian #language-Irish #language-Gagauz #language-Gan Chinese #language-Scottish Gaelic #language-Galician #language-Gilaki #language-Guarani #language-Goan Konkani #language-Gorontalo #language-Gothic #language-Gujarati #language-Manx #language-Hausa #language-Hakka Chinese #language-Hawaiian #language-Hebrew #language-Hindi #language-Fiji Hindi #language-Hiri Motu #language-Croatian #language-Upper Sorbian #language-Haitian #language-Hungarian #language-Armenian #language-Interlingua (International Auxiliary Language Association) #language-Indonesian #language-Interlingue #language-Igbo #language-Sichuan Yi #language-Inupiaq #language-Iloko #language-Ingush #language-Ido #language-Icelandic #language-Italian #language-Inuktitut #language-Japanese #language-Jamaican Creole English #language-Lojban #language-Javanese #language-Georgian #language-Kara-Kalpak #language-Kabyle #language-Kabardian #language-Kabiyè #language-Kongo #language-Kikuyu #language-Kuanyama #language-Kazakh #language-Kalaallisut #language-Khmer #language-Kannada #language-Korean #language-Komi-Permyak #language-Karachay-Balkar #language-Kashmiri #language-Kölsch #language-Kurdish #language-Komi #language-Cornish #language-Kirghiz #language-Latin #language-Ladino #language-Luxembourgish #language-Lak #language-Lezghian #language-Lingua Franca Nova #language-Ganda #language-Limburgan #language-Ligurian #language-Lombard #language-Lingala #language-Lao #language-Northern Luri #language-Lithuanian #language-Latgalian #language-Latvian #language-Literary Chinese 
#language-Maithili #language-Moksha #language-Malagasy #language-Marshallese #language-Eastern Mari #language-Maori #language-Minangkabau #language-Macedonian #language-Malayalam #language-Mongolian #language-Marathi #language-Western Mari #language-Malay (macrolanguage) #language-Maltese #language-Creek #language-Mirandese #language-Burmese #language-Erzya #language-Mazanderani #language-Nauru #language-nah #language-Min Nan Chinese #language-Neapolitan #language-Low German #language-Nepali (macrolanguage) #language-Newari #language-Ndonga #language-Dutch #language-Norwegian Nynorsk #language-Norwegian #language-Novial #language-Jèrriais #language-Pedi #language-Navajo #language-Nyanja #language-Occitan (post 1500) #language-Livvi #language-Oromo #language-Oriya (macrolanguage) #language-Ossetian #language-Panjabi #language-Pangasinan #language-Pampanga #language-Papiamento #language-Picard #language-Pennsylvania German #language-Pfaelzisch #language-Pali #language-Pitcairn-Norfolk #language-Polish #language-Piemontese #language-Western Panjabi #language-Pontic #language-Pushto #language-Portuguese #language-Quechua #language-Romansh #language-Vlax Romani #language-Rundi #language-Romanian #language-Russian #language-Rusyn #language-Macedo-Romanian #language-Kinyarwanda #language-Sanskrit #language-Yakut #language-Santali #language-Sardinian #language-Sicilian #language-Scots #language-Sindhi #language-Northern Sami #language-Sango #language-Samogitian #language-Serbo-Croatian #language-Sinhala #language-Slovak #language-Slovenian #language-Samoan #language-Shona #language-Somali #language-Albanian #language-Serbian #language-Sranan Tongo #language-Swati #language-Southern Sotho #language-Saterfriesisch #language-Sundanese #language-Swedish #language-Swahili (macrolanguage) #language-Silesian #language-Tamil #language-Tulu #language-Tetun Dili #language-Telugu #language-Tajik #language-Thai #language-Tigrinya #language-Turkmen #language-Tagalog #language-Tswana #language-Tonga (Tonga Islands) #language-Tok Pisin #language-Turkish #language-Tsonga #language-Tatar #language-Tumbuka #language-Twi #language-Tahitian #language-Tuvinian #language-Udmurt #language-Uighur #language-Ukrainian #language-Urdu #language-Uzbek #language-Venda #language-Venetian #language-Veps #language-Vietnamese #language-Vlaams #language-Volapük #language-Võro #language-Walloon #language-Waray (Philippines) #language-Wolof #language-Wu Chinese #language-Kalmyk #language-Xhosa #language-Mingrelian #language-Yiddish #language-Yoruba #language-Yue Chinese #language-Zhuang #language-Zeeuws #language-Chinese #language-Zulu #license-cc-by-sa-3.0 #license-gfdl #region-us
|
# Dataset Card for Wikipedia
This repo is a fork of the original Hugging Face Wikipedia repo here.
The difference is that this fork does away with the need for 'apache-beam', and this fork is very fast if you have a lot of CPUs on your machine.
It will use all CPUs available to create a clean Wikipedia pretraining dataset. It takes less than an hour to process all of English wikipedia on a GCP n1-standard-96.
This fork is also used in the OLM Project to pull and process up-to-date wikipedia snapshots.
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Repository:
- Paper:
- Point of Contact:
### Dataset Summary
Wikipedia dataset containing cleaned articles of all languages.
The datasets are built from the Wikipedia dump
(URL with one split per language. Each example
contains the content of one full Wikipedia article with cleaning to strip
markdown and unwanted sections (references, etc.).
The articles are parsed using the ''mwparserfromhell'' tool, and we use ''multiprocess'' for parallelization.
To load this dataset you need to install these first:
Then, you can load any subset of Wikipedia per language and per date this way:
You can find the full list of languages and dates here.
### Supported Tasks and Leaderboards
The dataset is generally used for Language Modeling.
### Languages
You can find the list of languages here.
## Dataset Structure
### Data Instances
An example looks as follows:
### Data Fields
The data fields are the same among all configurations:
- 'id' ('str'): ID of the article.
- 'url' ('str'): URL of the article.
- 'title' ('str'): Title of the article.
- 'text' ('str'): Text content of the article.
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
Most of Wikipedia's text and many of its images are co-licensed under the
Creative Commons Attribution-ShareAlike 3.0 Unported License
(CC BY-SA) and the GNU Free Documentation License
(GFDL) (unversioned, with no invariant sections, front-cover texts, or back-cover texts).
Some text has been imported only under CC BY-SA and CC BY-SA-compatible license and cannot be reused under GFDL; such
text will be identified on the page footer, in the page history, or on the discussion page of the article that utilizes
the text.
| [
"# Dataset Card for Wikipedia\n\nThis repo is a fork of the original Hugging Face Wikipedia repo here.\nThe difference is that this fork does away with the need for 'apache-beam', and this fork is very fast if you have a lot of CPUs on your machine.\nIt will use all CPUs available to create a clean Wikipedia pretraining dataset. It takes less than an hour to process all of English wikipedia on a GCP n1-standard-96.\nThis fork is also used in the OLM Project to pull and process up-to-date wikipedia snapshots.",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: \n- Point of Contact:",
"### Dataset Summary\n\nWikipedia dataset containing cleaned articles of all languages.\nThe datasets are built from the Wikipedia dump\n(URL with one split per language. Each example\ncontains the content of one full Wikipedia article with cleaning to strip\nmarkdown and unwanted sections (references, etc.).\n\nThe articles are parsed using the ''mwparserfromhell'' tool, and we use ''multiprocess'' for parallelization.\n\nTo load this dataset you need to install these first:\n\n\n\nThen, you can load any subset of Wikipedia per language and per date this way:\n\n\n\nYou can find the full list of languages and dates here.",
"### Supported Tasks and Leaderboards\n\nThe dataset is generally used for Language Modeling.",
"### Languages\n\nYou can find the list of languages here.",
"## Dataset Structure",
"### Data Instances\n\nAn example looks as follows:",
"### Data Fields\n\nThe data fields are the same among all configurations:\n\n- 'id' ('str'): ID of the article.\n- 'url' ('str'): URL of the article.\n- 'title' ('str'): Title of the article.\n- 'text' ('str'): Text content of the article.",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nMost of Wikipedia's text and many of its images are co-licensed under the\nCreative Commons Attribution-ShareAlike 3.0 Unported License\n(CC BY-SA) and the GNU Free Documentation License\n(GFDL) (unversioned, with no invariant sections, front-cover texts, or back-cover texts). \n\nSome text has been imported only under CC BY-SA and CC BY-SA-compatible license and cannot be reused under GFDL; such\ntext will be identified on the page footer, in the page history, or on the discussion page of the article that utilizes\nthe text."
] | [
"TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-crowdsourced #multilinguality-multilingual #size_categories-n<1K #size_categories-1K<n<10K #size_categories-10K<n<100K #size_categories-100K<n<1M #size_categories-1M<n<10M #source_datasets-original #language-Afar #language-Abkhazian #language-Achinese #language-Afrikaans #language-Akan #language-Tosk Albanian #language-Amharic #language-Aragonese #language-Old English (ca. 450-1100) #language-Arabic #language-Official Aramaic (700-300 BCE) #language-Egyptian Arabic #language-Assamese #language-Asturian #language-Atikamekw #language-Avaric #language-Aymara #language-Azerbaijani #language-South Azerbaijani #language-Bashkir #language-Bavarian #language-Central Bikol #language-Belarusian #language-Bulgarian #language-bh #language-Bislama #language-Banjar #language-Bambara #language-Bengali #language-Tibetan #language-Bishnupriya #language-Breton #language-Bosnian #language-Buginese #language-Russia Buriat #language-Catalan #language-Chavacano #language-Min Dong Chinese #language-Chechen #language-Cebuano #language-Chamorro #language-Choctaw #language-Cherokee #language-Cheyenne #language-Central Kurdish #language-Corsican #language-Cree #language-Crimean Tatar #language-Czech #language-Kashubian #language-Church Slavic #language-Chuvash #language-Welsh #language-Danish #language-German #language-Dinka #language-Dimli (individual language) #language-Lower Sorbian #language-Dotyali #language-Dhivehi #language-Dzongkha #language-Ewe #language-Modern Greek (1453-) #language-Emiliano-Romagnolo #language-English #language-Esperanto #language-Spanish #language-Estonian #language-Basque #language-Extremaduran #language-Persian #language-Fulah #language-Finnish #language-Fijian #language-Faroese #language-French #language-Arpitan #language-Northern Frisian #language-Friulian #language-Western Frisian #language-Irish #language-Gagauz #language-Gan Chinese #language-Scottish Gaelic #language-Galician #language-Gilaki #language-Guarani #language-Goan Konkani #language-Gorontalo #language-Gothic #language-Gujarati #language-Manx #language-Hausa #language-Hakka Chinese #language-Hawaiian #language-Hebrew #language-Hindi #language-Fiji Hindi #language-Hiri Motu #language-Croatian #language-Upper Sorbian #language-Haitian #language-Hungarian #language-Armenian #language-Interlingua (International Auxiliary Language Association) #language-Indonesian #language-Interlingue #language-Igbo #language-Sichuan Yi #language-Inupiaq #language-Iloko #language-Ingush #language-Ido #language-Icelandic #language-Italian #language-Inuktitut #language-Japanese #language-Jamaican Creole English #language-Lojban #language-Javanese #language-Georgian #language-Kara-Kalpak #language-Kabyle #language-Kabardian #language-Kabiyè #language-Kongo #language-Kikuyu #language-Kuanyama #language-Kazakh #language-Kalaallisut #language-Khmer #language-Kannada #language-Korean #language-Komi-Permyak #language-Karachay-Balkar #language-Kashmiri #language-Kölsch #language-Kurdish #language-Komi #language-Cornish #language-Kirghiz #language-Latin #language-Ladino #language-Luxembourgish #language-Lak #language-Lezghian #language-Lingua Franca Nova #language-Ganda #language-Limburgan #language-Ligurian #language-Lombard #language-Lingala #language-Lao #language-Northern Luri #language-Lithuanian #language-Latgalian #language-Latvian #language-Literary Chinese 
#language-Maithili #language-Moksha #language-Malagasy #language-Marshallese #language-Eastern Mari #language-Maori #language-Minangkabau #language-Macedonian #language-Malayalam #language-Mongolian #language-Marathi #language-Western Mari #language-Malay (macrolanguage) #language-Maltese #language-Creek #language-Mirandese #language-Burmese #language-Erzya #language-Mazanderani #language-Nauru #language-nah #language-Min Nan Chinese #language-Neapolitan #language-Low German #language-Nepali (macrolanguage) #language-Newari #language-Ndonga #language-Dutch #language-Norwegian Nynorsk #language-Norwegian #language-Novial #language-Jèrriais #language-Pedi #language-Navajo #language-Nyanja #language-Occitan (post 1500) #language-Livvi #language-Oromo #language-Oriya (macrolanguage) #language-Ossetian #language-Panjabi #language-Pangasinan #language-Pampanga #language-Papiamento #language-Picard #language-Pennsylvania German #language-Pfaelzisch #language-Pali #language-Pitcairn-Norfolk #language-Polish #language-Piemontese #language-Western Panjabi #language-Pontic #language-Pushto #language-Portuguese #language-Quechua #language-Romansh #language-Vlax Romani #language-Rundi #language-Romanian #language-Russian #language-Rusyn #language-Macedo-Romanian #language-Kinyarwanda #language-Sanskrit #language-Yakut #language-Santali #language-Sardinian #language-Sicilian #language-Scots #language-Sindhi #language-Northern Sami #language-Sango #language-Samogitian #language-Serbo-Croatian #language-Sinhala #language-Slovak #language-Slovenian #language-Samoan #language-Shona #language-Somali #language-Albanian #language-Serbian #language-Sranan Tongo #language-Swati #language-Southern Sotho #language-Saterfriesisch #language-Sundanese #language-Swedish #language-Swahili (macrolanguage) #language-Silesian #language-Tamil #language-Tulu #language-Tetun Dili #language-Telugu #language-Tajik #language-Thai #language-Tigrinya #language-Turkmen #language-Tagalog #language-Tswana #language-Tonga (Tonga Islands) #language-Tok Pisin #language-Turkish #language-Tsonga #language-Tatar #language-Tumbuka #language-Twi #language-Tahitian #language-Tuvinian #language-Udmurt #language-Uighur #language-Ukrainian #language-Urdu #language-Uzbek #language-Venda #language-Venetian #language-Veps #language-Vietnamese #language-Vlaams #language-Volapük #language-Võro #language-Walloon #language-Waray (Philippines) #language-Wolof #language-Wu Chinese #language-Kalmyk #language-Xhosa #language-Mingrelian #language-Yiddish #language-Yoruba #language-Yue Chinese #language-Zhuang #language-Zeeuws #language-Chinese #language-Zulu #license-cc-by-sa-3.0 #license-gfdl #region-us \n",
"# Dataset Card for Wikipedia\n\nThis repo is a fork of the original Hugging Face Wikipedia repo here.\nThe difference is that this fork does away with the need for 'apache-beam', and this fork is very fast if you have a lot of CPUs on your machine.\nIt will use all CPUs available to create a clean Wikipedia pretraining dataset. It takes less than an hour to process all of English wikipedia on a GCP n1-standard-96.\nThis fork is also used in the OLM Project to pull and process up-to-date wikipedia snapshots.",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: \n- Paper: \n- Point of Contact:",
"### Dataset Summary\n\nWikipedia dataset containing cleaned articles of all languages.\nThe datasets are built from the Wikipedia dump\n(URL with one split per language. Each example\ncontains the content of one full Wikipedia article with cleaning to strip\nmarkdown and unwanted sections (references, etc.).\n\nThe articles are parsed using the ''mwparserfromhell'' tool, and we use ''multiprocess'' for parallelization.\n\nTo load this dataset you need to install these first:\n\n\n\nThen, you can load any subset of Wikipedia per language and per date this way:\n\n\n\nYou can find the full list of languages and dates here.",
"### Supported Tasks and Leaderboards\n\nThe dataset is generally used for Language Modeling.",
"### Languages\n\nYou can find the list of languages here.",
"## Dataset Structure",
"### Data Instances\n\nAn example looks as follows:",
"### Data Fields\n\nThe data fields are the same among all configurations:\n\n- 'id' ('str'): ID of the article.\n- 'url' ('str'): URL of the article.\n- 'title' ('str'): Title of the article.\n- 'text' ('str'): Text content of the article.",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nMost of Wikipedia's text and many of its images are co-licensed under the\nCreative Commons Attribution-ShareAlike 3.0 Unported License\n(CC BY-SA) and the GNU Free Documentation License\n(GFDL) (unversioned, with no invariant sections, front-cover texts, or back-cover texts). \n\nSome text has been imported only under CC BY-SA and CC BY-SA-compatible license and cannot be reused under GFDL; such\ntext will be identified on the page footer, in the page history, or on the discussion page of the article that utilizes\nthe text."
] |
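The install and loading snippets referenced in this card were stripped during extraction. Below is a minimal reconstruction of what they likely looked like, assuming the fork follows the usual `datasets` loading pattern with `language` and `date` keyword arguments; the repo id, version pins, and snapshot date are illustrative assumptions, not confirmed by the card text.

```python
# Dependencies named in the card (exact version pins are assumed):
#   pip install mwparserfromhell multiprocess

from datasets import load_dataset

# Repo id, language code, and dump date below are placeholders for illustration.
wiki = load_dataset("olm/wikipedia", language="en", date="20221101")

# Each example carries the id/url/title/text fields described in the card.
print(wiki["train"][0]["title"])
```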
a0bd0040fd37862e01d1290349e14131a457fba7 |
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | JunRyeol/jr_dataset | [
"region:us"
] | 2023-01-13T03:59:01+00:00 | {} | 2023-03-03T06:01:42+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Dataset Name
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
b04ef7c220aea95e783f65f26b8361a30bd38972 |
# Dataset Card for "WS POS Model Tune"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** None
- **Repository:** https://huggingface.co/datasets/ayuhamaro/nlp-model-tune
- **Paper:** [More Information Needed]
- **Leaderboard:** [If the dataset supports an active leaderboard, add link here]()
- **Point of Contact:** [More Information Needed]
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions | ayuhamaro/ws-pos-model-tune | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:zh",
"license:unknown",
"region:us"
] | 2023-01-13T06:23:33+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["zh"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "paperswithcode_id": "ws-pos-model-tune", "pretty_name": "WS POS Model Tune", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ws_tags", "sequence": {"class_label": {"names": {"0": "B,", "1": "I"}}}}, {"name": "pos_tags", "sequence": {"class_label": {"names": {"0": "A,", "1": "Caa,", "2": "Cab,", "3": "Cba,", "4": "Cbb,", "5": "D,", "6": "Da,", "7": "Dfa,", "8": "Dfb,", "9": "Di,", "10": "Dk,", "11": "DM,", "12": "I,", "13": "Na,", "14": "Nb,", "15": "Nc,", "16": "Ncd,", "17": "Nd,", "18": "Nep,", "19": "Neqa,", "20": "Neqb,", "21": "Nes,", "22": "Neu,", "23": "Nf,", "24": "Ng,", "25": "Nh,", "26": "Nv,", "27": "P,", "28": "T,", "29": "VA,", "30": "VAC,", "31": "VB,", "32": "VC,", "33": "VCL,", "34": "VD,", "35": "VF,", "36": "VE,", "37": "VG,", "38": "VH,", "39": "VHC,", "40": "VI,", "41": "VJ,", "42": "VK,", "43": "VL,", "44": "V_2,", "45": "DE,", "46": "SHI,", "47": "FW,", "48": "COLONCATEGORY,", "49": "COMMACATEGORY,", "50": "DASHCATEGORY,", "51": "DOTCATEGORY,", "52": "ETCCATEGORY,", "53": "EXCLAMATIONCATEGORY,", "54": "PARENTHESISCATEGORY,", "55": "PAUSECATEGORY,", "56": "PERIODCATEGORY,", "57": "QUESTIONCATEGORY,", "58": "SEMICOLONCATEGORY,", "59": "SPCHANGECATEGORY"}}}}], "splits": [{"name": "train", "num_bytes": 1024, "num_examples": 1}], "download_size": 1024, "dataset_size": 1024}, "train-eval-index": [{"config": "default", "task": "token-classification", "task_id": "entity_extraction", "splits": {"train_split": "train", "eval_split": "test"}, "col_mapping": {"tokens": "tokens", "ner_tags": "tags"}, "metrics": [{"type": "seqeval", "name": "seqeval"}]}]} | 2023-01-13T07:19:38+00:00 | [] | [
"zh"
] | TAGS
#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Chinese #license-unknown #region-us
|
# Dataset Card for "WS POS Model Tune"
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: None
- Repository: URL
- Paper:
- Leaderboard: [If the dataset supports an active leaderboard, add link here]()
- Point of Contact:
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions | [
"# Dataset Card for \"WS POS Model Tune\"",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: None\n- Repository: URL\n- Paper: \n- Leaderboard: [If the dataset supports an active leaderboard, add link here]()\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Chinese #license-unknown #region-us \n",
"# Dataset Card for \"WS POS Model Tune\"",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: None\n- Repository: URL\n- Paper: \n- Leaderboard: [If the dataset supports an active leaderboard, add link here]()\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
fb93d76be56463cbd79290166c016934059cab50 | # Dataset Card for "markhor-images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ihanif/markhor-images | [
"region:us"
] | 2023-01-13T11:38:26+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1008453.0, "num_examples": 15}], "download_size": 1005068, "dataset_size": 1008453.0}} | 2023-01-13T11:38:39+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "markhor-images"
More Information needed | [
"# Dataset Card for \"markhor-images\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"markhor-images\"\n\nMore Information needed"
] |
88d756fe42b30317764ca8661c2c940dbb77b8ff |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://figshare.com/articles/dataset/Dataset_NER_Manufacturing_-_FabNER_Information_Extraction_from_Manufacturing_Process_Science_Domain_Literature_Using_Named_Entity_Recognition/14782407](https://figshare.com/articles/dataset/Dataset_NER_Manufacturing_-_FabNER_Information_Extraction_from_Manufacturing_Process_Science_Domain_Literature_Using_Named_Entity_Recognition/14782407)
- **Paper:** ["FabNER": information extraction from manufacturing process science domain literature using named entity recognition](https://par.nsf.gov/servlets/purl/10290810)
- **Size of downloaded dataset files:** 3.79 MB
- **Size of the generated dataset:** 6.27 MB
### Dataset Summary
FabNER is a manufacturing text corpus of 350,000+ words for Named Entity Recognition.
It is a collection of abstracts obtained from Web of Science through known journals available in manufacturing process
science research.
For every word, there were categories/entity labels defined, namely Material (MATE), Manufacturing Process (MANP),
Machine/Equipment (MACEQ), Application (APPL), Features (FEAT), Mechanical Properties (PRO), Characterization (CHAR),
Parameters (PARA), Enabling Technology (ENAT), Concept/Principles (CONPRI), Manufacturing Standards (MANS) and
BioMedical (BIOP). Annotation was performed in all categories along with the output tag in 'BIOES' format:
B=Beginning, I=Intermediate, O=Outside, E=End, S=Single.
For details about the dataset, please refer to the paper: ["FabNER": information extraction from manufacturing process science domain literature using named entity recognition](https://par.nsf.gov/servlets/purl/10290810)
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
The language in the dataset is English.
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 3.79 MB
- **Size of the generated dataset:** 6.27 MB
An example of 'train' looks as follows:
```json
{
"id": "0",
"tokens": ["Revealed", "the", "location-specific", "flow", "patterns", "and", "quantified", "the", "speeds", "of", "various", "types", "of", "flow", "."],
"ner_tags": [0, 0, 0, 46, 49, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
}
```
### Data Fields
#### fabner
- `id`: the instance id of this sentence, a `string` feature.
- `tokens`: the list of tokens of this sentence, a `list` of `string` features.
- `ner_tags`: the list of entity tags, a `list` of classification labels.
```json
{"O": 0, "B-MATE": 1, "I-MATE": 2, "O-MATE": 3, "E-MATE": 4, "S-MATE": 5, "B-MANP": 6, "I-MANP": 7, "O-MANP": 8, "E-MANP": 9, "S-MANP": 10, "B-MACEQ": 11, "I-MACEQ": 12, "O-MACEQ": 13, "E-MACEQ": 14, "S-MACEQ": 15, "B-APPL": 16, "I-APPL": 17, "O-APPL": 18, "E-APPL": 19, "S-APPL": 20, "B-FEAT": 21, "I-FEAT": 22, "O-FEAT": 23, "E-FEAT": 24, "S-FEAT": 25, "B-PRO": 26, "I-PRO": 27, "O-PRO": 28, "E-PRO": 29, "S-PRO": 30, "B-CHAR": 31, "I-CHAR": 32, "O-CHAR": 33, "E-CHAR": 34, "S-CHAR": 35, "B-PARA": 36, "I-PARA": 37, "O-PARA": 38, "E-PARA": 39, "S-PARA": 40, "B-ENAT": 41, "I-ENAT": 42, "O-ENAT": 43, "E-ENAT": 44, "S-ENAT": 45, "B-CONPRI": 46, "I-CONPRI": 47, "O-CONPRI": 48, "E-CONPRI": 49, "S-CONPRI": 50, "B-MANS": 51, "I-MANS": 52, "O-MANS": 53, "E-MANS": 54, "S-MANS": 55, "B-BIOP": 56, "I-BIOP": 57, "O-BIOP": 58, "E-BIOP": 59, "S-BIOP": 60}
```
#### fabner_bio
- `id`: the instance id of this sentence, a `string` feature.
- `tokens`: the list of tokens of this sentence, a `list` of `string` features.
- `ner_tags`: the list of entity tags, a `list` of classification labels.
```json
{"O": 0, "B-MATE": 1, "I-MATE": 2, "B-MANP": 3, "I-MANP": 4, "B-MACEQ": 5, "I-MACEQ": 6, "B-APPL": 7, "I-APPL": 8, "B-FEAT": 9, "I-FEAT": 10, "B-PRO": 11, "I-PRO": 12, "B-CHAR": 13, "I-CHAR": 14, "B-PARA": 15, "I-PARA": 16, "B-ENAT": 17, "I-ENAT": 18, "B-CONPRI": 19, "I-CONPRI": 20, "B-MANS": 21, "I-MANS": 22, "B-BIOP": 23, "I-BIOP": 24}
```
#### fabner_simple
- `id`: the instance id of this sentence, a `string` feature.
- `tokens`: the list of tokens of this sentence, a `list` of `string` features.
- `ner_tags`: the list of entity tags, a `list` of classification labels.
```json
{"O": 0, "MATE": 1, "MANP": 2, "MACEQ": 3, "APPL": 4, "FEAT": 5, "PRO": 6, "CHAR": 7, "PARA": 8, "ENAT": 9, "CONPRI": 10, "MANS": 11, "BIOP": 12}
```
#### text2tech
- `id`: the instance id of this sentence, a `string` feature.
- `tokens`: the list of tokens of this sentence, a `list` of `string` features.
- `ner_tags`: the list of entity tags, a `list` of classification labels.
```json
{"O": 0, "Technological System": 1, "Method": 2, "Material": 3, "Technical Field": 4}
```
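To make the label maps above concrete, the following sketch loads one configuration and decodes the integer `ner_tags` back to string labels through the `ClassLabel` feature. The repo id and the `fabner_simple` config name are taken from this card's metadata; treating this as the canonical loading call is an assumption.

```python
from datasets import load_dataset

# "DFKI-SLT/fabner" and the "fabner_simple" config come from the card
# metadata; the call itself is an assumed, minimal usage pattern.
ds = load_dataset("DFKI-SLT/fabner", name="fabner_simple", split="train")

# ner_tags is a Sequence(ClassLabel), so the label names live on .feature.
label_names = ds.features["ner_tags"].feature.names  # ["O", "MATE", ...]

example = ds[0]
for token, tag_id in zip(example["tokens"], example["ner_tags"]):
    print(f"{token}\t{label_names[tag_id]}")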
### Data Splits
| | Train | Dev | Test |
|--------|-------|------|------|
| fabner | 9435 | 2183 | 2064 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{DBLP:journals/jim/KumarS22,
author = {Aman Kumar and
Binil Starly},
title = {"FabNER": information extraction from manufacturing process science
domain literature using named entity recognition},
journal = {J. Intell. Manuf.},
volume = {33},
number = {8},
pages = {2393--2407},
year = {2022},
url = {https://doi.org/10.1007/s10845-021-01807-x},
doi = {10.1007/s10845-021-01807-x},
timestamp = {Sun, 13 Nov 2022 17:52:57 +0100},
biburl = {https://dblp.org/rec/journals/jim/KumarS22.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@phucdev](https://github.com/phucdev) for adding this dataset. | DFKI-SLT/fabner | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:other",
"manufacturing",
"2000-2020",
"region:us"
] | 2023-01-13T13:01:38+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": [], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "FabNER is a manufacturing text dataset for Named Entity Recognition.", "tags": ["manufacturing", "2000-2020"], "dataset_info": [{"config_name": "fabner", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-MATE", "2": "I-MATE", "3": "O-MATE", "4": "E-MATE", "5": "S-MATE", "6": "B-MANP", "7": "I-MANP", "8": "O-MANP", "9": "E-MANP", "10": "S-MANP", "11": "B-MACEQ", "12": "I-MACEQ", "13": "O-MACEQ", "14": "E-MACEQ", "15": "S-MACEQ", "16": "B-APPL", "17": "I-APPL", "18": "O-APPL", "19": "E-APPL", "20": "S-APPL", "21": "B-FEAT", "22": "I-FEAT", "23": "O-FEAT", "24": "E-FEAT", "25": "S-FEAT", "26": "B-PRO", "27": "I-PRO", "28": "O-PRO", "29": "E-PRO", "30": "S-PRO", "31": "B-CHAR", "32": "I-CHAR", "33": "O-CHAR", "34": "E-CHAR", "35": "S-CHAR", "36": "B-PARA", "37": "I-PARA", "38": "O-PARA", "39": "E-PARA", "40": "S-PARA", "41": "B-ENAT", "42": "I-ENAT", "43": "O-ENAT", "44": "E-ENAT", "45": "S-ENAT", "46": "B-CONPRI", "47": "I-CONPRI", "48": "O-CONPRI", "49": "E-CONPRI", "50": "S-CONPRI", "51": "B-MANS", "52": "I-MANS", "53": "O-MANS", "54": "E-MANS", "55": "S-MANS", "56": "B-BIOP", "57": "I-BIOP", "58": "O-BIOP", "59": "E-BIOP", "60": "S-BIOP"}}}}], "splits": [{"name": "train", "num_bytes": 4394010, "num_examples": 9435}, {"name": "validation", "num_bytes": 934347, "num_examples": 2183}, {"name": "test", "num_bytes": 940136, "num_examples": 2064}], "download_size": 3793613, "dataset_size": 6268493}, {"config_name": "fabner_bio", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "B-MATE", "2": "I-MATE", "3": "B-MANP", "4": "I-MANP", "5": "B-MACEQ", "6": "I-MACEQ", "7": "B-APPL", "8": "I-APPL", "9": "B-FEAT", "10": "I-FEAT", "11": "B-PRO", "12": "I-PRO", "13": "B-CHAR", "14": "I-CHAR", "15": "B-PARA", "16": "I-PARA", "17": "B-ENAT", "18": "I-ENAT", "19": "B-CONPRI", "20": "I-CONPRI", "21": "B-MANS", "22": "I-MANS", "23": "B-BIOP", "24": "I-BIOP"}}}}], "splits": [{"name": "train", "num_bytes": 4394010, "num_examples": 9435}, {"name": "validation", "num_bytes": 934347, "num_examples": 2183}, {"name": "test", "num_bytes": 940136, "num_examples": 2064}], "download_size": 3793613, "dataset_size": 6268493}, {"config_name": "fabner_simple", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "MATE", "2": "MANP", "3": "MACEQ", "4": "APPL", "5": "FEAT", "6": "PRO", "7": "CHAR", "8": "PARA", "9": "ENAT", "10": "CONPRI", "11": "MANS", "12": "BIOP"}}}}], "splits": [{"name": "train", "num_bytes": 4394010, "num_examples": 9435}, {"name": "validation", "num_bytes": 934347, "num_examples": 2183}, {"name": "test", "num_bytes": 940136, "num_examples": 2064}], "download_size": 3793613, "dataset_size": 6268493}, {"config_name": "text2tech", "features": [{"name": "id", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": {"class_label": {"names": {"0": "O", "1": "Technological System", "2": 
"Method", "3": "Material", "4": "Technical Field"}}}}], "splits": [{"name": "train", "num_bytes": 4394010, "num_examples": 9435}, {"name": "validation", "num_bytes": 934347, "num_examples": 2183}, {"name": "test", "num_bytes": 940136, "num_examples": 2064}], "download_size": 3793613, "dataset_size": 6268493}]} | 2023-04-05T22:20:21+00:00 | [] | [
"en"
] | TAGS
#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-other #manufacturing #2000-2020 #region-us
| Dataset Card for [Dataset Name]
===============================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: URL
* Paper: "FabNER": information extraction from manufacturing process science domain literature using named entity recognition
* Size of downloaded dataset files: 3.79 MB
* Size of the generated dataset: 6.27 MB
### Dataset Summary
FabNER is a manufacturing text corpus of 350,000+ words for Named Entity Recognition.
It is a collection of abstracts obtained from Web of Science through known journals available in manufacturing process
science research.
For every word, there were categories/entity labels defined, namely Material (MATE), Manufacturing Process (MANP),
Machine/Equipment (MACEQ), Application (APPL), Features (FEAT), Mechanical Properties (PRO), Characterization (CHAR),
Parameters (PARA), Enabling Technology (ENAT), Concept/Principles (CONPRI), Manufacturing Standards (MANS) and
BioMedical (BIOP). Annotation was performed in all categories along with the output tag in 'BIOES' format:
B=Beginning, I=Intermediate, O=Outside, E=End, S=Single.
For details about the dataset, please refer to the paper: "FabNER": information extraction from manufacturing process science domain literature using named entity recognition
### Supported Tasks and Leaderboards
### Languages
The language in the dataset is English.
Dataset Structure
-----------------
### Data Instances
* Size of downloaded dataset files: 3.79 MB
* Size of the generated dataset: 6.27 MB
An example of 'train' looks as follows:
### Data Fields
#### fabner
* 'id': the instance id of this sentence, a 'string' feature.
* 'tokens': the list of tokens of this sentence, a 'list' of 'string' features.
* 'ner\_tags': the list of entity tags, a 'list' of classification labels.
#### fabner\_bio
* 'id': the instance id of this sentence, a 'string' feature.
* 'tokens': the list of tokens of this sentence, a 'list' of 'string' features.
* 'ner\_tags': the list of entity tags, a 'list' of classification labels.
#### fabner\_simple
* 'id': the instance id of this sentence, a 'string' feature.
* 'tokens': the list of tokens of this sentence, a 'list' of 'string' features.
* 'ner\_tags': the list of entity tags, a 'list' of classification labels.
#### text2tech
* 'id': the instance id of this sentence, a 'string' feature.
* 'tokens': the list of tokens of this sentence, a 'list' of 'string' features.
* 'ner\_tags': the list of entity tags, a 'list' of classification labels.
### Data Splits
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @phucdev for adding this dataset.
| [
"### Dataset Summary\n\n\nFabNER is a manufacturing text corpus of 350,000+ words for Named Entity Recognition.\nIt is a collection of abstracts obtained from Web of Science through known journals available in manufacturing process\nscience research.\nFor every word, there were categories/entity labels defined namely Material (MATE), Manufacturing Process (MANP),\nMachine/Equipment (MACEQ), Application (APPL), Features (FEAT), Mechanical Properties (PRO), Characterization (CHAR),\nParameters (PARA), Enabling Technology (ENAT), Concept/Principles (CONPRI), Manufacturing Standards (MANS) and\nBioMedical (BIOP). Annotation was performed in all categories along with the output tag in 'BIOES' format:\nB=Beginning, I-Intermediate, O=Outside, E=End, S=Single.\n\n\nFor details about the dataset, please refer to the paper: \"FabNER\": information extraction from manufacturing process science domain literature using named entity recognition",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nThe language in the dataset is English.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\n* Size of downloaded dataset files: 3.79 MB\n* Size of the generated dataset: 6.27 MB\n\n\nAn example of 'train' looks as follows:",
"### Data Fields",
"#### fabner\n\n\n* 'id': the instance id of this sentence, a 'string' feature.\n* 'tokens': the list of tokens of this sentence, a 'list' of 'string' features.\n* 'ner\\_tags': the list of entity tags, a 'list' of classification labels.",
"#### fabner\\_bio\n\n\n* 'id': the instance id of this sentence, a 'string' feature.\n* 'tokens': the list of tokens of this sentence, a 'list' of 'string' features.\n* 'ner\\_tags': the list of entity tags, a 'list' of classification labels.",
"#### fabner\\_simple\n\n\n* 'id': the instance id of this sentence, a 'string' feature.\n* 'tokens': the list of tokens of this sentence, a 'list' of 'string' features.\n* 'ner\\_tags': the list of entity tags, a 'list' of classification labels.",
"#### text2tech\n\n\n* 'id': the instance id of this sentence, a 'string' feature.\n* 'tokens': the list of tokens of this sentence, a 'list' of 'string' features.\n* 'ner\\_tags': the list of entity tags, a 'list' of classification labels.",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\n\nThanks to @phucdev for adding this dataset."
] | [
"TAGS\n#task_categories-token-classification #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #language-English #license-other #manufacturing #2000-2020 #region-us \n",
"### Dataset Summary\n\n\nFabNER is a manufacturing text corpus of 350,000+ words for Named Entity Recognition.\nIt is a collection of abstracts obtained from Web of Science through known journals available in manufacturing process\nscience research.\nFor every word, there were categories/entity labels defined namely Material (MATE), Manufacturing Process (MANP),\nMachine/Equipment (MACEQ), Application (APPL), Features (FEAT), Mechanical Properties (PRO), Characterization (CHAR),\nParameters (PARA), Enabling Technology (ENAT), Concept/Principles (CONPRI), Manufacturing Standards (MANS) and\nBioMedical (BIOP). Annotation was performed in all categories along with the output tag in 'BIOES' format:\nB=Beginning, I-Intermediate, O=Outside, E=End, S=Single.\n\n\nFor details about the dataset, please refer to the paper: \"FabNER\": information extraction from manufacturing process science domain literature using named entity recognition",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nThe language in the dataset is English.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\n* Size of downloaded dataset files: 3.79 MB\n* Size of the generated dataset: 6.27 MB\n\n\nAn example of 'train' looks as follows:",
"### Data Fields",
"#### fabner\n\n\n* 'id': the instance id of this sentence, a 'string' feature.\n* 'tokens': the list of tokens of this sentence, a 'list' of 'string' features.\n* 'ner\\_tags': the list of entity tags, a 'list' of classification labels.",
"#### fabner\\_bio\n\n\n* 'id': the instance id of this sentence, a 'string' feature.\n* 'tokens': the list of tokens of this sentence, a 'list' of 'string' features.\n* 'ner\\_tags': the list of entity tags, a 'list' of classification labels.",
"#### fabner\\_simple\n\n\n* 'id': the instance id of this sentence, a 'string' feature.\n* 'tokens': the list of tokens of this sentence, a 'list' of 'string' features.\n* 'ner\\_tags': the list of entity tags, a 'list' of classification labels.",
"#### text2tech\n\n\n* 'id': the instance id of this sentence, a 'string' feature.\n* 'tokens': the list of tokens of this sentence, a 'list' of 'string' features.\n* 'ner\\_tags': the list of entity tags, a 'list' of classification labels.",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\n\nThanks to @phucdev for adding this dataset."
] |
8be26569c0e056f6ceb5adb58dcfce8d5da975b1 | # Dataset Card for "subj"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | bstrai/subj | [
"region:us"
] | 2023-01-13T13:18:15+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "objective", "1": "subjective"}}}}, {"name": "label_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1231802, "num_examples": 8000}, {"name": "test", "num_bytes": 310282, "num_examples": 2000}], "download_size": 945221, "dataset_size": 1542084}} | 2023-01-13T13:19:25+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "subj"
More Information needed | [
"# Dataset Card for \"subj\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"subj\"\n\nMore Information needed"
] |
6d89b090e9c242901d919352a72f3e7008934f22 | # Dataset Card for "sentiment_analysis_batch"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | torileatherman/sentiment_analysis_batch | [
"region:us"
] | 2023-01-13T13:58:49+00:00 | {"dataset_info": {"features": [{"name": "Headline", "sequence": "int64"}, {"name": "Url", "dtype": "string"}, {"name": "Headline_string", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5984, "num_examples": 10}], "download_size": 3050, "dataset_size": 5984}} | 2023-01-14T09:46:23+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "sentiment_analysis_batch"
More Information needed | [
"# Dataset Card for \"sentiment_analysis_batch\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"sentiment_analysis_batch\"\n\nMore Information needed"
] |
c71253cb92ca07b1cd70aff448f87b390d766f84 | # Dataset Card for "SPC-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Spaiche/SPC | [
"region:us"
] | 2023-01-13T14:36:29+00:00 | {"dataset_info": {"features": [{"name": "client_id", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "float64"}, {"name": "down_votes", "dtype": "float64"}, {"name": "age", "dtype": "float64"}, {"name": "gender", "dtype": "float64"}, {"name": "accent", "dtype": "float64"}, {"name": "iou_estimate", "dtype": "float64"}], "splits": [{"name": "test", "num_bytes": 831368132.0, "num_examples": 3332}, {"name": "train", "num_bytes": 23839499476.0, "num_examples": 90324}], "download_size": 24065048743, "dataset_size": 24670867608.0}} | 2023-01-13T14:57:20+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "SPC-v2"
More Information needed | [
"# Dataset Card for \"SPC-v2\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"SPC-v2\"\n\nMore Information needed"
] |
467c79529f58ac5e0d133111cf9dad0a7f94a113 | # Dataset Card for "bookcorpus_compact_1024_shard3_meta"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | saibo/bookcorpus_compact_1024_shard3_of_10_meta | [
"region:us"
] | 2023-01-13T17:09:16+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "concept_with_offset", "dtype": "string"}, {"name": "cid_arrangement", "sequence": "int32"}, {"name": "schema_lengths", "sequence": "int64"}, {"name": "topic_entity_mask", "sequence": "int64"}, {"name": "text_lengths", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 7826091649, "num_examples": 61605}], "download_size": 1726433976, "dataset_size": 7826091649}} | 2023-01-13T17:13:08+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "bookcorpus_compact_1024_shard3_meta"
More Information needed | [
"# Dataset Card for \"bookcorpus_compact_1024_shard3_meta\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"bookcorpus_compact_1024_shard3_meta\"\n\nMore Information needed"
] |
33c49b76fcf3a18b0d521d8e760c88d49f3e47bc |
# textures-color-1k
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The `textures-color-1k` dataset is an image dataset of 1000+ color image textures in 512x512 resolution with associated text descriptions.
The dataset was created for training/fine-tuning diffusion models on texture generation tasks.
It contains a combination of CC0 procedural and photoscanned PBR materials from [ambientCG](https://ambientcg.com/).
### Languages
The text descriptions are in English, and created by joining the tags of each material with a space character.
## Dataset Structure
### Data Instances
Each data point contains a 512x512 image and an additional `text` feature containing the description of the texture.
### Data Fields
* `image`: the color texture as a PIL image
* `text`: the associated text description created by merging the material's tags
### Data Splits
| | train |
| -- | ----- |
| ambientCG | 1426 |
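To illustrate the two fields described above, here is a minimal sketch that loads the dataset and pairs a texture with its caption. The repo id comes from this card's metadata and the `train` split from the table above; the rest is an assumed, minimal usage pattern.

```python
from datasets import load_dataset

# Repo id from the card metadata; "train" split per the table above.
ds = load_dataset("dream-textures/textures-color-1k", split="train")

sample = ds[0]
image = sample["image"]   # 512x512 PIL image (the color map)
caption = sample["text"]  # the material's tags joined with spaces

print(image.size, "->", caption)
```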
## Dataset Creation
### Curation Rationale
`textures-color-1k` was created to provide an accessible source of data for automating 3D-asset creation workflows.
The [Dream Textures](https://github.com/carson-katri/dream-textures) add-on is one such tool providing AI automation in Blender.
By fine-tuning models such as Stable Diffusion on textures, this particular use-case can be more accurately automated.
### Source Data
#### Initial Data Collection and Normalization
The data was obtained from [ambientCG](https://ambientcg.com/)'s CC0 textures. Only the color maps were included in this dataset.
Text descriptions were synthesized by joining the tags associated with each material with a space.
## Additional Information
### Dataset Curators
The dataset was created by Carson Katri, with the images being provided by [ambientCG](https://ambientcg.com/).
### Licensing Information
All of the images used in this dataset are CC0.
### Citation Information
[N/A]
### Contributions
Thanks to [@carson-katri](https://github.com/carson-katri) for adding this dataset. | dream-textures/textures-color-1k | [
"task_categories:text-to-image",
"size_categories:1K<n<10K",
"language:en",
"license:cc0-1.0",
"region:us"
] | 2023-01-13T17:27:40+00:00 | {"language": ["en"], "license": "cc0-1.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-to-image"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 60933571.47, "num_examples": 1426}], "download_size": 58351352, "dataset_size": 60933571.47}} | 2023-01-13T17:54:04+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-to-image #size_categories-1K<n<10K #language-English #license-cc0-1.0 #region-us
| textures-color-1k
=================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage:
* Repository:
* Paper:
* Leaderboard:
* Point of Contact:
### Dataset Summary
The 'textures-color-1k' dataset is an image dataset of 1000+ color image textures in 512x512 resolution with associated text descriptions.
The dataset was created for training/fine-tuning diffusion models on texture generation tasks.
It contains a combination of CC0 procedural and photoscanned PBR materials from ambientCG.
### Languages
The text descriptions are in English, and created by joining the tags of each material with a space character.
Dataset Structure
-----------------
### Data Instances
Each data point contains a 512x512 image and an additional 'text' feature containing the description of the texture.
### Data Fields
* 'image': the color texture as a PIL image
* 'text': the associated text description created by merging the material's tags
### Data Splits
Dataset Creation
----------------
### Curation Rationale
'textures-color-1k' was created to provide an accessible source of data for automating 3D-asset creation workflows.
The Dream Textures add-on is one such tool providing AI automation in Blender.
By fine-tuning models such as Stable Diffusion on textures, this particular use-case can be more accurately automated.
### Source Data
#### Initial Data Collection and Normalization
The data was obtained from ambientCG's CC0 textures. Only the color maps were included in this dataset.
Text descriptions were synthesized by joining the tags associated with each material with a space.
Additional Information
----------------------
### Dataset Curators
The dataset was created by Carson Katri, with the images being provided by ambientCG.
### Licensing Information
All of the images used in this dataset are CC0.
[N/A]
### Contributions
Thanks to @carson-katri for adding this dataset.
| [
"### Dataset Summary\n\n\nThe 'textures-color-1k' dataset is an image dataset of 1000+ color image textures in 512x512 resolution with associated text descriptions.\nThe dataset was created for training/fine-tuning diffusion models on texture generation tasks.\nIt contains a combination of CC0 procedural and photoscanned PBR materials from ambientCG.",
"### Languages\n\n\nThe text descriptions are in English, and created by joining the tags of each material with a space character.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nEach data point contains a 512x512 image and and additional 'text' feature containing the description of the texture.",
"### Data Fields\n\n\n* 'image': the color texture as a PIL image\n* 'text': the associated text description created by merging the material's tags",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\n'textures-color-1k' was created to provide an accesible source of data for automating 3D-asset creation workflows.\nThe Dream Textures add-on is one such tool providing AI automation in Blender.\nBy fine-tuning models such as Stable Diffusion on textures, this particular use-case can be more accurately automated.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe data was obtained from ambientCG's CC0 textures. Only the color maps were included in this dataset.\n\n\nText descriptions were synthesized by joining the tags associated with each material with a space.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nThe dataset was created by Carson Katri, with the images being provided by ambientCG.",
"### Licensing Information\n\n\nAll of the images used in this dataset are CC0.\n\n\n[N/A]",
"### Contributions\n\n\nThanks to @carson-katri for adding this dataset."
] | [
"TAGS\n#task_categories-text-to-image #size_categories-1K<n<10K #language-English #license-cc0-1.0 #region-us \n",
"### Dataset Summary\n\n\nThe 'textures-color-1k' dataset is an image dataset of 1000+ color image textures in 512x512 resolution with associated text descriptions.\nThe dataset was created for training/fine-tuning diffusion models on texture generation tasks.\nIt contains a combination of CC0 procedural and photoscanned PBR materials from ambientCG.",
"### Languages\n\n\nThe text descriptions are in English, and created by joining the tags of each material with a space character.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nEach data point contains a 512x512 image and and additional 'text' feature containing the description of the texture.",
"### Data Fields\n\n\n* 'image': the color texture as a PIL image\n* 'text': the associated text description created by merging the material's tags",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\n'textures-color-1k' was created to provide an accesible source of data for automating 3D-asset creation workflows.\nThe Dream Textures add-on is one such tool providing AI automation in Blender.\nBy fine-tuning models such as Stable Diffusion on textures, this particular use-case can be more accurately automated.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe data was obtained from ambientCG's CC0 textures. Only the color maps were included in this dataset.\n\n\nText descriptions were synthesized by joining the tags associated with each material with a space.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nThe dataset was created by Carson Katri, with the images being provided by ambientCG.",
"### Licensing Information\n\n\nAll of the images used in this dataset are CC0.\n\n\n[N/A]",
"### Contributions\n\n\nThanks to @carson-katri for adding this dataset."
] |
a75ba66c91c2031dc20c977cf058897430c8b77c |
# textures-normal-1k
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The `textures-normal-1k` dataset is an image dataset of 1000+ normal map textures in 512x512 resolution with associated text descriptions.
The dataset was created for training/fine-tuning models for text-to-image tasks.
It contains a combination of CC0 procedural and photoscanned PBR materials from [ambientCG](https://ambientcg.com/).
### Languages
The text descriptions are in English, and created by joining the tags of each material with a space character.
## Dataset Structure
### Data Instances
Each data point contains a 512x512 image and an additional `text` feature containing the description of the texture.
### Data Fields
* `image`: the normal map as a PIL image
* `text`: the associated text description created by merging the material's tags
### Data Splits
| | train |
| -- | ----- |
| ambientCG | 1447 |
## Dataset Creation
### Curation Rationale
`textures-normal-1k` was created to provide an accessible source of data for automating 3D-asset creation workflows.
The [Dream Textures](https://github.com/carson-katri/dream-textures) add-on is one such tool providing AI automation in Blender.
By fine-tuning models such as Stable Diffusion on textures, this particular use-case can be more accurately automated.
### Source Data
#### Initial Data Collection and Normalization
The data was obtained from [ambientCG](https://ambientcg.com/)'s CC0 textures. Only the normal maps were included in this dataset.
Text descriptions were synthesized by joining the tags associated with each material with a space.
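The caption-synthesis step described above is straightforward to reproduce; here is a sketch of the assumed procedure, using an invented tag list purely for illustration.

```python
# Hypothetical tag list, standing in for a material's ambientCG tags.
material_tags = ["bricks", "red", "rough", "wall"]

# Per the card, captions are built by joining the tags with a space.
caption = " ".join(material_tags)

assert caption == "bricks red rough wall"
```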
## Additional Information
### Dataset Curators
The dataset was created by Carson Katri, with the images being provided by [ambientCG](https://ambientcg.com/).
### Licensing Information
All of the images used in this dataset are CC0.
### Citation Information
[N/A]
### Contributions
Thanks to [@carson-katri](https://github.com/carson-katri) for adding this dataset. | dream-textures/textures-normal-1k | [
"task_categories:text-to-image",
"size_categories:1K<n<10K",
"language:en",
"license:cc0-1.0",
"region:us"
] | 2023-01-13T19:44:42+00:00 | {"language": ["en"], "license": "cc0-1.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-to-image"], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 59834059.794, "num_examples": 1447}], "download_size": 52173880, "dataset_size": 59834059.794}} | 2023-01-13T21:17:22+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-to-image #size_categories-1K<n<10K #language-English #license-cc0-1.0 #region-us
| textures-normal-1k
==================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage:
* Repository:
* Paper:
* Leaderboard:
* Point of Contact:
### Dataset Summary
The 'textures-normal-1k' dataset is an image dataset of 1000+ normal map textures in 512x512 resolution with associated text descriptions.
The dataset was created for training/fine-tuning models for text to image tasks.
It contains a combination of CC0 procedural and photoscanned PBR materials from ambientCG.
### Languages
The text descriptions are in English, and created by joining the tags of each material with a space character.
Dataset Structure
-----------------
### Data Instances
Each data point contains a 512x512 image and an additional 'text' feature containing the description of the texture.
### Data Fields
* 'image': the normal map as a PIL image
* 'text': the associated text description created by merging the material's tags
### Data Splits
Dataset Creation
----------------
### Curation Rationale
'textures-normal-1k' was created to provide an accessible source of data for automating 3D-asset creation workflows.
The Dream Textures add-on is one such tool providing AI automation in Blender.
By fine-tuning models such as Stable Diffusion on textures, this particular use-case can be more accurately automated.
### Source Data
#### Initial Data Collection and Normalization
The data was obtained from ambientCG's CC0 textures. Only the normal maps were included in this dataset.
Text descriptions were synthesized by joining the tags associated with each material with a space.
Additional Information
----------------------
### Dataset Curators
The dataset was created by Carson Katri, with the images being provided by ambientCG.
### Licensing Information
All of the images used in this dataset are CC0.
[N/A]
### Contributions
Thanks to @carson-katri for adding this dataset.
| [
"### Dataset Summary\n\n\nThe 'textures-normal-1k' dataset is an image dataset of 1000+ normal map textures in 512x512 resolution with associated text descriptions.\nThe dataset was created for training/fine-tuning models for text to image tasks.\nIt contains a combination of CC0 procedural and photoscanned PBR materials from ambientCG.",
"### Languages\n\n\nThe text descriptions are in English, and created by joining the tags of each material with a space character.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nEach data point contains a 512x512 image and and additional 'text' feature containing the description of the texture.",
"### Data Fields\n\n\n* 'image': the normal map as a PIL image\n* 'text': the associated text description created by merging the material's tags",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\n'textures-normal-1k' was created to provide an accesible source of data for automating 3D-asset creation workflows.\nThe Dream Textures add-on is one such tool providing AI automation in Blender.\nBy fine-tuning models such as Stable Diffusion on textures, this particular use-case can be more accurately automated.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe data was obtained from ambientCG's CC0 textures. Only the normal maps were included in this dataset.\n\n\nText descriptions were synthesized by joining the tags associated with each material with a space.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nThe dataset was created by Carson Katri, with the images being provided by ambientCG.",
"### Licensing Information\n\n\nAll of the images used in this dataset are CC0.\n\n\n[N/A]",
"### Contributions\n\n\nThanks to @carson-katri for adding this dataset."
] | [
"TAGS\n#task_categories-text-to-image #size_categories-1K<n<10K #language-English #license-cc0-1.0 #region-us \n",
"### Dataset Summary\n\n\nThe 'textures-normal-1k' dataset is an image dataset of 1000+ normal map textures in 512x512 resolution with associated text descriptions.\nThe dataset was created for training/fine-tuning models for text to image tasks.\nIt contains a combination of CC0 procedural and photoscanned PBR materials from ambientCG.",
"### Languages\n\n\nThe text descriptions are in English, and created by joining the tags of each material with a space character.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nEach data point contains a 512x512 image and and additional 'text' feature containing the description of the texture.",
"### Data Fields\n\n\n* 'image': the normal map as a PIL image\n* 'text': the associated text description created by merging the material's tags",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\n'textures-normal-1k' was created to provide an accesible source of data for automating 3D-asset creation workflows.\nThe Dream Textures add-on is one such tool providing AI automation in Blender.\nBy fine-tuning models such as Stable Diffusion on textures, this particular use-case can be more accurately automated.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe data was obtained from ambientCG's CC0 textures. Only the normal maps were included in this dataset.\n\n\nText descriptions were synthesized by joining the tags associated with each material with a space.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nThe dataset was created by Carson Katri, with the images being provided by ambientCG.",
"### Licensing Information\n\n\nAll of the images used in this dataset are CC0.\n\n\n[N/A]",
"### Contributions\n\n\nThanks to @carson-katri for adding this dataset."
] |
0856ebb9a85405303a2227fbf41ae814f49fe7d0 |
# textures-color-normal-1k
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The `textures-color-normal-1k` dataset is an image dataset of 1000+ color and normal map textures in 512x512 resolution.
The dataset was created for use in image to image tasks.
It contains a combination of CC0 procedural and photoscanned PBR materials from [ambientCG](https://ambientcg.com/).
## Dataset Structure
### Data Instances
Each data point contains a 512x512 color texture and the corresponding 512x512 normal map.
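A minimal sketch of loading a color/normal pair with the `datasets` library (field names as listed below):

```python
from datasets import load_dataset

data = load_dataset("dream-textures/textures-color-normal-1k", split="train")

sample = data[0]
print(sample["color"].size)   # (512, 512) color texture as a PIL image
print(sample["normal"].size)  # (512, 512) corresponding normal map
```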
### Data Fields
* `color`: the color texture as a PIL image
* `normal`: the normal map as a PIL image
### Data Splits
| | train |
| -- | ----- |
| ambientCG | 1426 |
## Dataset Creation
### Curation Rationale
`textures-color-normal-1k` was created to provide an accessible source of data for automating 3D-asset creation workflows.
The [Dream Textures](https://github.com/carson-katri/dream-textures) add-on is one such tool providing AI automation in Blender.
By training models designed for image to image tasks, this particular use-case can be more accurately automated.
### Source Data
#### Initial Data Collection and Normalization
The data was obtained from [ambientCG](https://ambientcg.com/)'s CC0 textures. Only the color and normal maps were included in this dataset.
## Additional Information
### Dataset Curators
The dataset was created by Carson Katri, with the images being provided by [ambientCG](https://ambientcg.com/).
### Licensing Information
All of the images used in this dataset are CC0.
### Citation Information
[N/A]
### Contributions
Thanks to [@carson-katri](https://github.com/carson-katri) for adding this dataset. | dream-textures/textures-color-normal-1k | [
"task_categories:image-to-image",
"size_categories:1K<n<10K",
"license:cc0-1.0",
"region:us"
] | 2023-01-13T21:14:42+00:00 | {"license": "cc0-1.0", "size_categories": ["1K<n<10K"], "task_categories": ["image-to-image"], "dataset_info": {"features": [{"name": "color", "dtype": "image"}, {"name": "normal", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 110631687.194, "num_examples": 1426}], "download_size": 111043422, "dataset_size": 110631687.194}} | 2023-01-13T21:20:22+00:00 | [] | [] | TAGS
#task_categories-image-to-image #size_categories-1K<n<10K #license-cc0-1.0 #region-us
| textures-color-normal-1k
========================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage:
* Repository:
* Paper:
* Leaderboard:
* Point of Contact:
### Dataset Summary
The 'textures-color-normal-1k' dataset is an image dataset of 1000+ color and normal map textures in 512x512 resolution.
The dataset was created for use in image to image tasks.
It contains a combination of CC0 procedural and photoscanned PBR materials from ambientCG.
Dataset Structure
-----------------
### Data Instances
Each data point contains a 512x512 color texture and the corresponding 512x512 normal map.
### Data Fields
* 'color': the color texture as a PIL image
* 'normal': the normal map as a PIL image
### Data Splits
Dataset Creation
----------------
### Curation Rationale
'textures-color-normal-1k' was created to provide an accessible source of data for automating 3D-asset creation workflows.
The Dream Textures add-on is one such tool providing AI automation in Blender.
By training models designed for image to image tasks, this particular use-case can be more accurately automated.
### Source Data
#### Initial Data Collection and Normalization
The data was obtained from ambientCG's CC0 textures. Only the color and normal maps were included in this dataset.
Additional Information
----------------------
### Dataset Curators
The dataset was created by Carson Katri, with the images being provided by ambientCG.
### Licensing Information
All of the images used in this dataset are CC0.
[N/A]
### Contributions
Thanks to @carson-katri for adding this dataset.
| [
"### Dataset Summary\n\n\nThe 'textures-color-normal-1k' dataset is an image dataset of 1000+ color and normal map textures in 512x512 resolution.\nThe dataset was created for use in image to image tasks.\nIt contains a combination of CC0 procedural and photoscanned PBR materials from ambientCG.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nEach data point contains a 512x512 color texture and the corresponding 512x512 normal map.",
"### Data Fields\n\n\n* 'color': the color texture as a PIL image\n* 'normal': the normal map as a PIL image",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\n'textures-color-normal-1k' was created to provide an accesible source of data for automating 3D-asset creation workflows.\nThe Dream Textures add-on is one such tool providing AI automation in Blender.\nBy training models designed for image to image tasks, this particular use-case can be more accurately automated.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe data was obtained from ambientCG's CC0 textures. Only the color and normal maps were included in this dataset.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nThe dataset was created by Carson Katri, with the images being provided by ambientCG.",
"### Licensing Information\n\n\nAll of the images used in this dataset are CC0.\n\n\n[N/A]",
"### Contributions\n\n\nThanks to @carson-katri for adding this dataset."
] | [
"TAGS\n#task_categories-image-to-image #size_categories-1K<n<10K #license-cc0-1.0 #region-us \n",
"### Dataset Summary\n\n\nThe 'textures-color-normal-1k' dataset is an image dataset of 1000+ color and normal map textures in 512x512 resolution.\nThe dataset was created for use in image to image tasks.\nIt contains a combination of CC0 procedural and photoscanned PBR materials from ambientCG.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nEach data point contains a 512x512 color texture and the corresponding 512x512 normal map.",
"### Data Fields\n\n\n* 'color': the color texture as a PIL image\n* 'normal': the normal map as a PIL image",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\n'textures-color-normal-1k' was created to provide an accesible source of data for automating 3D-asset creation workflows.\nThe Dream Textures add-on is one such tool providing AI automation in Blender.\nBy training models designed for image to image tasks, this particular use-case can be more accurately automated.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nThe data was obtained from ambientCG's CC0 textures. Only the color and normal maps were included in this dataset.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nThe dataset was created by Carson Katri, with the images being provided by ambientCG.",
"### Licensing Information\n\n\nAll of the images used in this dataset are CC0.\n\n\n[N/A]",
"### Contributions\n\n\nThanks to @carson-katri for adding this dataset."
] |
dec357dbbae0a9f4b6bc67c88181671df4da6140 | This dataset contains a pre-processed version of Wikipedia suitable for semantic search.
You can load the dataset like this:
```python
from datasets import load_dataset
lang = 'en'
data = load_dataset(f"Cohere/wikipedia-22-12", lang, split='train', streaming=True)
for row in data:
print(row)
break
```
This will load the dataset in streaming mode (so that you don't need to download the whole dataset), and you can process it row by row.
The articles are split into paragraphs. Further, for each article we added statistics on the page views in 2022, as well as the number of other languages in which an article is available.
The dataset is sorted by page views, so that the most popular Wikipedia articles come first. If you read, e.g., the top 100k rows, you get quite good coverage of topics that
are broadly interesting to people.
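For example, the most popular articles can be taken off the top of the stream (a sketch; 1,000 rows here instead of 100k to keep it fast, and assuming each row carries a `title` field as in the embedding datasets):

```python
from itertools import islice
from datasets import load_dataset

data = load_dataset("Cohere/wikipedia-22-12", "en", split="train", streaming=True)

# Rows are sorted by popularity, so the first rows cover the most-viewed articles
for row in islice(data, 1000):
    print(row["title"])
```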
## Semantic Search Embeddings
We also provide versions where documents have been embedded using the [cohere multilingual embedding model](https://txt.cohere.ai/multilingual/),
e.g. [wikipedia-22-12-en-embeddings](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings) contains the paragraphs and their respective embeddings for English.
You can find the embeddings for other languages in the datasets `wikipedia-22-12-{lang}-embeddings`.
## Dataset Creation
The [XML data dumps](https://dumps.wikimedia.org/backup-index.html) from December 20th, 2022 were downloaded and processed
with [wikiextractor](https://github.com/attardi/wikiextractor) (version 2.75) and the following command:
```
python WikiExtractor.py --json -s --lists ../dumps/dewiki-20210101-pages-articles.xml.bz2 -o text_de
```
To count in how many languages an article is available, we downloaded the SQL files with language links from:
```
https://dumps.wikimedia.org/{lang}wiki/{datestr}/{filename}
```
We then processed the SQL file to read the outbound links for each article.
Pageviews were downloaded from:
```
https://dumps.wikimedia.org/other/pageviews/{year}/{year}-{month_str}/pageviews-{year}{month_str}{day_str}-{hour_str}0000.gz
```
We downloaded the pageviews for one random hour of each day. We then computed the harmonic mean of the page views. We used the harmonic mean to address cases where articles receive
a very high number of page views at, e.g., a certain point in time. We use the log scores of the page views to increase numerical stability.
Code to compute the page views was:
```python
import gzip
import sys
import math
import tqdm
import json
title_views = {}
#Score: Harmonic mean (View_Day_1 * View_Day_2 * View_Day_3)
# Add log for better numerical stability
# Add +1 to avoid log(0)
# Compare the sum, so that days without views are counted as 0 views
for filepath in tqdm.tqdm(sys.argv[1:]):
with gzip.open(filepath, "rt") as fIn:
for line in fIn:
splits = line.strip().split()
if len(splits) == 4:
                lang, title, views, _ = splits
lang = lang.lower()
if lang.endswith(".m"): #Add mobile page scores to main score
lang = lang[0:-2]
if lang.count(".") > 0:
continue
if lang not in title_views:
title_views[lang] = {}
if title not in title_views[lang]:
title_views[lang][title] = 0.0
title_views[lang][title] += math.log(int(views)+1)
#Save results
for lang in title_views:
with open(f"pageviews_summary/{lang}.json", "w") as fOut:
fOut.write(json.dumps(title_views[lang]))
```
We filter out paragraphs that start with `BULLET::::`, `Section::::`, `<templatestyles`, or `[[File:`.
Further, we also only include paragraphs with at least 100 characters (using Python's `len` function).
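A sketch of the paragraph filter described above (the exact implementation may differ):

```python
SKIP_PREFIXES = ("BULLET::::", "Section::::", "<templatestyles", "[[File:")

def keep_paragraph(paragraph: str) -> bool:
    """Return True if a paragraph should be included in the dataset."""
    if paragraph.startswith(SKIP_PREFIXES):
        return False               # markup remnants from extraction
    return len(paragraph) >= 100   # drop very short paragraphs
```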
| Cohere/wikipedia-22-12 | [
"region:us"
] | 2023-01-13T21:52:20+00:00 | {} | 2023-02-22T15:58:09+00:00 | [] | [] | TAGS
#region-us
| This dataset contains a pre-processed version from Wikipedia suitable for semantic search.
You can load the dataset like this:
This will load the dataset in a streaming mode (so that you don't need to download the whole dataset) and you can process it row-by-row.
The articles are splitted into paragraphs. Further, for each article we added statistics on the page views in 2022 as well as in how many other languages an article is available.
The dataset is sorted by page views, so that the most popular Wikipedia articles come first. So if you e.g. read the top-100k rows, you get quite a good coverage on topics that
are broadly interesting for people.
## Semantic Search Embeddings
We also provide versions where documents have been embedded using the cohere multilingual embedding model,
e.g. wikipedia-22-12-en-embeddings contains the paragraphs and their respective embeddings for English.
You can find the embeddings for other languages in the datasets 'wikipedia-22-12-{lang}-embeddings'.
## Dataset Creation
The XML data dumps from December 20th, 2022 were downloaded and processed
with wikiextractor (with Version: 2.75) and the following command:
To count in how many languages an article is available, we downloaded the SQL files with language links from:
And processed the SQL file to read for each article the outbound links.
Pageviews were downloaded from:
We downloaded for each day the pageviews for a random hour. We then computed the harmonic mean of page views. We used harmonic mean to address cases where articles receive
a very high number of page views at e.g. a certain time point. We use the log scores for the page views to increase the numerical stability.
Code to compute the page views was:
We filter out paragraphs that start with 'BULLET::::', 'Section::::', '<templatestyles', or '[[File:'.
Further, we also only include paragraphs with at least 100 characters (using Python's len function).
| [
"## Semantic Search Embeddings\n\nWe also provide versions where documents have been embedded using the cohere multilingual embedding model, \ne.g. wikipedia-22-12-en-embeddings contains the paragraphs and their respective embeddings for English.\nYou can find the embeddings for other languages in the datasets 'wikipedia-22-12-{lang}-embeddings'.",
"## Dataset Creation\nThe XML data dumps from December 20th, 2022 where downloaded and processed \nwith wikiextractor (with Version: 2.75) and the following command:\n\n\nTo count in how many languages an article is available, we downloaded the SQL files with language links from:\n\nAnd processed the SQL file to read for each article the outbound links.\n\nPageviews where downloaded from:\n\n\nWe downloaded for each day the pageviews for a random hour. We then computed the harmonic mean of page views. We used harmonic mean to address cases where articles receive\na very high number of page views at e.g. a certain time point. We use the log scores for the page views to increase the numerical stability.\n\nCode to compute the page views was:\n\n\n\nWe filter out paragraphs that start with 'BULLET::::', 'Section::::', '<templatestyles', or '[[File:'. \nFurther, we also only include paragraphs with at least 100 characters (using Python len method=."
] | [
"TAGS\n#region-us \n",
"## Semantic Search Embeddings\n\nWe also provide versions where documents have been embedded using the cohere multilingual embedding model, \ne.g. wikipedia-22-12-en-embeddings contains the paragraphs and their respective embeddings for English.\nYou can find the embeddings for other languages in the datasets 'wikipedia-22-12-{lang}-embeddings'.",
"## Dataset Creation\nThe XML data dumps from December 20th, 2022 where downloaded and processed \nwith wikiextractor (with Version: 2.75) and the following command:\n\n\nTo count in how many languages an article is available, we downloaded the SQL files with language links from:\n\nAnd processed the SQL file to read for each article the outbound links.\n\nPageviews where downloaded from:\n\n\nWe downloaded for each day the pageviews for a random hour. We then computed the harmonic mean of page views. We used harmonic mean to address cases where articles receive\na very high number of page views at e.g. a certain time point. We use the log scores for the page views to increase the numerical stability.\n\nCode to compute the page views was:\n\n\n\nWe filter out paragraphs that start with 'BULLET::::', 'Section::::', '<templatestyles', or '[[File:'. \nFurther, we also only include paragraphs with at least 100 characters (using Python len method=."
] |
0699fb7d2b88b13b296b5cbf0ea0a10b84e99b3f |
# Wikipedia (hi) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (hi)](https://hi.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview of how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We compute the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at the [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
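For illustration, embedding a single document the same way could look like this (a sketch reusing the client from the search example below; the document contents here are hypothetical):

```python
import cohere

co = cohere.Client("<<COHERE_API_KEY>>")  # your cohere API key

doc = {"title": "YouTube", "text": "YouTube is an online video sharing platform."}

# Embed the concatenation of title and text, as done for this dataset
response = co.embed(texts=[doc["title"] + " " + doc["text"]], model="multilingual-22-12")
emb = response.embeddings[0]  # one embedding vector per input text
```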
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-hi-embeddings", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-hi-embeddings", split="train", streaming=True)
for doc in docs:
docid = doc['id']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client(f"<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset(f"Cohere/wikipedia-22-12-hi-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc['emb'])
if len(docs) >= max_docs:
break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) | Cohere/wikipedia-22-12-hi-embeddings | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:hi",
"license:apache-2.0",
"region:us"
] | 2023-01-13T23:14:15+00:00 | {"annotations_creators": ["expert-generated"], "language": ["hi"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-03-22T16:53:57+00:00 | [] | [
"hi"
] | TAGS
#task_categories-text-retrieval #task_ids-document-retrieval #annotations_creators-expert-generated #multilinguality-multilingual #language-Hindi #license-apache-2.0 #region-us
|
# Wikipedia (hi) embedded with URL 'multilingual-22-12' encoder
We encoded Wikipedia (hi) using the URL 'multilingual-22-12' embedding model.
To get an overview how this dataset was created and pre-processed, have a look at Cohere/wikipedia-22-12.
## Embeddings
We compute for 'title+" "+text' the embeddings using our 'multilingual-22-12' embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at URL multilingual embedding model.
## Further languages
We provide embeddings of Wikipedia in many different languages:
ar, de, en, es, fr, hi, it, ja, ko, simple english, zh,
You can find the Wikipedia datasets without embeddings at Cohere/wikipedia-22-12.
## Loading the dataset
You can either load the dataset like this:
Or you can also stream it without downloading it before:
## Search
A full search example:
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: miracl-en-queries-22-12#performance | [
"# Wikipedia (hi) embedded with URL 'multilingual-22-12' encoder\n\nWe encoded Wikipedia (hi) using the URL 'multilingual-22-12' embedding model.\n\nTo get an overview how this dataset was created and pre-processed, have a look at Cohere/wikipedia-22-12.",
"## Embeddings\nWe compute for 'title+\" \"+text' the embeddings using our 'multilingual-22-12' embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at URL multilingual embedding model.",
"## Further languages\nWe provide embeddings of Wikipedia in many different languages:\nar, de, en, es, fr, hi, it, ja, ko, simple english, zh,\n\nYou can find the Wikipedia datasets without embeddings at Cohere/wikipedia-22-12.",
"## Loading the dataset\nYou can either load the dataset like this:\n\n\nOr you can also stream it without downloading it before:",
"## Search\nA full search example:",
"## Performance\nYou can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: miracl-en-queries-22-12#performance"
] | [
"TAGS\n#task_categories-text-retrieval #task_ids-document-retrieval #annotations_creators-expert-generated #multilinguality-multilingual #language-Hindi #license-apache-2.0 #region-us \n",
"# Wikipedia (hi) embedded with URL 'multilingual-22-12' encoder\n\nWe encoded Wikipedia (hi) using the URL 'multilingual-22-12' embedding model.\n\nTo get an overview how this dataset was created and pre-processed, have a look at Cohere/wikipedia-22-12.",
"## Embeddings\nWe compute for 'title+\" \"+text' the embeddings using our 'multilingual-22-12' embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at URL multilingual embedding model.",
"## Further languages\nWe provide embeddings of Wikipedia in many different languages:\nar, de, en, es, fr, hi, it, ja, ko, simple english, zh,\n\nYou can find the Wikipedia datasets without embeddings at Cohere/wikipedia-22-12.",
"## Loading the dataset\nYou can either load the dataset like this:\n\n\nOr you can also stream it without downloading it before:",
"## Search\nA full search example:",
"## Performance\nYou can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: miracl-en-queries-22-12#performance"
] |
4f804e9c5125f60783ac45d15ed2687c69489f07 |
# Wikipedia (simple English) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (simple English)](https://simple.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview of how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We compute the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at the [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-simple-embeddings", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-simple-embeddings", split="train", streaming=True)
for doc in docs:
docid = doc['id']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client(f"<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset(f"Cohere/wikipedia-22-12-simple-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc['emb'])
if len(docs) >= max_docs:
break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) | Cohere/wikipedia-22-12-simple-embeddings | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"multilinguality:multilingual",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-01-13T23:25:25+00:00 | {"language": ["en"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-03-22T16:56:34+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-retrieval #task_ids-document-retrieval #multilinguality-multilingual #language-English #license-apache-2.0 #region-us
|
# Wikipedia (simple English) embedded with URL 'multilingual-22-12' encoder
We encoded Wikipedia (simple English) using the URL 'multilingual-22-12' embedding model.
To get an overview how this dataset was created and pre-processed, have a look at Cohere/wikipedia-22-12.
## Embeddings
We compute for 'title+" "+text' the embeddings using our 'multilingual-22-12' embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at URL multilingual embedding model.
## Further languages
We provide embeddings of Wikipedia in many different languages:
ar, de, en, es, fr, hi, it, ja, ko, simple english, zh,
You can find the Wikipedia datasets without embeddings at Cohere/wikipedia-22-12.
## Loading the dataset
You can either load the dataset like this:
Or you can also stream it without downloading it before:
## Search
A full search example:
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: miracl-en-queries-22-12#performance | [
"# Wikipedia (simple English) embedded with URL 'multilingual-22-12' encoder\n\nWe encoded Wikipedia (simple English) using the URL 'multilingual-22-12' embedding model.\n\nTo get an overview how this dataset was created and pre-processed, have a look at Cohere/wikipedia-22-12.",
"## Embeddings\nWe compute for 'title+\" \"+text' the embeddings using our 'multilingual-22-12' embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at URL multilingual embedding model.",
"## Further languages\nWe provide embeddings of Wikipedia in many different languages:\nar, de, en, es, fr, hi, it, ja, ko, simple english, zh,\n\nYou can find the Wikipedia datasets without embeddings at Cohere/wikipedia-22-12.",
"## Loading the dataset\nYou can either load the dataset like this:\n\n\nOr you can also stream it without downloading it before:",
"## Search\nA full search example:",
"## Performance\nYou can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: miracl-en-queries-22-12#performance"
] | [
"TAGS\n#task_categories-text-retrieval #task_ids-document-retrieval #multilinguality-multilingual #language-English #license-apache-2.0 #region-us \n",
"# Wikipedia (simple English) embedded with URL 'multilingual-22-12' encoder\n\nWe encoded Wikipedia (simple English) using the URL 'multilingual-22-12' embedding model.\n\nTo get an overview how this dataset was created and pre-processed, have a look at Cohere/wikipedia-22-12.",
"## Embeddings\nWe compute for 'title+\" \"+text' the embeddings using our 'multilingual-22-12' embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at URL multilingual embedding model.",
"## Further languages\nWe provide embeddings of Wikipedia in many different languages:\nar, de, en, es, fr, hi, it, ja, ko, simple english, zh,\n\nYou can find the Wikipedia datasets without embeddings at Cohere/wikipedia-22-12.",
"## Loading the dataset\nYou can either load the dataset like this:\n\n\nOr you can also stream it without downloading it before:",
"## Search\nA full search example:",
"## Performance\nYou can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: miracl-en-queries-22-12#performance"
] |
caf814d284f0c7cdf873c1d8d091a3d3b7d9e6db |
# Wikipedia (ko) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (ko)](https://ko.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview of how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We compute the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at the [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-ko-embeddings", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-ko-embeddings", split="train", streaming=True)
for doc in docs:
docid = doc['id']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client(f"<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset(f"Cohere/wikipedia-22-12-ko-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc['emb'])
if len(docs) >= max_docs:
break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) | Cohere/wikipedia-22-12-ko-embeddings | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"multilinguality:multilingual",
"language:ko",
"license:apache-2.0",
"region:us"
] | 2023-01-13T23:51:11+00:00 | {"language": ["ko"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-03-22T16:55:35+00:00 | [] | [
"ko"
] | TAGS
#task_categories-text-retrieval #task_ids-document-retrieval #multilinguality-multilingual #language-Korean #license-apache-2.0 #region-us
|
# Wikipedia (ko) embedded with URL 'multilingual-22-12' encoder
We encoded Wikipedia (ko) using the URL 'multilingual-22-12' embedding model.
To get an overview how this dataset was created and pre-processed, have a look at Cohere/wikipedia-22-12.
## Embeddings
We compute for 'title+" "+text' the embeddings using our 'multilingual-22-12' embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at URL multilingual embedding model.
## Further languages
We provide embeddings of Wikipedia in many different languages:
ar, de, en, es, fr, hi, it, ja, ko, simple english, zh,
You can find the Wikipedia datasets without embeddings at Cohere/wikipedia-22-12.
## Loading the dataset
You can either load the dataset like this:
Or you can also stream it without downloading it before:
## Search
A full search example:
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: miracl-en-queries-22-12#performance | [
"# Wikipedia (ko) embedded with URL 'multilingual-22-12' encoder\n\nWe encoded Wikipedia (ko) using the URL 'multilingual-22-12' embedding model.\n\nTo get an overview how this dataset was created and pre-processed, have a look at Cohere/wikipedia-22-12.",
"## Embeddings\nWe compute for 'title+\" \"+text' the embeddings using our 'multilingual-22-12' embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at URL multilingual embedding model.",
"## Further languages\nWe provide embeddings of Wikipedia in many different languages:\nar, de, en, es, fr, hi, it, ja, ko, simple english, zh,\n\nYou can find the Wikipedia datasets without embeddings at Cohere/wikipedia-22-12.",
"## Loading the dataset\nYou can either load the dataset like this:\n\n\nOr you can also stream it without downloading it before:",
"## Search\nA full search example:",
"## Performance\nYou can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: miracl-en-queries-22-12#performance"
] | [
"TAGS\n#task_categories-text-retrieval #task_ids-document-retrieval #multilinguality-multilingual #language-Korean #license-apache-2.0 #region-us \n",
"# Wikipedia (ko) embedded with URL 'multilingual-22-12' encoder\n\nWe encoded Wikipedia (ko) using the URL 'multilingual-22-12' embedding model.\n\nTo get an overview how this dataset was created and pre-processed, have a look at Cohere/wikipedia-22-12.",
"## Embeddings\nWe compute for 'title+\" \"+text' the embeddings using our 'multilingual-22-12' embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at URL multilingual embedding model.",
"## Further languages\nWe provide embeddings of Wikipedia in many different languages:\nar, de, en, es, fr, hi, it, ja, ko, simple english, zh,\n\nYou can find the Wikipedia datasets without embeddings at Cohere/wikipedia-22-12.",
"## Loading the dataset\nYou can either load the dataset like this:\n\n\nOr you can also stream it without downloading it before:",
"## Search\nA full search example:",
"## Performance\nYou can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: miracl-en-queries-22-12#performance"
] |
864ed9e578765742ee3bb0ee5713090bf6a8a31a |
# Wikipedia (zh) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (zh)](https://zh.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview of how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We compute the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at the [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-zh-embeddings", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-zh-embeddings", split="train", streaming=True)
for doc in docs:
docid = doc['id']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client(f"<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset(f"Cohere/wikipedia-22-12-zh-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc['emb'])
if len(docs) >= max_docs:
break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) | Cohere/wikipedia-22-12-zh-embeddings | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"multilinguality:multilingual",
"language:zh",
"license:apache-2.0",
"region:us"
] | 2023-01-14T00:44:03+00:00 | {"language": ["zh"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-03-22T16:55:57+00:00 | [] | [
"zh"
] | TAGS
#task_categories-text-retrieval #task_ids-document-retrieval #multilinguality-multilingual #language-Chinese #license-apache-2.0 #region-us
|
# Wikipedia (zh) embedded with URL 'multilingual-22-12' encoder
We encoded Wikipedia (zh) using the URL 'multilingual-22-12' embedding model.
To get an overview how this dataset was created and pre-processed, have a look at Cohere/wikipedia-22-12.
## Embeddings
We compute for 'title+" "+text' the embeddings using our 'multilingual-22-12' embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at URL multilingual embedding model.
## Further languages
We provide embeddings of Wikipedia in many different languages:
ar, de, en, es, fr, hi, it, ja, ko, simple english, zh,
You can find the Wikipedia datasets without embeddings at Cohere/wikipedia-22-12.
## Loading the dataset
You can either load the dataset like this:
Or you can also stream it without downloading it before:
## Search
A full search example:
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: miracl-en-queries-22-12#performance | [
"# Wikipedia (zh) embedded with URL 'multilingual-22-12' encoder\n\nWe encoded Wikipedia (zh) using the URL 'multilingual-22-12' embedding model.\n\nTo get an overview how this dataset was created and pre-processed, have a look at Cohere/wikipedia-22-12.",
"## Embeddings\nWe compute for 'title+\" \"+text' the embeddings using our 'multilingual-22-12' embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at URL multilingual embedding model.",
"## Further languages\nWe provide embeddings of Wikipedia in many different languages:\nar, de, en, es, fr, hi, it, ja, ko, simple english, zh,\n\nYou can find the Wikipedia datasets without embeddings at Cohere/wikipedia-22-12.",
"## Loading the dataset\nYou can either load the dataset like this:\n\n\nOr you can also stream it without downloading it before:",
"## Search\nA full search example:",
"## Performance\nYou can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: miracl-en-queries-22-12#performance"
] | [
"TAGS\n#task_categories-text-retrieval #task_ids-document-retrieval #multilinguality-multilingual #language-Chinese #license-apache-2.0 #region-us \n",
"# Wikipedia (zh) embedded with URL 'multilingual-22-12' encoder\n\nWe encoded Wikipedia (zh) using the URL 'multilingual-22-12' embedding model.\n\nTo get an overview how this dataset was created and pre-processed, have a look at Cohere/wikipedia-22-12.",
"## Embeddings\nWe compute for 'title+\" \"+text' the embeddings using our 'multilingual-22-12' embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at URL multilingual embedding model.",
"## Further languages\nWe provide embeddings of Wikipedia in many different languages:\nar, de, en, es, fr, hi, it, ja, ko, simple english, zh,\n\nYou can find the Wikipedia datasets without embeddings at Cohere/wikipedia-22-12.",
"## Loading the dataset\nYou can either load the dataset like this:\n\n\nOr you can also stream it without downloading it before:",
"## Search\nA full search example:",
"## Performance\nYou can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: miracl-en-queries-22-12#performance"
] |
ea5f00014bd7626aa55affb07de57d519ab3309a |
# Wikipedia (ar) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (ar)](https://ar.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview of how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We compute the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at the [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-ar-embeddings", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-ar-embeddings", split="train", streaming=True)
for doc in docs:
docid = doc['id']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client(f"<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset(f"Cohere/wikipedia-22-12-ar-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc['emb'])
if len(docs) >= max_docs:
break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) | Cohere/wikipedia-22-12-ar-embeddings | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:ar",
"license:apache-2.0",
"region:us"
] | 2023-01-14T02:00:24+00:00 | {"annotations_creators": ["expert-generated"], "language": ["ar"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-03-22T16:52:28+00:00 | [] | [
"ar"
] | TAGS
#task_categories-text-retrieval #task_ids-document-retrieval #annotations_creators-expert-generated #multilinguality-multilingual #language-Arabic #license-apache-2.0 #region-us
|
# Wikipedia (ar) embedded with URL 'multilingual-22-12' encoder
We encoded Wikipedia (ar) using the URL 'multilingual-22-12' embedding model.
To get an overview how this dataset was created and pre-processed, have a look at Cohere/wikipedia-22-12.
## Embeddings
We compute for 'title+" "+text' the embeddings using our 'multilingual-22-12' embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at URL multilingual embedding model.
## Further languages
We provide embeddings of Wikipedia in many different languages:
ar, de, en, es, fr, hi, it, ja, ko, simple english, zh,
You can find the Wikipedia datasets without embeddings at Cohere/wikipedia-22-12.
## Loading the dataset
You can either load the dataset like this:
Or you can also stream it without downloading it before:
## Search
A full search example:
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: miracl-en-queries-22-12#performance | [
"# Wikipedia (ar) embedded with URL 'multilingual-22-12' encoder\n\nWe encoded Wikipedia (ar) using the URL 'multilingual-22-12' embedding model.\n\nTo get an overview how this dataset was created and pre-processed, have a look at Cohere/wikipedia-22-12.",
"## Embeddings\nWe compute for 'title+\" \"+text' the embeddings using our 'multilingual-22-12' embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at URL multilingual embedding model.",
"## Further languages\nWe provide embeddings of Wikipedia in many different languages:\nar, de, en, es, fr, hi, it, ja, ko, simple english, zh,\n\nYou can find the Wikipedia datasets without embeddings at Cohere/wikipedia-22-12.",
"## Loading the dataset\nYou can either load the dataset like this:\n\n\nOr you can also stream it without downloading it before:",
"## Search\nA full search example:",
"## Performance\nYou can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: miracl-en-queries-22-12#performance"
] | [
"TAGS\n#task_categories-text-retrieval #task_ids-document-retrieval #annotations_creators-expert-generated #multilinguality-multilingual #language-Arabic #license-apache-2.0 #region-us \n",
"# Wikipedia (ar) embedded with URL 'multilingual-22-12' encoder\n\nWe encoded Wikipedia (ar) using the URL 'multilingual-22-12' embedding model.\n\nTo get an overview how this dataset was created and pre-processed, have a look at Cohere/wikipedia-22-12.",
"## Embeddings\nWe compute for 'title+\" \"+text' the embeddings using our 'multilingual-22-12' embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at URL multilingual embedding model.",
"## Further languages\nWe provide embeddings of Wikipedia in many different languages:\nar, de, en, es, fr, hi, it, ja, ko, simple english, zh,\n\nYou can find the Wikipedia datasets without embeddings at Cohere/wikipedia-22-12.",
"## Loading the dataset\nYou can either load the dataset like this:\n\n\nOr you can also stream it without downloading it before:",
"## Search\nA full search example:",
"## Performance\nYou can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: miracl-en-queries-22-12#performance"
] |
40586a9887f2d274e10e7d365c349b69eb4a03e4 |
# Wikipedia (ja) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (ja)](https://ja.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview of how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We compute the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
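For illustration, a minimal sketch of how a passage embedding like the `emb` field can be computed with this model (the example document is made up; the `co.embed` call mirrors the search example below):

```python
import cohere

co = cohere.Client("<<COHERE_API_KEY>>")  # your API key from www.cohere.com

# Hypothetical document; real entries concatenate the Wikipedia title and passage text
doc = {"title": "YouTube", "text": "YouTubeは動画共有サービスである。"}
response = co.embed(texts=[doc["title"] + " " + doc["text"]], model="multilingual-22-12")
emb = response.embeddings[0]
```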
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset("Cohere/wikipedia-22-12-ja-embeddings", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset("Cohere/wikipedia-22-12-ja-embeddings", split="train", streaming=True)
for doc in docs:
docid = doc['id']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client("<<COHERE_API_KEY>>") # Add your Cohere API key from www.cohere.com
#Load at most 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset("Cohere/wikipedia-22-12-ja-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc['emb'])
if len(docs) >= max_docs:
break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded YouTube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) | Cohere/wikipedia-22-12-ja-embeddings | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"multilinguality:multilingual",
"language:ja",
"license:apache-2.0",
"region:us"
] | 2023-01-14T03:52:53+00:00 | {"language": ["ja"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-03-22T16:55:06+00:00 | [] | [
"ja"
] | TAGS
#task_categories-text-retrieval #task_ids-document-retrieval #multilinguality-multilingual #language-Japanese #license-apache-2.0 #region-us
|
# Wikipedia (ja) embedded with URL 'multilingual-22-12' encoder
We encoded Wikipedia (ja) using the URL 'multilingual-22-12' embedding model.
To get an overview how this dataset was created and pre-processed, have a look at Cohere/wikipedia-22-12.
## Embeddings
We compute for 'title+" "+text' the embeddings using our 'multilingual-22-12' embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at URL multilingual embedding model.
## Further languages
We provide embeddings of Wikipedia in many different languages:
ar, de, en, es, fr, hi, it, ja, ko, simple english, zh,
You can find the Wikipedia datasets without embeddings at Cohere/wikipedia-22-12.
## Loading the dataset
You can either load the dataset like this:
Or you can also stream it without downloading it before:
## Search
A full search example:
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: miracl-en-queries-22-12#performance | [
"# Wikipedia (ja) embedded with URL 'multilingual-22-12' encoder\n\nWe encoded Wikipedia (ja) using the URL 'multilingual-22-12' embedding model.\n\nTo get an overview how this dataset was created and pre-processed, have a look at Cohere/wikipedia-22-12.",
"## Embeddings\nWe compute for 'title+\" \"+text' the embeddings using our 'multilingual-22-12' embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at URL multilingual embedding model.",
"## Further languages\nWe provide embeddings of Wikipedia in many different languages:\nar, de, en, es, fr, hi, it, ja, ko, simple english, zh,\n\nYou can find the Wikipedia datasets without embeddings at Cohere/wikipedia-22-12.",
"## Loading the dataset\nYou can either load the dataset like this:\n\n\nOr you can also stream it without downloading it before:",
"## Search\nA full search example:",
"## Performance\nYou can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: miracl-en-queries-22-12#performance"
] | [
"TAGS\n#task_categories-text-retrieval #task_ids-document-retrieval #multilinguality-multilingual #language-Japanese #license-apache-2.0 #region-us \n",
"# Wikipedia (ja) embedded with URL 'multilingual-22-12' encoder\n\nWe encoded Wikipedia (ja) using the URL 'multilingual-22-12' embedding model.\n\nTo get an overview how this dataset was created and pre-processed, have a look at Cohere/wikipedia-22-12.",
"## Embeddings\nWe compute for 'title+\" \"+text' the embeddings using our 'multilingual-22-12' embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at URL multilingual embedding model.",
"## Further languages\nWe provide embeddings of Wikipedia in many different languages:\nar, de, en, es, fr, hi, it, ja, ko, simple english, zh,\n\nYou can find the Wikipedia datasets without embeddings at Cohere/wikipedia-22-12.",
"## Loading the dataset\nYou can either load the dataset like this:\n\n\nOr you can also stream it without downloading it before:",
"## Search\nA full search example:",
"## Performance\nYou can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: miracl-en-queries-22-12#performance"
] |
c6d28bb2d58ca7d0b9ebc20196c2acf47afa5270 | # Dataset Card for "bookcorpus_compact_512_shard6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | saibo/bookcorpus_compact_512_shard6_of_10 | [
"region:us"
] | 2023-01-14T04:59:41+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "concept_with_offset", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 804636647, "num_examples": 121933}], "download_size": 401996995, "dataset_size": 804636647}} | 2023-01-14T05:00:29+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "bookcorpus_compact_512_shard6"
More Information needed | [
"# Dataset Card for \"bookcorpus_compact_512_shard6\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"bookcorpus_compact_512_shard6\"\n\nMore Information needed"
] |
cf0ab57fee5fbdf26d83e5859c988a2deb62d20d | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | miguelinc/oratorialab | [
"task_categories:image-classification",
"license:cc-by-sa-4.0",
"region:us"
] | 2023-01-14T05:38:53+00:00 | {"license": "cc-by-sa-4.0", "task_categories": ["image-classification"]} | 2023-01-14T06:03:58+00:00 | [] | [] | TAGS
#task_categories-image-classification #license-cc-by-sa-4.0 #region-us
| # Dataset Card for Dataset Name
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#task_categories-image-classification #license-cc-by-sa-4.0 #region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
06b607a6df4e7453140e3d0c4cd77c0c061f91f2 |
Dataset for anime person detection.
| Dataset | Train | Test | Validate | Description |
|-------------|-------|------|----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| v1.1        | 9255  | 460  | 877      | Annotated on the Roboflow platform, including labeled data for various types of anime images (e.g., illustrations and comics). The dataset has also undergone data augmentation to enhance its diversity and quality. |
| raw         | 3085  | 460  | 877      | The same as the `v1.1` dataset, without any preprocessing or data augmentation. Suitable for direct upload to the Roboflow platform. |
| AniDet3.v3i | 16124 | 944  | 1709     | Third-party dataset, source: https://universe.roboflow.com/university-of-michigan-ann-arbor/anidet3-ai42v/dataset/3 . This dataset only contains images from anime series, so models trained directly on it will not perform well on illustrations and comics. |
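Combining subsets typically amounts to concatenating their training splits. Below is a hedged sketch, not the maintainers' recipe: the `datasets` configuration names `v1.1` and `AniDet3.v3i` are assumptions, and the repository may instead ship archives in a YOLO/COCO file layout.

```python
from datasets import load_dataset, concatenate_datasets

# Load both training subsets (configuration names are hypothetical)
v11 = load_dataset("deepghs/anime_person_detection", name="v1.1", split="train")
anidet = load_dataset("deepghs/anime_person_detection", name="AniDet3.v3i", split="train")

# Merge and shuffle for training
train_ds = concatenate_datasets([v11, anidet]).shuffle(seed=42)
```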
The best practice is to combine the `AniDet3.v3i` dataset with the `v1.1` dataset for training, as sketched above. We provide an [online demo](https://huggingface.co/spaces/deepghs/anime_object_detection). | deepghs/anime_person_detection | [
"task_categories:object-detection",
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | 2023-01-14T06:50:46+00:00 | {"license": "mit", "size_categories": ["1K<n<10K"], "task_categories": ["object-detection"], "tags": ["art"]} | 2023-05-18T15:26:42+00:00 | [] | [] | TAGS
#task_categories-object-detection #size_categories-1K<n<10K #license-mit #art #region-us
| Dataset for anime person detection.
The best practice is to combine the 'AniDet3.v3i' dataset with the 'v1.1' dataset for training. We provide an online demo.
| [] | [
"TAGS\n#task_categories-object-detection #size_categories-1K<n<10K #license-mit #art #region-us \n"
] |
7a8645307c759f22190194336b0e27c36949d1b5 |
# Wikipedia (it) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (it)](https://it.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview of how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We compute the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
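For illustration, a minimal sketch of how a passage embedding like the `emb` field can be computed with this model (the example document is made up; the `co.embed` call mirrors the search example below):

```python
import cohere

co = cohere.Client("<<COHERE_API_KEY>>")  # your API key from www.cohere.com

# Hypothetical document; real entries concatenate the Wikipedia title and passage text
doc = {"title": "YouTube", "text": "YouTube è un sito di condivisione di video."}
response = co.embed(texts=[doc["title"] + " " + doc["text"]], model="multilingual-22-12")
emb = response.embeddings[0]
```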
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset("Cohere/wikipedia-22-12-it-embeddings", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset("Cohere/wikipedia-22-12-it-embeddings", split="train", streaming=True)
for doc in docs:
docid = doc['id']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client("<<COHERE_API_KEY>>") # Add your Cohere API key from www.cohere.com
#Load at most 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset("Cohere/wikipedia-22-12-it-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc['emb'])
if len(docs) >= max_docs:
break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded YouTube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) | Cohere/wikipedia-22-12-it-embeddings | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:it",
"license:apache-2.0",
"region:us"
] | 2023-01-14T07:01:23+00:00 | {"annotations_creators": ["expert-generated"], "language": ["it"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-03-22T16:54:18+00:00 | [] | [
"it"
] | TAGS
#task_categories-text-retrieval #task_ids-document-retrieval #annotations_creators-expert-generated #multilinguality-multilingual #language-Italian #license-apache-2.0 #region-us
|
# Wikipedia (it) embedded with URL 'multilingual-22-12' encoder
We encoded Wikipedia (it) using the URL 'multilingual-22-12' embedding model.
To get an overview how this dataset was created and pre-processed, have a look at Cohere/wikipedia-22-12.
## Embeddings
We compute for 'title+" "+text' the embeddings using our 'multilingual-22-12' embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at URL multilingual embedding model.
## Further languages
We provide embeddings of Wikipedia in many different languages:
ar, de, en, es, fr, hi, it, ja, ko, simple english, zh,
You can find the Wikipedia datasets without embeddings at Cohere/wikipedia-22-12.
## Loading the dataset
You can either load the dataset like this:
Or you can also stream it without downloading it before:
## Search
A full search example:
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: miracl-en-queries-22-12#performance | [
"# Wikipedia (it) embedded with URL 'multilingual-22-12' encoder\n\nWe encoded Wikipedia (it) using the URL 'multilingual-22-12' embedding model.\n\nTo get an overview how this dataset was created and pre-processed, have a look at Cohere/wikipedia-22-12.",
"## Embeddings\nWe compute for 'title+\" \"+text' the embeddings using our 'multilingual-22-12' embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at URL multilingual embedding model.",
"## Further languages\nWe provide embeddings of Wikipedia in many different languages:\nar, de, en, es, fr, hi, it, ja, ko, simple english, zh,\n\nYou can find the Wikipedia datasets without embeddings at Cohere/wikipedia-22-12.",
"## Loading the dataset\nYou can either load the dataset like this:\n\n\nOr you can also stream it without downloading it before:",
"## Search\nA full search example:",
"## Performance\nYou can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: miracl-en-queries-22-12#performance"
] | [
"TAGS\n#task_categories-text-retrieval #task_ids-document-retrieval #annotations_creators-expert-generated #multilinguality-multilingual #language-Italian #license-apache-2.0 #region-us \n",
"# Wikipedia (it) embedded with URL 'multilingual-22-12' encoder\n\nWe encoded Wikipedia (it) using the URL 'multilingual-22-12' embedding model.\n\nTo get an overview how this dataset was created and pre-processed, have a look at Cohere/wikipedia-22-12.",
"## Embeddings\nWe compute for 'title+\" \"+text' the embeddings using our 'multilingual-22-12' embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at URL multilingual embedding model.",
"## Further languages\nWe provide embeddings of Wikipedia in many different languages:\nar, de, en, es, fr, hi, it, ja, ko, simple english, zh,\n\nYou can find the Wikipedia datasets without embeddings at Cohere/wikipedia-22-12.",
"## Loading the dataset\nYou can either load the dataset like this:\n\n\nOr you can also stream it without downloading it before:",
"## Search\nA full search example:",
"## Performance\nYou can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: miracl-en-queries-22-12#performance"
] |
dac7bbcb9c7deeb898b12859a2ea9d5b0c1ecc91 | # Dataset Card for "AToMiC-All-Images_wi-pixels"
## Dataset Description
- **Homepage:** [AToMiC homepage](https://trec-atomic.github.io/)
- **Source:** [WIT](https://github.com/google-research-datasets/wit)
- **Paper:** [WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning](https://arxiv.org/abs/2103.01913)
### Languages
The dataset covers 108 Wikipedia languages.
### Data Instances
Each instance consists of an image (with its pixel representation stored as bytes) and its associated captions.
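For illustration, a hedged sketch of inspecting one instance via streaming (the field names come from the dataset metadata; streaming avoids downloading the full ~174 GB collection):

```python
from datasets import load_dataset

# Stream to avoid downloading the whole image collection up front
ds = load_dataset("TREC-AToMiC/AToMiC-Images-v0.2", split="train", streaming=True)
example = next(iter(ds))

print(example["image_id"], example["language"])
print(example["caption_reference_description"])
image = example["image"]  # decoded PIL image backed by the stored bytes
```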
### Intended Usage
1. Image collection for Text-to-Image retrieval
2. Image--Caption Retrieval/Generation/Translation
### Licensing Information
[CC BY-SA 4.0 international license](https://creativecommons.org/licenses/by-sa/4.0/)
### Citation Information
TBA
### Acknowledgement
Thanks to:
[img2dataset](https://github.com/rom1504/img2dataset)
[Datasets](https://github.com/huggingface/datasets)
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | TREC-AToMiC/AToMiC-Images-v0.2 | [
"size_categories:100M<n<1B",
"license:cc-by-sa-4.0",
"arxiv:2103.01913",
"region:us"
] | 2023-01-14T08:12:44+00:00 | {"license": "cc-by-sa-4.0", "size_categories": ["100M<n<1B"], "dataset_info": {"features": [{"name": "image_url", "dtype": "string"}, {"name": "image_id", "dtype": "string"}, {"name": "language", "sequence": "string"}, {"name": "caption_reference_description", "sequence": "string"}, {"name": "caption_alt_text_description", "sequence": "string"}, {"name": "caption_attribution_description", "sequence": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 180043531167.75, "num_examples": 11019202}], "download_size": 174258428914, "dataset_size": 180043531167.75}} | 2023-02-14T21:29:39+00:00 | [
"2103.01913"
] | [] | TAGS
#size_categories-100M<n<1B #license-cc-by-sa-4.0 #arxiv-2103.01913 #region-us
| # Dataset Card for "AToMiC-All-Images_wi-pixels"
## Dataset Description
- Homepage: AToMiC homepage
- Source: WIT
- Paper: WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning
### Languages
The dataset contains 108 languages in Wikipedia.
### Data Instances
Each instance is an image, its representation in bytes, and its associated captions.
### Intended Usage
1. Image collection for Text-to-Image retrieval
2. Image--Caption Retrieval/Generation/Translation
### Licensing Information
CC BY-SA 4.0 international license
TBA
### Acknowledgement
Thanks to:
img2dataset
Datasets
More Information needed | [
"# Dataset Card for \"AToMiC-All-Images_wi-pixels\"",
"## Dataset Description\n\n- Homepage: AToMiC homepage\n- Source: WIT\n- Paper: WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning",
"### Languages\n\nThe dataset contains 108 languages in Wikipedia.",
"### Data Instances\n\nEach instance is an image, its representation in bytes, and its associated captions.",
"### Intended Usage\n\n1. Image collection for Text-to-Image retrieval\n2. Image--Caption Retrieval/Generation/Translation",
"### Licensing Information\n\nCC BY-SA 4.0 international license\n\n\n\nTBA",
"### Acknowledgement\n\nThanks to:\nimg2dataset\nDatasets\n\n\nMore Information needed"
] | [
"TAGS\n#size_categories-100M<n<1B #license-cc-by-sa-4.0 #arxiv-2103.01913 #region-us \n",
"# Dataset Card for \"AToMiC-All-Images_wi-pixels\"",
"## Dataset Description\n\n- Homepage: AToMiC homepage\n- Source: WIT\n- Paper: WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning",
"### Languages\n\nThe dataset contains 108 languages in Wikipedia.",
"### Data Instances\n\nEach instance is an image, its representation in bytes, and its associated captions.",
"### Intended Usage\n\n1. Image collection for Text-to-Image retrieval\n2. Image--Caption Retrieval/Generation/Translation",
"### Licensing Information\n\nCC BY-SA 4.0 international license\n\n\n\nTBA",
"### Acknowledgement\n\nThanks to:\nimg2dataset\nDatasets\n\n\nMore Information needed"
] |
578971f00bb25e8f8908d85555aa2328767dbe0f | # Dataset Card for "lesion_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | pranav456/lesion_dataset | [
"region:us"
] | 2023-01-14T09:15:59+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "AK", "1": "BCC", "2": "BKL", "3": "DF", "4": "MEL", "5": "NV", "6": "SCC", "7": "VASC"}}}}], "splits": [{"name": "train", "num_bytes": 119842603.034, "num_examples": 20262}, {"name": "test", "num_bytes": 28970560.951, "num_examples": 5069}], "download_size": 142732051, "dataset_size": 148813163.98499998}} | 2023-01-14T09:16:34+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lesion_dataset"
More Information needed | [
"# Dataset Card for \"lesion_dataset\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lesion_dataset\"\n\nMore Information needed"
] |
1cf719df8656d336007786980ce361ae2a85ebdb | # Urdu Summarization
## Dataset Overview
The Urdu Summarization dataset contains news articles in the Urdu language along with their summaries. It comprises a total of 48,071 news articles collected from the BBC Urdu website. Each article is labeled with its headline, summary, and full text.
## Dataset Details
The dataset contains the following columns:
- id (string): Unique identifier for each article
- url (string): URL for the original article
- title (string): Headline of the article
- summary (string): Summary of the article
- text (string): Full text of the article
The dataset is distributed under the MIT License.
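As a quick illustration, a hedged loading sketch (it assumes the dataset loads directly with `load_dataset` and has a `train` split; the column names match the list above):

```python
from datasets import load_dataset

ds = load_dataset("mwz/ursum", split="train")  # split name assumed

article = ds[0]
print(article["id"], article["url"])
print(article["title"])
print(article["summary"][:200])
```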
## Data Collection
The data was collected from the BBC Urdu website using web scraping techniques. The articles were published between 2003 and 2020, covering a wide range of topics such as politics, sports, technology, and entertainment.
## Data Preprocessing
The text data was preprocessed to remove any HTML tags and non-Urdu characters. The summaries were created by human annotators, who read the full text of each article and summarized its main points. The dataset was split into training, validation, and test sets containing 80%, 10%, and 10% of the data, respectively.
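A hedged sketch of reproducing an 80/10/10 split of this kind with the `datasets` library (the seed and exact procedure used by the authors are not documented):

```python
from datasets import load_dataset

ds = load_dataset("mwz/ursum", split="train")  # split name assumed

# 80% train, then split the remaining 20% evenly into validation and test
tmp = ds.train_test_split(test_size=0.2, seed=42)
heldout = tmp["test"].train_test_split(test_size=0.5, seed=42)
splits = {"train": tmp["train"], "validation": heldout["train"], "test": heldout["test"]}
```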
## Potential Use Cases
This dataset can be used for training and evaluating models for automatic summarization of Urdu text. It can also be used for research in natural language processing, machine learning, and information retrieval.
## Acknowledgements
We thank the BBC Urdu team for publishing the news articles on their website and making them publicly available. We also thank the human annotators who created the summaries for the articles.
## Relevant Papers
No papers have been published yet using this dataset.
## License
The dataset is distributed under the MIT License. | mwz/ursum | [
"task_categories:summarization",
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"language:ur",
"license:mit",
"region:us"
] | 2023-01-14T09:24:32+00:00 | {"language": ["ur"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["summarization", "text-generation", "text2text-generation"], "pretty_name": "ursum"} | 2023-05-14T12:03:37+00:00 | [] | [
"ur"
] | TAGS
#task_categories-summarization #task_categories-text-generation #task_categories-text2text-generation #size_categories-10K<n<100K #language-Urdu #license-mit #region-us
| # Urdu Summarization
## Dataset Overview
The Urdu Summarization dataset contains news articles in Urdu language along with their summaries. The dataset contains a total of 48,071 news articles collected from the BBC Urdu website. Each article is labeled with its headline, summary, and full text.
## Dataset Details
The dataset contains the following columns:
- id (string): Unique identifier for each article
- url (string): URL for the original article
- title (string): Headline of the article
- summary (string): Summary of the article
- text (string): Full text of the article
The dataset is distributed under the MIT License.
## Data Collection
The data was collected from the BBC Urdu website using web scraping techniques. The articles were published between 2003 and 2020, covering a wide range of topics such as politics, sports, technology, and entertainment.
## Data Preprocessing
The text data was preprocessed to remove any HTML tags and non-Urdu characters. The summaries were created by human annotators, who read the full text of the articles and summarized the main points. The dataset was split into training, validation, and test sets, with 80%, 10%, and 10% of the data in each set respectively.
## Potential Use Cases
This dataset can be used for training and evaluating models for automatic summarization of Urdu text. It can also be used for research in natural language processing, machine learning, and information retrieval.
## Acknowledgements
I thank the BBC Urdu team for publishing the news articles on their website and making them publicly available. We also thank the human annotators who created the summaries for the articles.
## Relevant Papers
No papers have been published yet using this dataset.
## License
The dataset is distributed under the MIT License. | [
"# Urdu Summarization",
"## Dataset Overview\nThe Urdu Summarization dataset contains news articles in Urdu language along with their summaries. The dataset contains a total of 48,071 news articles collected from the BBC Urdu website. Each article is labeled with its headline, summary, and full text.",
"## Dataset Details\nThe dataset contains the following columns:\n\n- id (string): Unique identifier for each article\n- url (string): URL for the original article\n- title (string): Headline of the article\n- summary (string): Summary of the article\n- text (string): Full text of the article\nThe dataset is distributed under the MIT License.",
"## Data Collection\nThe data was collected from the BBC Urdu website using web scraping techniques. The articles were published between 2003 and 2020, covering a wide range of topics such as politics, sports, technology, and entertainment.",
"## Data Preprocessing\nThe text data was preprocessed to remove any HTML tags and non-Urdu characters. The summaries were created by human annotators, who read the full text of the articles and summarized the main points. The dataset was split into training, validation, and test sets, with 80%, 10%, and 10% of the data in each set respectively.",
"## Potential Use Cases\nThis dataset can be used for training and evaluating models for automatic summarization of Urdu text. It can also be used for research in natural language processing, machine learning, and information retrieval.",
"## Acknowledgements\nI thank the BBC Urdu team for publishing the news articles on their website and making them publicly available. We also thank the human annotators who created the summaries for the articles.",
"## Relevant Papers\nNo papers have been published yet using this dataset.",
"## License\nThe dataset is distributed under the MIT License."
] | [
"TAGS\n#task_categories-summarization #task_categories-text-generation #task_categories-text2text-generation #size_categories-10K<n<100K #language-Urdu #license-mit #region-us \n",
"# Urdu Summarization",
"## Dataset Overview\nThe Urdu Summarization dataset contains news articles in Urdu language along with their summaries. The dataset contains a total of 48,071 news articles collected from the BBC Urdu website. Each article is labeled with its headline, summary, and full text.",
"## Dataset Details\nThe dataset contains the following columns:\n\n- id (string): Unique identifier for each article\n- url (string): URL for the original article\n- title (string): Headline of the article\n- summary (string): Summary of the article\n- text (string): Full text of the article\nThe dataset is distributed under the MIT License.",
"## Data Collection\nThe data was collected from the BBC Urdu website using web scraping techniques. The articles were published between 2003 and 2020, covering a wide range of topics such as politics, sports, technology, and entertainment.",
"## Data Preprocessing\nThe text data was preprocessed to remove any HTML tags and non-Urdu characters. The summaries were created by human annotators, who read the full text of the articles and summarized the main points. The dataset was split into training, validation, and test sets, with 80%, 10%, and 10% of the data in each set respectively.",
"## Potential Use Cases\nThis dataset can be used for training and evaluating models for automatic summarization of Urdu text. It can also be used for research in natural language processing, machine learning, and information retrieval.",
"## Acknowledgements\nI thank the BBC Urdu team for publishing the news articles on their website and making them publicly available. We also thank the human annotators who created the summaries for the articles.",
"## Relevant Papers\nNo papers have been published yet using this dataset.",
"## License\nThe dataset is distributed under the MIT License."
] |
cdd07f2970e393b42b3a1a7b5c4b24fd11737a98 |
# Wikipedia (es) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (es)](https://es.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview of how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We compute the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
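For illustration, a minimal sketch of how a passage embedding like the `emb` field can be computed with this model (the example document is made up; the `co.embed` call mirrors the search example below):

```python
import cohere

co = cohere.Client("<<COHERE_API_KEY>>")  # your API key from www.cohere.com

# Hypothetical document; real entries concatenate the Wikipedia title and passage text
doc = {"title": "YouTube", "text": "YouTube es un sitio web para compartir videos."}
response = co.embed(texts=[doc["title"] + " " + doc["text"]], model="multilingual-22-12")
emb = response.embeddings[0]
```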
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset("Cohere/wikipedia-22-12-es-embeddings", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset("Cohere/wikipedia-22-12-es-embeddings", split="train", streaming=True)
for doc in docs:
docid = doc['id']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client("<<COHERE_API_KEY>>") # Add your Cohere API key from www.cohere.com
#Load at most 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset("Cohere/wikipedia-22-12-es-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc['emb'])
if len(docs) >= max_docs:
break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded YouTube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) | Cohere/wikipedia-22-12-es-embeddings | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:es",
"license:apache-2.0",
"region:us"
] | 2023-01-14T12:01:41+00:00 | {"annotations_creators": ["expert-generated"], "language": ["es"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-03-22T16:53:23+00:00 | [] | [
"es"
] | TAGS
#task_categories-text-retrieval #task_ids-document-retrieval #annotations_creators-expert-generated #multilinguality-multilingual #language-Spanish #license-apache-2.0 #region-us
|
# Wikipedia (es) embedded with URL 'multilingual-22-12' encoder
We encoded Wikipedia (es) using the URL 'multilingual-22-12' embedding model.
To get an overview how this dataset was created and pre-processed, have a look at Cohere/wikipedia-22-12.
## Embeddings
We compute for 'title+" "+text' the embeddings using our 'multilingual-22-12' embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at URL multilingual embedding model.
## Further languages
We provide embeddings of Wikipedia in many different languages:
ar, de, en, es, fr, hi, it, ja, ko, simple english, zh,
You can find the Wikipedia datasets without embeddings at Cohere/wikipedia-22-12.
## Loading the dataset
You can either load the dataset like this:
Or you can also stream it without downloading it before:
## Search
A full search example:
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: miracl-en-queries-22-12#performance | [
"# Wikipedia (es) embedded with URL 'multilingual-22-12' encoder\n\nWe encoded Wikipedia (es) using the URL 'multilingual-22-12' embedding model.\n\nTo get an overview how this dataset was created and pre-processed, have a look at Cohere/wikipedia-22-12.",
"## Embeddings\nWe compute for 'title+\" \"+text' the embeddings using our 'multilingual-22-12' embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at URL multilingual embedding model.",
"## Further languages\nWe provide embeddings of Wikipedia in many different languages:\nar, de, en, es, fr, hi, it, ja, ko, simple english, zh,\n\nYou can find the Wikipedia datasets without embeddings at Cohere/wikipedia-22-12.",
"## Loading the dataset\nYou can either load the dataset like this:\n\n\nOr you can also stream it without downloading it before:",
"## Search\nA full search example:",
"## Performance\nYou can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: miracl-en-queries-22-12#performance"
] | [
"TAGS\n#task_categories-text-retrieval #task_ids-document-retrieval #annotations_creators-expert-generated #multilinguality-multilingual #language-Spanish #license-apache-2.0 #region-us \n",
"# Wikipedia (es) embedded with URL 'multilingual-22-12' encoder\n\nWe encoded Wikipedia (es) using the URL 'multilingual-22-12' embedding model.\n\nTo get an overview how this dataset was created and pre-processed, have a look at Cohere/wikipedia-22-12.",
"## Embeddings\nWe compute for 'title+\" \"+text' the embeddings using our 'multilingual-22-12' embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at URL multilingual embedding model.",
"## Further languages\nWe provide embeddings of Wikipedia in many different languages:\nar, de, en, es, fr, hi, it, ja, ko, simple english, zh,\n\nYou can find the Wikipedia datasets without embeddings at Cohere/wikipedia-22-12.",
"## Loading the dataset\nYou can either load the dataset like this:\n\n\nOr you can also stream it without downloading it before:",
"## Search\nA full search example:",
"## Performance\nYou can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: miracl-en-queries-22-12#performance"
] |
0c69f7a4cdd1de8d61250bc9f66e317dce589bfc | # Dataset Card for "lesion_dataset_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | pranav456/lesion_dataset_1 | [
"region:us"
] | 2023-01-14T12:07:57+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "AK", "1": "BCC", "2": "BKL", "3": "DF", "4": "MEL", "5": "NV", "6": "SCC", "7": "VASC"}}}}], "splits": [{"name": "train", "num_bytes": 105488287.136, "num_examples": 17728}, {"name": "test", "num_bytes": 29225882.496, "num_examples": 5062}, {"name": "validation", "num_bytes": 15175816.112, "num_examples": 2541}], "download_size": 142659177, "dataset_size": 149889985.744}} | 2023-01-14T12:08:21+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lesion_dataset_1"
More Information needed | [
"# Dataset Card for \"lesion_dataset_1\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lesion_dataset_1\"\n\nMore Information needed"
] |
c42ce8e80187380e25cfe7fb7a4ef049cf22bf86 |
<div align="center">
<img width="640" alt="fcakyon/crack-instance-segmentation" src="https://huggingface.co/datasets/fcakyon/crack-instance-segmentation/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['cracks-and-spalling', 'object']
```
### Number of Images
```json
{'valid': 73, 'test': 37, 'train': 323}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("fcakyon/crack-instance-segmentation", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/palmdetection-1cjxw/crack_detection_experiment/dataset/5](https://universe.roboflow.com/palmdetection-1cjxw/crack_detection_experiment/dataset/5?ref=roboflow2huggingface)
### Citation
```
@misc{ 400-img_dataset,
title = { 400 img Dataset },
type = { Open Source Dataset },
author = { Master dissertation },
    howpublished = { \url{ https://universe.roboflow.com/master-dissertation/400-img } },
url = { https://universe.roboflow.com/master-dissertation/400-img },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { dec },
note = { visited on 2023-01-14 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.com on January 14, 2023 at 10:08 AM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 433 images.
Crack and spall instances are annotated in COCO format.
The following pre-processing was applied to each image:
No image augmentation techniques were applied.
| fcakyon/crack-instance-segmentation | [
"task_categories:image-segmentation",
"roboflow",
"roboflow2huggingface",
"region:us"
] | 2023-01-14T12:18:16+00:00 | {"task_categories": ["image-segmentation"], "tags": ["roboflow", "roboflow2huggingface"]} | 2023-01-14T13:08:27+00:00 | [] | [] | TAGS
#task_categories-image-segmentation #roboflow #roboflow2huggingface #region-us
|
<div align="center">
<img width="640" alt="fcakyon/crack-instance-segmentation" src="URL
</div>
### Dataset Labels
### Number of Images
### How to Use
- Install datasets:
- Load the dataset:
### Roboflow Dataset Page
URL
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via URL on January 14, 2023 at 10:08 AM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit URL
To find over 100k other datasets and pre-trained models, visit URL
The dataset includes 433 images.
Crack-spall are annotated in COCO format.
The following pre-processing was applied to each image:
No image augmentation techniques were applied.
| [
"### Dataset Labels",
"### Number of Images",
"### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:",
"### Roboflow Dataset Page\nURL",
"### License\nCC BY 4.0",
"### Dataset Summary\nThis dataset was exported via URL on January 14, 2023 at 10:08 AM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand and search unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nFor state of the art Computer Vision training notebooks you can use with this dataset,\nvisit URL\n\nTo find over 100k other datasets and pre-trained models, visit URL\n\nThe dataset includes 433 images.\nCrack-spall are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n\nNo image augmentation techniques were applied."
] | [
"TAGS\n#task_categories-image-segmentation #roboflow #roboflow2huggingface #region-us \n",
"### Dataset Labels",
"### Number of Images",
"### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:",
"### Roboflow Dataset Page\nURL",
"### License\nCC BY 4.0",
"### Dataset Summary\nThis dataset was exported via URL on January 14, 2023 at 10:08 AM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand and search unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nFor state of the art Computer Vision training notebooks you can use with this dataset,\nvisit URL\n\nTo find over 100k other datasets and pre-trained models, visit URL\n\nThe dataset includes 433 images.\nCrack-spall are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n\nNo image augmentation techniques were applied."
] |
b004b6a7be5f2bee712f8763e42b0aeadf19d586 |
<div align="center">
<img width="640" alt="fcakyon/pokemon-classification" src="https://huggingface.co/datasets/fcakyon/pokemon-classification/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['Golbat', 'Machoke', 'Omastar', 'Diglett', 'Lapras', 'Kabuto', 'Persian', 'Weepinbell', 'Golem', 'Dodrio', 'Raichu', 'Zapdos', 'Raticate', 'Magnemite', 'Ivysaur', 'Growlithe', 'Tangela', 'Drowzee', 'Rapidash', 'Venonat', 'Pidgeot', 'Nidorino', 'Porygon', 'Lickitung', 'Rattata', 'Machop', 'Charmeleon', 'Slowbro', 'Parasect', 'Eevee', 'Starmie', 'Staryu', 'Psyduck', 'Dragonair', 'Magikarp', 'Vileplume', 'Marowak', 'Pidgeotto', 'Shellder', 'Mewtwo', 'Farfetchd', 'Kingler', 'Seel', 'Kakuna', 'Doduo', 'Electabuzz', 'Charmander', 'Rhyhorn', 'Tauros', 'Dugtrio', 'Poliwrath', 'Gengar', 'Exeggutor', 'Dewgong', 'Jigglypuff', 'Geodude', 'Kadabra', 'Nidorina', 'Sandshrew', 'Grimer', 'MrMime', 'Pidgey', 'Koffing', 'Ekans', 'Alolan Sandslash', 'Venusaur', 'Snorlax', 'Paras', 'Jynx', 'Chansey', 'Hitmonchan', 'Gastly', 'Kangaskhan', 'Oddish', 'Wigglytuff', 'Graveler', 'Arcanine', 'Clefairy', 'Articuno', 'Poliwag', 'Abra', 'Squirtle', 'Voltorb', 'Ponyta', 'Moltres', 'Nidoqueen', 'Magmar', 'Onix', 'Vulpix', 'Butterfree', 'Krabby', 'Arbok', 'Clefable', 'Goldeen', 'Magneton', 'Dratini', 'Caterpie', 'Jolteon', 'Nidoking', 'Alakazam', 'Dragonite', 'Fearow', 'Slowpoke', 'Weezing', 'Beedrill', 'Weedle', 'Cloyster', 'Vaporeon', 'Gyarados', 'Golduck', 'Machamp', 'Hitmonlee', 'Primeape', 'Cubone', 'Sandslash', 'Scyther', 'Haunter', 'Metapod', 'Tentacruel', 'Aerodactyl', 'Kabutops', 'Ninetales', 'Zubat', 'Rhydon', 'Mew', 'Pinsir', 'Ditto', 'Victreebel', 'Omanyte', 'Horsea', 'Pikachu', 'Blastoise', 'Venomoth', 'Charizard', 'Seadra', 'Muk', 'Spearow', 'Bulbasaur', 'Bellsprout', 'Electrode', 'Gloom', 'Poliwhirl', 'Flareon', 'Seaking', 'Hypno', 'Wartortle', 'Mankey', 'Tentacool', 'Exeggcute', 'Meowth']
```
### Number of Images
```json
{'train': 4869, 'test': 732, 'valid': 1390}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("fcakyon/pokemon-classification", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/robert-demo-qvail/pokedex/dataset/14](https://universe.roboflow.com/robert-demo-qvail/pokedex/dataset/14?ref=roboflow2huggingface)
### Citation
```
@misc{ pokedex_dataset,
title = { Pokedex Dataset },
type = { Open Source Dataset },
author = { Lance Zhang },
howpublished = { \url{ https://universe.roboflow.com/robert-demo-qvail/pokedex } },
url = { https://universe.roboflow.com/robert-demo-qvail/pokedex },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2022 },
month = { dec },
note = { visited on 2023-01-14 },
}
```
### License
Public Domain
### Dataset Summary
This dataset was exported via roboflow.com on December 20, 2022 at 5:34 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
It includes 6991 images.
Pokemon are annotated in folder format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 224x224 (Fit (black edges))
No image augmentation techniques were applied.
| fcakyon/pokemon-classification | [
"task_categories:image-classification",
"roboflow",
"roboflow2huggingface",
"Gaming",
"region:us"
] | 2023-01-14T12:47:57+00:00 | {"task_categories": ["image-classification"], "tags": ["roboflow", "roboflow2huggingface", "Gaming"]} | 2023-01-14T13:06:55+00:00 | [] | [] | TAGS
#task_categories-image-classification #roboflow #roboflow2huggingface #Gaming #region-us
|
<div align="center">
<img width="640" alt="fcakyon/pokemon-classification" src="URL">
</div>
### Dataset Labels
### Number of Images
### How to Use
- Install datasets:
- Load the dataset:
### Roboflow Dataset Page
URL
### License
Public Domain
### Dataset Summary
This dataset was exported via URL on December 20, 2022 at 5:34 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
It includes 6991 images.
Pokemon are annotated in folder format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 224x224 (Fit (black edges))
No image augmentation techniques were applied.
| [
"### Dataset Labels",
"### Number of Images",
"### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:",
"### Roboflow Dataset Page\nURL",
"### License\nPublic Domain",
"### Dataset Summary\nThis dataset was exported via URL on December 20, 2022 at 5:34 PM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nIt includes 6991 images.\nPokemon are annotated in folder format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n* Resize to 224x224 (Fit (black edges))\n\nNo image augmentation techniques were applied."
] | [
"TAGS\n#task_categories-image-classification #roboflow #roboflow2huggingface #Gaming #region-us \n",
"### Dataset Labels",
"### Number of Images",
"### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:",
"### Roboflow Dataset Page\nURL",
"### License\nPublic Domain",
"### Dataset Summary\nThis dataset was exported via URL on December 20, 2022 at 5:34 PM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nIt includes 6991 images.\nPokemon are annotated in folder format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n* Resize to 224x224 (Fit (black edges))\n\nNo image augmentation techniques were applied."
] |
00235ee6a2cf9f43f6576327257783bcbcb1f3e2 |
# Wikipedia (fr) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (fr)](https://fr.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview of how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We compute the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
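For reference, a document vector equivalent to the stored `emb` field would be produced like this (a sketch; the sample strings are placeholders):
```python
import cohere

co = cohere.Client("<<COHERE_API_KEY>>")  # your API key from www.cohere.com
title, text = "Paris", "Paris est la capitale de la France."
doc_emb = co.embed(texts=[title + " " + text], model="multilingual-22-12").embeddings[0]
```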
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-fr-embeddings", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-fr-embeddings", split="train", streaming=True)
for doc in docs:
docid = doc['id']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client(f"<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset(f"Cohere/wikipedia-22-12-fr-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc['emb'])
if len(docs) >= max_docs:
break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) | Cohere/wikipedia-22-12-fr-embeddings | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:fr",
"license:apache-2.0",
"region:us"
] | 2023-01-14T13:09:16+00:00 | {"annotations_creators": ["expert-generated"], "language": ["fr"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-03-22T16:53:41+00:00 | [] | [
"fr"
] | TAGS
#task_categories-text-retrieval #task_ids-document-retrieval #annotations_creators-expert-generated #multilinguality-multilingual #language-French #license-apache-2.0 #region-us
|
# Wikipedia (fr) embedded with URL 'multilingual-22-12' encoder
We encoded Wikipedia (fr) using the URL 'multilingual-22-12' embedding model.
To get an overview of how this dataset was created and pre-processed, have a look at Cohere/wikipedia-22-12.
## Embeddings
We compute the embeddings for 'title+" "+text' using our 'multilingual-22-12' embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at URL multilingual embedding model.
## Further languages
We provide embeddings of Wikipedia in many different languages:
ar, de, en, es, fr, hi, it, ja, ko, simple english, zh,
You can find the Wikipedia datasets without embeddings at Cohere/wikipedia-22-12.
## Loading the dataset
You can either load the dataset like this:
Or you can stream it without downloading it first:
## Search
A full search example:
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: miracl-en-queries-22-12#performance | [
"# Wikipedia (fr) embedded with URL 'multilingual-22-12' encoder\n\nWe encoded Wikipedia (fr) using the URL 'multilingual-22-12' embedding model.\n\nTo get an overview how this dataset was created and pre-processed, have a look at Cohere/wikipedia-22-12.",
"## Embeddings\nWe compute for 'title+\" \"+text' the embeddings using our 'multilingual-22-12' embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at URL multilingual embedding model.",
"## Further languages\nWe provide embeddings of Wikipedia in many different languages:\nar, de, en, es, fr, hi, it, ja, ko, simple english, zh,\n\nYou can find the Wikipedia datasets without embeddings at Cohere/wikipedia-22-12.",
"## Loading the dataset\nYou can either load the dataset like this:\n\n\nOr you can also stream it without downloading it before:",
"## Search\nA full search example:",
"## Performance\nYou can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: miracl-en-queries-22-12#performance"
] | [
"TAGS\n#task_categories-text-retrieval #task_ids-document-retrieval #annotations_creators-expert-generated #multilinguality-multilingual #language-French #license-apache-2.0 #region-us \n",
"# Wikipedia (fr) embedded with URL 'multilingual-22-12' encoder\n\nWe encoded Wikipedia (fr) using the URL 'multilingual-22-12' embedding model.\n\nTo get an overview how this dataset was created and pre-processed, have a look at Cohere/wikipedia-22-12.",
"## Embeddings\nWe compute for 'title+\" \"+text' the embeddings using our 'multilingual-22-12' embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at URL multilingual embedding model.",
"## Further languages\nWe provide embeddings of Wikipedia in many different languages:\nar, de, en, es, fr, hi, it, ja, ko, simple english, zh,\n\nYou can find the Wikipedia datasets without embeddings at Cohere/wikipedia-22-12.",
"## Loading the dataset\nYou can either load the dataset like this:\n\n\nOr you can also stream it without downloading it before:",
"## Search\nA full search example:",
"## Performance\nYou can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: miracl-en-queries-22-12#performance"
] |
8095c7585f73e5419fbe6ee0fc59b7871a249d78 | # AutoTrain Dataset for project: books-rating-analysis
## Dataset Description
This dataset has been automatically processed by AutoTrain for project books-rating-analysis.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"feat_Unnamed: 0": 1976,
"feat_user_id": "792500e85277fa7ada535de23e7eb4c3",
"feat_book_id": 18243288,
"feat_review_id": "7f8219233a62bde2973ddd118e8162e2",
"target": 2,
"text": "This book is kind of tricky. It is pleasingly written stylistically and it's an easy read so I cruised along on the momentum of the smooth prose and the potential of what this book could have and should have been for a while before I realized that it is hollow and aimless. \n This is a book where the extraordinary is deliberately made mundane for some reason and characters are stubbornly underdeveloped. It is as if all the drama has been removed from this story, leaving a bloodless collection of 19th industrial factoids sprinkled amidst a bunch of ciphers enduring an oddly dull series of tragedies. \n Mildly entertaining for a while but ultimately unsatisfactory.",
"feat_date_added": "Mon Apr 27 11:37:36 -0700 2015",
"feat_date_updated": "Mon May 04 08:50:42 -0700 2015",
"feat_read_at": "Mon May 04 08:50:42 -0700 2015",
"feat_started_at": "Mon Apr 27 00:00:00 -0700 2015",
"feat_n_votes": 0,
"feat_n_comments": 0
},
{
"feat_Unnamed: 0": 523,
"feat_user_id": "01ec1a320ffded6b2dd47833f2c8e4fb",
"feat_book_id": 18220354,
"feat_review_id": "c19543fab6b2386df92c1a9ba3cf6e6b",
"target": 4,
"text": "4.5 stars!! I am always intrigued to read a novel written from a male POV. I am equally fascinated by pen names, and even when the writer professes to be one gender or the other (or leaves it open to the imagination such as BG Harlen), I still wonder at the back of my mind whether the author is a male or female. Do some female writers have a decidedly masculine POV? Yes, there are several that come to mind. Do some male writers have a feminine \"flavor\" to their writing? It seems so. \n And so we come to the fascinating Thou Shalt Not. I loved Luke's story, as well as JJ Rossum's writing style, and don't want to be pigeon-holed into thinking that the author is male or female. That's just me. Either way, it's a very sexy and engaging book with plenty of steamy scenes to satisfy even the most jaded erotic romance reader (such as myself). The story carries some very weighty themes (domestic violence, adultery, the nature of beauty), but the book is very fast-paced and satisfying. Will Luke keep himself out of trouble with April? Will he learn to really love someone again? No spoilers here, but the author answers these questions while exploring what qualities are really important and what makes someone worthy of love. \n This book has a very interesting conclusion that some readers will love, and some might find a little challenging. I loved it and can't wait to read more from this author. \n *ARC provided by the author in exchange for an honest review.",
"feat_date_added": "Mon Jul 29 16:04:04 -0700 2013",
"feat_date_updated": "Thu Dec 12 21:43:54 -0800 2013",
"feat_read_at": "Fri Dec 06 00:00:00 -0800 2013",
"feat_started_at": "Thu Dec 05 00:00:00 -0800 2013",
"feat_n_votes": 10,
"feat_n_comments": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"feat_Unnamed: 0": "Value(dtype='int64', id=None)",
"feat_user_id": "Value(dtype='string', id=None)",
"feat_book_id": "Value(dtype='int64', id=None)",
"feat_review_id": "Value(dtype='string', id=None)",
"target": "ClassLabel(names=['0', '1', '2', '3', '4', '5'], id=None)",
"text": "Value(dtype='string', id=None)",
"feat_date_added": "Value(dtype='string', id=None)",
"feat_date_updated": "Value(dtype='string', id=None)",
"feat_read_at": "Value(dtype='string', id=None)",
"feat_started_at": "Value(dtype='string', id=None)",
"feat_n_votes": "Value(dtype='int64', id=None)",
"feat_n_comments": "Value(dtype='int64', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 2397 |
| valid | 603 |
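A minimal loading sketch (assuming the repo loads with the default config and exposes the splits above):

```python
from datasets import load_dataset

ds = load_dataset("LewisShanghai/autotrain-data-books-rating-analysis")
sample = ds["train"][0]
print(sample["target"], sample["text"][:100])  # rating class and review excerpt
```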
| LewisShanghai/autotrain-data-books-rating-analysis | [
"task_categories:text-classification",
"language:en",
"region:us"
] | 2023-01-14T13:27:44+00:00 | {"language": ["en"], "task_categories": ["text-classification"]} | 2023-01-14T14:31:43+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #language-English #region-us
| AutoTrain Dataset for project: books-rating-analysis
====================================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project books-rating-analysis.
### Languages
The BCP-47 code for the dataset's language is en.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#task_categories-text-classification #language-English #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
5c5caf5f55c2eccc555f62fda2b111c408104e0a |
# Wikipedia (de) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (de)](https://de.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview of how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We compute the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-de-embeddings", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-de-embeddings", split="train", streaming=True)
for doc in docs:
docid = doc['id']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client(f"<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset(f"Cohere/wikipedia-22-12-de-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc['emb'])
if len(docs) >= max_docs:
break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) | Cohere/wikipedia-22-12-de-embeddings | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:de",
"license:apache-2.0",
"region:us"
] | 2023-01-14T13:41:14+00:00 | {"annotations_creators": ["expert-generated"], "language": ["de"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-03-22T16:52:49+00:00 | [] | [
"de"
] | TAGS
#task_categories-text-retrieval #task_ids-document-retrieval #annotations_creators-expert-generated #multilinguality-multilingual #language-German #license-apache-2.0 #region-us
|
# Wikipedia (de) embedded with URL 'multilingual-22-12' encoder
We encoded Wikipedia (de) using the URL 'multilingual-22-12' embedding model.
To get an overview of how this dataset was created and pre-processed, have a look at Cohere/wikipedia-22-12.
## Embeddings
We compute the embeddings for 'title+" "+text' using our 'multilingual-22-12' embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at URL multilingual embedding model.
## Further languages
We provide embeddings of Wikipedia in many different languages:
ar, de, en, es, fr, hi, it, ja, ko, simple english, zh,
You can find the Wikipedia datasets without embeddings at Cohere/wikipedia-22-12.
## Loading the dataset
You can either load the dataset like this:
Or you can stream it without downloading it first:
## Search
A full search example:
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: miracl-en-queries-22-12#performance | [
"# Wikipedia (de) embedded with URL 'multilingual-22-12' encoder\n\nWe encoded Wikipedia (de) using the URL 'multilingual-22-12' embedding model.\n\nTo get an overview how this dataset was created and pre-processed, have a look at Cohere/wikipedia-22-12.",
"## Embeddings\nWe compute for 'title+\" \"+text' the embeddings using our 'multilingual-22-12' embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at URL multilingual embedding model.",
"## Further languages\nWe provide embeddings of Wikipedia in many different languages:\nar, de, en, es, fr, hi, it, ja, ko, simple english, zh,\n\nYou can find the Wikipedia datasets without embeddings at Cohere/wikipedia-22-12.",
"## Loading the dataset\nYou can either load the dataset like this:\n\n\nOr you can also stream it without downloading it before:",
"## Search\nA full search example:",
"## Performance\nYou can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: miracl-en-queries-22-12#performance"
] | [
"TAGS\n#task_categories-text-retrieval #task_ids-document-retrieval #annotations_creators-expert-generated #multilinguality-multilingual #language-German #license-apache-2.0 #region-us \n",
"# Wikipedia (de) embedded with URL 'multilingual-22-12' encoder\n\nWe encoded Wikipedia (de) using the URL 'multilingual-22-12' embedding model.\n\nTo get an overview how this dataset was created and pre-processed, have a look at Cohere/wikipedia-22-12.",
"## Embeddings\nWe compute for 'title+\" \"+text' the embeddings using our 'multilingual-22-12' embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at URL multilingual embedding model.",
"## Further languages\nWe provide embeddings of Wikipedia in many different languages:\nar, de, en, es, fr, hi, it, ja, ko, simple english, zh,\n\nYou can find the Wikipedia datasets without embeddings at Cohere/wikipedia-22-12.",
"## Loading the dataset\nYou can either load the dataset like this:\n\n\nOr you can also stream it without downloading it before:",
"## Search\nA full search example:",
"## Performance\nYou can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: miracl-en-queries-22-12#performance"
] |
7e99c707e2a35bf9926057f74f0e07d5c3df54dd | # Dataset Card for "Patents_Green_Plastics"
number of rows: 11,196
 features: [abstract, label]
label: 0, 1
The dataset contains patent abstracts that are labeled as 1 (="Green Plastics") and 0 (="Not Green Plastics").
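A minimal loading example (the `abstract` and `label` fields follow the schema in this card's metadata):

```python
from datasets import load_dataset

ds = load_dataset("cwinkler/patents_green_plastics", split="train")
print(ds[0]["label"], ds[0]["abstract"][:120])  # 1 = "Green Plastics", 0 = "Not Green Plastics"
```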
# Dataset Creation
The [BIGPATENT](https://huggingface.co/datasets/big_patent) dataset is the source for this dataset.
In a first step, abstracts of BIGPATENT were filtered by the terms "plastics" and "polymer". The resulting "Plastics" dataset contained 64,372 samples.
In a second step, the 64,372 samples were filtered by terms that define "green plastics".
"Green Plastics" are defined by the list of terms:
"degrada", "recycl", "bio", "compost", "bact", "waste recovery", "zero waste", "sustainab", "Bio-Based", "Bio-Degradable", "Renewable", "Green Plastics", "Renewable", "Degradable", "Compostable", "Bio-resorbable", "Bio-soluble", "Cellulose", "Biodegradable","Mycelium", "Recyclability", "Degradability", "Bio-Polymer", "reuse", "reusable", "reusing", "Degradation", "Multiple Use", "Bioplastic", "Polyhydroxyalkanoates", "PHA", "Polylactide", "PLA", "Polyglycolide", "PGA"
(some terms might repeat)
The group of "Green Plastics" containing 5.598 rows was labeled as 1.
An equal amount of samples (=5.598 rows) was randomly chosen from the "Plastics" dataset, defined as "Not Green Plastics" and labeled as 0.
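A sketch of this curation flow, including the merge described next (case-insensitive substring matching and the BIGPATENT config are assumptions, and the term list is abbreviated):

```python
from datasets import load_dataset, concatenate_datasets

# abbreviated term list; the card gives the full set of "green" terms
GREEN_TERMS = ["degrada", "recycl", "bio", "compost", "bact", "sustainab", "pla", "pha"]

def is_plastics(x):
    a = x["abstract"].lower()
    return "plastics" in a or "polymer" in a

def is_green(x):
    a = x["abstract"].lower()
    return any(t in a for t in GREEN_TERMS)

big = load_dataset("big_patent", "all", split="train")   # source corpus
plastics = big.filter(is_plastics)                       # ~64,372 samples
green = plastics.filter(is_green).map(lambda x: {"label": 1})
not_green = (plastics.filter(lambda x: not is_green(x))
             .shuffle(seed=42)
             .select(range(green.num_rows))              # equal-sized negative group
             .map(lambda x: {"label": 0}))
merged = concatenate_datasets([green, not_green])        # merge the two groups
```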
Both groups ("Green Plastics" and "Not Green Plastics") were merged together. | cwinkler/patents_green_plastics | [
"size_categories:10K<n<100K",
"language:en",
"region:us"
] | 2023-01-14T14:25:09+00:00 | {"language": ["en"], "size_categories": ["10K<n<100K"], "dataset_info": {"features": [{"name": "abstract", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 8088461, "num_examples": 11196}], "download_size": 4025753, "dataset_size": 8088461}} | 2023-01-16T09:50:06+00:00 | [] | [
"en"
] | TAGS
#size_categories-10K<n<100K #language-English #region-us
| # Dataset Card for "Patents_Green_Plastics"
number of rows: 11,196
 features: [abstract, label]
label: 0, 1
The dataset contains patent abstracts that are labeled as 1 (="Green Plastics") and 0 (="Not Green Plastics").
# Dataset Creation
The BIGPATENT dataset is the source for this dataset.
In a first step, abstracts of BIGPATENT were filtered by the terms "plastics" and "polymer". The resulting "Plastics" dataset contained 64,372 samples.
In a second step, the 64,372 samples were filtered by terms that define "green plastics".
"Green Plastics" are defined by the list of terms:
"degrada", "recycl", "bio", "compost", "bact", "waste recovery", "zero waste", "sustainab", "Bio-Based", "Bio-Degradable", "Renewable", "Green Plastics", "Renewable", "Degradable", "Compostable", "Bio-resorbable", "Bio-soluble", "Cellulose", "Biodegradable","Mycelium", "Recyclability", "Degradability", "Bio-Polymer", "reuse", "reusable", "reusing", "Degradation", "Multiple Use", "Bioplastic", "Polyhydroxyalkanoates", "PHA", "Polylactide", "PLA", "Polyglycolide", "PGA"
(some terms might repeat)
The group of "Green Plastics" containing 5.598 rows was labeled as 1.
An equal amount of samples (=5.598 rows) was randomly chosen from the "Plastics" dataset, defined as "Not Green Plastics" and labeled as 0.
Both groups ("Green Plastics" and "Not Green Plastics") were merged together. | [
"# Dataset Card for \"Patents_Green_Plastics\"\n\n number of rows: 11.196\n features: [title, label]\n label: 0, 1\n\nThe dataset contains patent abstracts that are labeled as 1 (=\"Green Plastics\") and 0 (=\"Not Green Plastics\").",
"# Dataset Creation\n\nThe BIGPATENT dataset is the source for this dataset.\n\nIn a first step, abstracts of BIGPATENT were filtered by the terms \"plastics\" and \"polymer\". The resulting \"Plastics\" dataset contained 64.372 samples.\n\nIn a second step, the 64.372 samples were filtered by terms which define \"green plastics\". \n\n\"Green Plastics\" are defined by the list of terms: \n\"degrada\", \"recycl\", \"bio\", \"compost\", \"bact\", \"waste recovery\", \"zero waste\", \"sustainab\", \"Bio-Based\", \"Bio-Degradable\", \"Renewable\", \"Green Plastics\", \"Renewable\", \"Degradable\", \"Compostable\", \"Bio-resorbable\", \"Bio-soluble\", \"Cellulose\", \"Biodegradable\",\"Mycelium\", \"Recyclability\", \"Degradability\", \"Bio-Polymer\", \"reuse\", \"reusable\", \"reusing\", \"Degradation\", \"Multiple Use\", \"Bioplastic\", \"Polyhydroxyalkanoates\", \"PHA\", \"Polylactide\", \"PLA\", \"Polyglycolide\", \"PGA\"\n(some terms might repeat)\n\nThe group of \"Green Plastics\" containing 5.598 rows was labeled as 1. \n\nAn equal amount of samples (=5.598 rows) was randomly chosen from the \"Plastics\" dataset, defined as \"Not Green Plastics\" and labeled as 0. \n\nBoth groups (\"Green Plastics\" and \"Not Green Plastics\") were merged together."
] | [
"TAGS\n#size_categories-10K<n<100K #language-English #region-us \n",
"# Dataset Card for \"Patents_Green_Plastics\"\n\n number of rows: 11.196\n features: [title, label]\n label: 0, 1\n\nThe dataset contains patent abstracts that are labeled as 1 (=\"Green Plastics\") and 0 (=\"Not Green Plastics\").",
"# Dataset Creation\n\nThe BIGPATENT dataset is the source for this dataset.\n\nIn a first step, abstracts of BIGPATENT were filtered by the terms \"plastics\" and \"polymer\". The resulting \"Plastics\" dataset contained 64.372 samples.\n\nIn a second step, the 64.372 samples were filtered by terms which define \"green plastics\". \n\n\"Green Plastics\" are defined by the list of terms: \n\"degrada\", \"recycl\", \"bio\", \"compost\", \"bact\", \"waste recovery\", \"zero waste\", \"sustainab\", \"Bio-Based\", \"Bio-Degradable\", \"Renewable\", \"Green Plastics\", \"Renewable\", \"Degradable\", \"Compostable\", \"Bio-resorbable\", \"Bio-soluble\", \"Cellulose\", \"Biodegradable\",\"Mycelium\", \"Recyclability\", \"Degradability\", \"Bio-Polymer\", \"reuse\", \"reusable\", \"reusing\", \"Degradation\", \"Multiple Use\", \"Bioplastic\", \"Polyhydroxyalkanoates\", \"PHA\", \"Polylactide\", \"PLA\", \"Polyglycolide\", \"PGA\"\n(some terms might repeat)\n\nThe group of \"Green Plastics\" containing 5.598 rows was labeled as 1. \n\nAn equal amount of samples (=5.598 rows) was randomly chosen from the \"Plastics\" dataset, defined as \"Not Green Plastics\" and labeled as 0. \n\nBoth groups (\"Green Plastics\" and \"Not Green Plastics\") were merged together."
] |
d7eb782e625634e2d5f086e74d3724d10984209c | # Dataset Card for "PickaPic-images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/PickaPic-images | [
"region:us"
] | 2023-01-14T14:40:41+00:00 | {"dataset_info": {"features": [{"name": "image_id", "dtype": "int64"}, {"name": "created_at", "dtype": "timestamp[ns]"}, {"name": "image_uid", "dtype": "string"}, {"name": "user_id", "dtype": "int64"}, {"name": "prompt", "dtype": "string"}, {"name": "negative_prompt", "dtype": "string"}, {"name": "seed", "dtype": "int64"}, {"name": "gs", "dtype": "float64"}, {"name": "steps", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "num_generated", "dtype": "int64"}, {"name": "scheduler_cls", "dtype": "string"}, {"name": "model_id", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 70620168, "num_examples": 109356}], "download_size": 12059565, "dataset_size": 70620168}} | 2023-02-05T11:27:36+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "PickaPic-images"
More Information needed | [
"# Dataset Card for \"PickaPic-images\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"PickaPic-images\"\n\nMore Information needed"
] |
6627c83e298369e8d4fe25ed6ae7afd75ba978e3 | # Dataset Card for "PickaPic-rankings"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/PickaPic-rankings | [
"region:us"
] | 2023-01-14T14:45:16+00:00 | {"dataset_info": {"features": [{"name": "ranking_id", "dtype": "int64"}, {"name": "created_at", "dtype": "timestamp[ns]"}, {"name": "user_id", "dtype": "int64"}, {"name": "image_1_uid", "dtype": "string"}, {"name": "image_2_uid", "dtype": "string"}, {"name": "image_3_uid", "dtype": "string"}, {"name": "image_4_uid", "dtype": "string"}, {"name": "best_image_uid", "dtype": "string"}, {"name": "prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7758101, "num_examples": 25355}], "download_size": 3973871, "dataset_size": 7758101}} | 2023-02-05T11:26:22+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "PickaPic-rankings"
More Information needed | [
"# Dataset Card for \"PickaPic-rankings\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"PickaPic-rankings\"\n\nMore Information needed"
] |
bf0b97abe1dc52fad9e9852045ad5186de7ce459 | # Dataset Card for "PickaPic-downloads"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/PickaPic-downloads | [
"region:us"
] | 2023-01-14T14:54:01+00:00 | {"dataset_info": {"features": [{"name": "download_id", "dtype": "int64"}, {"name": "created_at", "dtype": "timestamp[ns]"}, {"name": "user_id", "dtype": "int64"}, {"name": "image_uid", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 734763, "num_examples": 2512}], "download_size": 299901, "dataset_size": 734763}} | 2023-02-05T11:26:41+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "PickaPic-downloads"
More Information needed | [
"# Dataset Card for \"PickaPic-downloads\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"PickaPic-downloads\"\n\nMore Information needed"
] |
2341cea6d281fd00f95eb7a94b1c2cf19a5fef78 | # Dataset Card for "mec-punctuation-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tiagoblima/punctuation-mec-bert | [
"region:us"
] | 2023-01-14T15:03:34+00:00 | {"dataset_info": {"features": [{"name": "tag", "dtype": "string"}, {"name": "sent_id", "dtype": "int64"}, {"name": "text_id", "dtype": "int64"}, {"name": "sent_text", "dtype": "string"}, {"name": "tokens", "sequence": "string"}, {"name": "labels", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 1075373, "num_examples": 2168}], "download_size": 313037, "dataset_size": 1075373}} | 2023-02-22T23:43:57+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "mec-punctuation-v2"
More Information needed | [
"# Dataset Card for \"mec-punctuation-v2\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"mec-punctuation-v2\"\n\nMore Information needed"
] |
42017367613a0984f2c66415150232aa107aff8f |
Trained on 29 N/SFW Yor Forger images, but don't worry! The SFW outputs will work unexpectedly well! | SatyamSSJ10/YorForger | [
"task_categories:image-to-text",
"size_categories:n<1K",
"license:openrail",
"region:us"
] | 2023-01-14T15:04:07+00:00 | {"license": "openrail", "size_categories": ["n<1K"], "task_categories": ["image-to-text"], "pretty_name": "YorForger"} | 2023-01-14T15:11:06+00:00 | [] | [] | TAGS
#task_categories-image-to-text #size_categories-n<1K #license-openrail #region-us
|
Trained on 29 N/SFW Yor Forger images, but don't worry! The SFW outputs will work unexpectedly well! | [] | [
"TAGS\n#task_categories-image-to-text #size_categories-n<1K #license-openrail #region-us \n"
] |
d2a400d6b9333941ba7633f1726fbe862b63691c | # Dataset Card for "biggest_ideas_metadata"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | bejaeger/biggest_ideas_metadata | [
"region:us"
] | 2023-01-14T15:49:16+00:00 | {"dataset_info": {"features": [{"name": "videoId", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "channelId", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "publishedAt", "dtype": "string"}, {"name": "likes", "dtype": "string"}, {"name": "views", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 58734, "num_examples": 48}], "download_size": 25139, "dataset_size": 58734}} | 2023-01-14T15:49:26+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "biggest_ideas_metadata"
More Information needed | [
"# Dataset Card for \"biggest_ideas_metadata\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"biggest_ideas_metadata\"\n\nMore Information needed"
] |
683e03dc302a4ea2c583457e0451f934a358ba7d | # Dataset Card for "biggest_ideas_transcriptions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | bejaeger/biggest_ideas_transcriptions | [
"region:us"
] | 2023-01-14T17:26:21+00:00 | {"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "published", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "videoId", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "start", "dtype": "float64"}, {"name": "end", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 8704046, "num_examples": 32983}], "download_size": 2443020, "dataset_size": 8704046}} | 2023-02-09T05:45:17+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "biggest_ideas_transcriptions"
More Information needed | [
"# Dataset Card for \"biggest_ideas_transcriptions\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"biggest_ideas_transcriptions\"\n\nMore Information needed"
] |
fa2b2715172a3422e3fb8cdb79902d35ec416aec | # Dataset Card for "cartoon-blip-captions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | keron5671/cartoon-blip-captions | [
"region:us"
] | 2023-01-14T18:35:04+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 722303.0, "num_examples": 17}], "download_size": 717339, "dataset_size": 722303.0}} | 2023-01-14T18:35:07+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "cartoon-blip-captions"
More Information needed | [
"# Dataset Card for \"cartoon-blip-captions\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"cartoon-blip-captions\"\n\nMore Information needed"
] |
bc94bd1238bbc0d02471ad346b2457b441643e81 | # Dataset Card for "dreambooth-hackathon-images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tema7707/dreambooth-hackathon-images | [
"region:us"
] | 2023-01-14T19:38:45+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 14147658.0, "num_examples": 50}], "download_size": 0, "dataset_size": 14147658.0}} | 2023-01-14T21:09:51+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "dreambooth-hackathon-images"
More Information needed | [
"# Dataset Card for \"dreambooth-hackathon-images\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"dreambooth-hackathon-images\"\n\nMore Information needed"
] |
85c2eca83d4b9dcecc043c23748cb8c1047f683f |
# Wikipedia (en) embedded with cohere.ai `multilingual-22-12` encoder
We encoded [Wikipedia (en)](https://en.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
To get an overview of how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Embeddings
We compute the embeddings for `title+" "+text` using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/).
## Further languages
We provide embeddings of Wikipedia in many different languages:
[ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings),
You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12).
## Loading the dataset
You can either load the dataset like this:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-en-embeddings", split="train")
```
Or you can stream it without downloading it first:
```python
from datasets import load_dataset
docs = load_dataset(f"Cohere/wikipedia-22-12-en-embeddings", split="train", streaming=True)
for doc in docs:
docid = doc['id']
title = doc['title']
text = doc['text']
emb = doc['emb']
```
## Search
A full search example:
```python
#Run: pip install cohere datasets
from datasets import load_dataset
import torch
import cohere
co = cohere.Client(f"<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com
#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset(f"Cohere/wikipedia-22-12-en-embeddings", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
docs.append(doc)
doc_embeddings.append(doc['emb'])
if len(docs) >= max_docs:
break
doc_embeddings = torch.tensor(doc_embeddings)
query = 'Who founded Youtube'
response = co.embed(texts=[query], model='multilingual-22-12')
query_embedding = response.embeddings
query_embedding = torch.tensor(query_embedding)
# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)
# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
print(docs[doc_id]['title'])
print(docs[doc_id]['text'], "\n")
```
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance) | Cohere/wikipedia-22-12-en-embeddings | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-01-14T20:36:11+00:00 | {"annotations_creators": ["expert-generated"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": [], "source_datasets": [], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "tags": []} | 2023-03-22T16:51:57+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-retrieval #task_ids-document-retrieval #annotations_creators-expert-generated #multilinguality-multilingual #language-English #license-apache-2.0 #region-us
|
# Wikipedia (en) embedded with URL 'multilingual-22-12' encoder
We encoded Wikipedia (en) using the URL 'multilingual-22-12' embedding model.
To get an overview of how this dataset was created and pre-processed, have a look at Cohere/wikipedia-22-12.
## Embeddings
We compute the embeddings for 'title+" "+text' using our 'multilingual-22-12' embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at URL multilingual embedding model.
## Further languages
We provide embeddings of Wikipedia in many different languages:
ar, de, en, es, fr, hi, it, ja, ko, simple english, zh,
You can find the Wikipedia datasets without embeddings at Cohere/wikipedia-22-12.
## Loading the dataset
You can either load the dataset like this:
Or you can stream it without downloading it first:
## Search
A full search example:
## Performance
You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: miracl-en-queries-22-12#performance | [
"# Wikipedia (en) embedded with URL 'multilingual-22-12' encoder\n\nWe encoded Wikipedia (en) using the URL 'multilingual-22-12' embedding model.\n\nTo get an overview how this dataset was created and pre-processed, have a look at Cohere/wikipedia-22-12.",
"## Embeddings\nWe compute for 'title+\" \"+text' the embeddings using our 'multilingual-22-12' embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at URL multilingual embedding model.",
"## Further languages\nWe provide embeddings of Wikipedia in many different languages:\nar, de, en, es, fr, hi, it, ja, ko, simple english, zh,\n\nYou can find the Wikipedia datasets without embeddings at Cohere/wikipedia-22-12.",
"## Loading the dataset\nYou can either load the dataset like this:\n\n\nOr you can also stream it without downloading it before:",
"## Search\nA full search example:",
"## Performance\nYou can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: miracl-en-queries-22-12#performance"
] | [
"TAGS\n#task_categories-text-retrieval #task_ids-document-retrieval #annotations_creators-expert-generated #multilinguality-multilingual #language-English #license-apache-2.0 #region-us \n",
"# Wikipedia (en) embedded with URL 'multilingual-22-12' encoder\n\nWe encoded Wikipedia (en) using the URL 'multilingual-22-12' embedding model.\n\nTo get an overview how this dataset was created and pre-processed, have a look at Cohere/wikipedia-22-12.",
"## Embeddings\nWe compute for 'title+\" \"+text' the embeddings using our 'multilingual-22-12' embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at URL multilingual embedding model.",
"## Further languages\nWe provide embeddings of Wikipedia in many different languages:\nar, de, en, es, fr, hi, it, ja, ko, simple english, zh,\n\nYou can find the Wikipedia datasets without embeddings at Cohere/wikipedia-22-12.",
"## Loading the dataset\nYou can either load the dataset like this:\n\n\nOr you can also stream it without downloading it before:",
"## Search\nA full search example:",
"## Performance\nYou can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: miracl-en-queries-22-12#performance"
] |
10dbc09876db4ee50a9e54051425ee343b1ae5c4 | # Dataset Card for "raven"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jkwiatkowski/raven | [
"region:us"
] | 2023-01-14T21:25:46+00:00 | {"dataset_info": {"features": [{"name": "inputs", "dtype": {"array3_d": {"shape": [16, 160, 160], "dtype": "uint8"}}}, {"name": "target", "dtype": {"array2_d": {"shape": [16, 113], "dtype": "int8"}}}, {"name": "index", "dtype": "uint8"}], "splits": [{"name": "train", "num_bytes": 17714970000, "num_examples": 42000}, {"name": "val", "num_bytes": 5904990000, "num_examples": 14000}, {"name": "test", "num_bytes": 5904990000, "num_examples": 14000}], "download_size": 1225465267, "dataset_size": 29524950000}} | 2023-01-14T21:40:08+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "raven"
More Information needed | [
"# Dataset Card for \"raven\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"raven\"\n\nMore Information needed"
] |
c574708ad8844fdd043c6c30917b6da8699f0a89 | # Dataset Card for "thaigov-radio-audio"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | napatswift/thaigov-radio-audio | [
"region:us"
] | 2023-01-15T05:02:59+00:00 | {"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 828772851.0, "num_examples": 426}], "download_size": 824527615, "dataset_size": 828772851.0}} | 2023-01-15T05:05:18+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "thaigov-radio-audio"
More Information needed | [
"# Dataset Card for \"thaigov-radio-audio\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"thaigov-radio-audio\"\n\nMore Information needed"
] |
1c1aa4ed8622db18916d912afaaaf60a8dca9775 | # Dataset Card for "copy_dataset_competitors"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | sanjin7/copy_dataset_competitors | [
"region:us"
] | 2023-01-15T05:49:49+00:00 | {"dataset_info": {"features": [{"name": "shop_id", "dtype": "int64"}, {"name": "ad_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 691250, "num_examples": 2884}], "download_size": 421475, "dataset_size": 691250}} | 2023-01-16T16:28:43+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "copy_dataset_competitors"
More Information needed | [
"# Dataset Card for \"copy_dataset_competitors\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"copy_dataset_competitors\"\n\nMore Information needed"
] |
5d7f1aaf95bf2599fecc65def3461765ad9e9200 | # Dataset Card for "copy_dataset_untrimmed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | sanjin7/copy_dataset_untrimmed | [
"region:us"
] | 2023-01-15T06:00:58+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 28610253, "num_examples": 84352}], "download_size": 0, "dataset_size": 28610253}} | 2023-01-16T16:31:33+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "copy_dataset_untrimmed"
More Information needed | [
"# Dataset Card for \"copy_dataset_untrimmed\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"copy_dataset_untrimmed\"\n\nMore Information needed"
] |
7d30d46f6097b46067c7f316457eccd8cf834054 |
<div style='background: #ffeec0; border: 1px solid #ffd86d; padding:1em; border-radius:3px;'>
<h3 style='margin:0'>Outdated!</h3>
<p style='margin:0'>This dataset has been superseded by:</p>
<p style='margin:0'><a style="font-size: 2em;" href='https://huggingface.co/datasets/hearmeneigh/e621-rising-v3-curated'>E621 Rising V3 Curated Image Dataset</a></p>
</div>
**Warning: THIS dataset is NOT suitable for use by minors. The dataset contains X-rated/NSFW content.**
# E621 Rising: Curated Image Dataset v1
**441,623** images (~200GB) downloaded from `e621.net` with [tags](https://huggingface.co/datasets/hearmeneigh/e621-rising-v1-curated/raw/main/meta/tag-counts.json).
This is a curated dataset, picked from the E621 Rising: Raw Image Dataset v1 [available here](https://huggingface.co/datasets/hearmeneigh/e621-rising-v1-raw).
## Image Processing
* Only `jpg` and `png` images were considered
* Image width and height have been clamped to `(0, 4096]px`; larger images have been resized to meet the limit
* Alpha channels have been removed
* All images have been converted to `jpg` format
* All images have been converted to TrueColor `RGB`
* All images have been verified to load with `Pillow`
* Metadata from E621 is [available here](https://huggingface.co/datasets/hearmeneigh/e621-rising-v1-raw/tree/main/meta)
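A minimal Pillow sketch of the normalization steps above (an illustration of the listed operations, not the exact pipeline that was used):

```python
from PIL import Image

MAX_SIDE = 4096

def normalize(src: str, dst: str) -> None:
    img = Image.open(src)
    img.load()                    # verify the image decodes with Pillow
    if max(img.size) > MAX_SIDE:  # clamp width/height to (0, 4096]px
        img.thumbnail((MAX_SIDE, MAX_SIDE))
    img = img.convert("RGB")      # remove alpha, force TrueColor RGB
    img.save(dst, format="JPEG")  # convert everything to jpg
```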
## Tags
For a comprehensive list of tags and counts, [see here](https://huggingface.co/datasets/hearmeneigh/e621-rising-v1-curated/raw/main/meta/tag-counts.json).
### Changes From E621
* Symbols have been prefixed with `symbol:`, e.g. `symbol:<3`
* Aspect ratio has been prefixed with `aspect_ratio:`, e.g. `aspect_ratio:16_9`
* All categories except `general` have been prefixed with the category name, e.g. `artist:somename`. The categories are:
* `artist`
* `copyright`
* `character`
* `species`
* `invalid`
* `meta`
* `lore`
### Additional Tags
* Image rating
* `rating:explicit`
* `rating:questionable`
* `rating:safe` | hearmeneigh/e621-rising-v1-curated | [
"size_categories:100K<n<1M",
"not-for-all-audiences",
"region:us"
] | 2023-01-15T06:11:18+00:00 | {"size_categories": ["100K<n<1M"], "pretty_name": "E621 Rising: Curated Image Dataset v1", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 192529551170.037, "num_examples": 441623}], "download_size": 190109066617, "dataset_size": 192529551170.037}, "viewer": false, "tags": ["not-for-all-audiences"]} | 2023-10-09T17:56:31+00:00 | [] | [] | TAGS
#size_categories-100K<n<1M #not-for-all-audiences #region-us
|
<div style='background: #ffeec0; border: 1px solid #ffd86d; padding:1em; border-radius:3px;'>
<h3 style='margin:0'>Outdated!</h3>
<p style='margin:0'>This dataset has been superseded by:</p>
<p style='margin:0'><a style="font-size: 2em;" href='URL'>E621 Rising V3 Curated Image Dataset</a></p>
</div>
Warning: THIS dataset is NOT suitable for use by minors. The dataset contains X-rated/NSFW content.
# E621 Rising: Curated Image Dataset v1
441,623 images (~200GB) downloaded from 'URL' with tags.
This is a curated dataset, picked from the E621 Rising: Raw Image Dataset v1 available here.
## Image Processing
* Only 'jpg' and 'png' images were considered
* Image width and height have been clamped to '(0, 4096]px'; larger images have been resized to meet the limit
* Alpha channels have been removed
* All images have been converted to 'jpg' format
* All images have been converted to TrueColor 'RGB'
* All images have been verified to load with 'Pillow'
* Metadata from E621 is available here
## Tags
For a comprehensive list of tags and counts, see here.
### Changes From E621
* Symbols have been prefixed with 'symbol:', e.g. 'symbol:<3'
* Aspect ratio has been prefixed with 'aspect_ratio:', e.g. 'aspect_ratio:16_9'
* All categories except 'general' have been prefixed with the category name, e.g. 'artist:somename'. The categories are:
* 'artist'
* 'copyright'
* 'character'
* 'species'
* 'invalid'
* 'meta'
* 'lore'
### Additional Tags
* Image rating
* 'rating:explicit'
* 'rating:questionable'
* 'rating:safe' | [
"# E621 Rising: Curated Image Dataset v1\n\n441,623 images (~200GB) downloaded from 'URL' with tags.\n\nThis is a curated dataset, picked from the E621 Rising: Raw Image Dataset v1 available here.",
"## Image Processing\n* Only 'jpg' and 'png' images were considered\n* Image width and height have been clamped to '(0, 4096]px'; larger images have been resized to meet the limit\n* Alpha channels have been removed\n* All images have been converted to 'jpg' format\n* All images have been converted to TrueColor 'RGB'\n* All images have been verified to load with 'Pillow'\n* Metadata from E621 is available here",
"## Tags\nFor a comprehensive list of tags and counts, see here.",
"### Changes From E621\n* Symbols have been prefixed with 'symbol:', e.g. 'symbol:<3'\n* Aspect ratio has been prefixed with 'aspect_ratio:', e.g. 'aspect_ratio:16_9'\n* All categories except 'general' have been prefixed with the category name, e.g. 'artist:somename'. The categories are:\n * 'artist'\n * 'copyright'\n * 'character'\n * 'species'\n * 'invalid'\n * 'meta'\n * 'lore'",
"### Additional Tags\n* Image rating\n * 'rating:explicit'\n * 'rating:questionable'\n * 'rating:safe'"
] | [
"TAGS\n#size_categories-100K<n<1M #not-for-all-audiences #region-us \n",
"# E621 Rising: Curated Image Dataset v1\n\n441,623 images (~200GB) downloaded from 'URL' with tags.\n\nThis is a curated dataset, picked from the E621 Rising: Raw Image Dataset v1 available here.",
"## Image Processing\n* Only 'jpg' and 'png' images were considered\n* Image width and height have been clamped to '(0, 4096]px'; larger images have been resized to meet the limit\n* Alpha channels have been removed\n* All images have been converted to 'jpg' format\n* All images have been converted to TrueColor 'RGB'\n* All images have been verified to load with 'Pillow'\n* Metadata from E621 is available here",
"## Tags\nFor a comprehensive list of tags and counts, see here.",
"### Changes From E621\n* Symbols have been prefixed with 'symbol:', e.g. 'symbol:<3'\n* Aspect ratio has been prefixed with 'aspect_ratio:', e.g. 'aspect_ratio:16_9'\n* All categories except 'general' have been prefixed with the category name, e.g. 'artist:somename'. The categories are:\n * 'artist'\n * 'copyright'\n * 'character'\n * 'species'\n * 'invalid'\n * 'meta'\n * 'lore'",
"### Additional Tags\n* Image rating\n * 'rating:explicit'\n * 'rating:questionable'\n * 'rating:safe'"
] |
49d8d456e29bfc46b6886eeaffed3795b58b1adf | # Dataset Card for "copy_dataset_trimmed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | sanjin7/copy_dataset_trimmed | [
"region:us"
] | 2023-01-15T06:30:47+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "text_clean", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "only_emojis", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 23566873, "num_examples": 46055}, {"name": "test", "num_bytes": 3195980, "num_examples": 6021}, {"name": "val", "num_bytes": 4095174, "num_examples": 8128}], "download_size": 21524666, "dataset_size": 30858027}} | 2023-01-16T16:35:57+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "copy_dataset_trimmed"
More Information needed | [
"# Dataset Card for \"copy_dataset_trimmed\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"copy_dataset_trimmed\"\n\nMore Information needed"
] |
29ce7633e32a291c7a89009fba542691a585475c | # Dataset Card for "copy_dataset_primaries"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | sanjin7/copy_dataset_primaries | [
"region:us"
] | 2023-01-15T07:18:57+00:00 | {"dataset_info": {"features": [{"name": "value", "sequence": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 254708030, "num_examples": 586243}], "download_size": 21073974, "dataset_size": 254708030}} | 2023-01-16T16:27:54+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "copy_dataset_primaries"
More Information needed | [
"# Dataset Card for \"copy_dataset_primaries\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"copy_dataset_primaries\"\n\nMore Information needed"
] |
f4934776f0c4347b4375569a21e676190c8bfece | # AutoTrain Dataset for project: soft-tissue-tumor-species
## Dataset Description
This dataset has been automatically processed by AutoTrain for project bone-tumor-species.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<512x512 RGB PIL image>",
"target": 16
},
{
"image": "<512x512 RGB PIL image>",
"target": 29
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['Adipose Tissue', 'Alveolar Rhabdomyosarcoma', 'Alveolar Soft Part Sarcoma', 'Angioleiomyoma', 'Angiosarcoma', 'Clear Cell Sarcoma', 'Dedifferentiated Liposarcoma', 'Dense Connective Tissue', 'Dermatofibrosarcoma Protuberans', 'Desmoplastic Small Round Cell Tumor', 'Elastic Connective Tissue', 'Elastofibroma', 'Embryonal Rhabdomyosarcoma', 'Epithelioid Hemangioendothelioma', 'Epithelioid Sarcoma', 'Extraskeletal Myxoid Chondrosarcoma', 'Fibrocartilage', 'Fibroma (of Tendon Sheath)', 'Fibromatosis', 'Fibrosarcoma', 'Fibrous Histiocytoma', 'Glomus Tumor', 'Granular Cell Tumor', 'Hemangioma', 'Heterotopic Ossification (Myositis Ossificans)', 'Hibernoma', 'Hyaline Cartilage', 'Inflammatory Myofibroblastic Tumor', 'Kaposi Sarcoma', 'Leiomyosarcoma', 'Lipoblastoma', 'Lipoma', 'Loose Connective Tissue', 'Low Grade Fibromyxoid Sarcoma', 'Malignant Peripheral Nerve Sheath Tumor', 'Myopericytoma', 'Myxofibrosarcoma', 'Myxoid Liposarcoma', 'Neurofibroma', 'Nodular Fasciitis', 'Perineurioma', 'Proliferative Fasciitis', 'Rhabdomyoma', 'Schwannoma', 'Sclerosing Epithelioid Fibrosarcoma', 'Skeletal Muscle', 'Solitary Fibrous Tumor', 'Spindle Cell Lipoma', 'Synovial Sarcoma', 'Tenosynovial Giant Cell Tumor', 'Tumoral Calcinosis', 'Undifferentiated Pleiomorphic Sarcoma'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 6268 |
| valid | 1570 |
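
Assuming the dataset loads directly by its repo id, a short sketch of reading the splits above and mapping integer targets back to class names:

```python
from datasets import load_dataset

ds = load_dataset("itslogannye/softTissueTumorousLesions")

sample = ds["train"][0]
names = ds["train"].features["target"].names  # ClassLabel id -> string
print(sample["image"].size, "->", names[sample["target"]])
```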
| itslogannye/softTissueTumorousLesions | [
"task_categories:image-classification",
"region:us"
] | 2023-01-15T08:31:08+00:00 | {"task_categories": ["image-classification"]} | 2023-01-15T09:05:06+00:00 | [] | [] | TAGS
#task_categories-image-classification #region-us
| AutoTrain Dataset for project: soft-tissue-tumor-species
========================================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project bone-tumor-species.
### Languages
The BCP-47 code for the dataset's language is unk.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follow:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#task_categories-image-classification #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
31f4a29fd16f130c75be983ed9b61aef629ace44 | # Dataset Card for "wikisource-small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Zombely/wikisource-small | [
"region:us"
] | 2023-01-15T09:28:13+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "ground_truth", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 24302805827.009, "num_examples": 15549}], "download_size": 19231095073, "dataset_size": 24302805827.009}} | 2023-01-15T18:48:01+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "wikisource-small"
More Information needed | [
"# Dataset Card for \"wikisource-small\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"wikisource-small\"\n\nMore Information needed"
] |
b286c1ab71508a8a54cc6e984fbae20ee5e5784c | # Dataset Card for HateBR - Offensive Language and Hate Speech Dataset in Brazilian Portuguese
## Dataset Description
- **Homepage:** http://143.107.183.175:14581/
- **Repository:** https://github.com/franciellevargas/HateBR
- **Paper:** https://aclanthology.org/2022.lrec-1.777/
- **Leaderboard:**
- **Point of Contact:** https://franciellevargas.github.io/
### Dataset Summary
HateBR is the first large-scale expert-annotated corpus of Brazilian Instagram comments for hate speech and offensive language detection on the web and social media. The HateBR corpus was collected from Brazilian Instagram comments of politicians and manually annotated by specialists. It is composed of 7,000 documents annotated according to three different layers: a binary classification (offensive versus non-offensive comments), offensiveness level (highly, moderately, and slightly offensive messages), and nine hate speech groups (xenophobia, racism, homophobia, sexism, religious intolerance, partyism, apology for the dictatorship, antisemitism, and fatphobia). Each comment was annotated by three different annotators and achieved high inter-annotator agreement. Furthermore, baseline experiments were implemented, reaching an F1-score of 85% and outperforming the current literature models for the Portuguese language. Accordingly, we hope that the proposed expert-annotated corpus may foster research on hate speech and offensive language detection in the Natural Language Processing area.
**Relevant Links:**
* [**Demo: Brasil Sem Ódio**](http://143.107.183.175:14581/)
* [**MOL - Multilingual Offensive Lexicon Annotated with Contextual Information**](https://github.com/franciellevargas/MOL)
### Supported Tasks and Leaderboards
Hate Speech Detection
### Languages
Portuguese
## Dataset Structure
### Data Instances
```
{'instagram_comments': 'Hipocrita!!',
'offensive_language': True,
'offensiveness_levels': 2,
'antisemitism': False,
'apology_for_the_dictatorship': False,
'fatphobia': False,
'homophobia': False,
'partyism': False,
'racism': False,
'religious_intolerance': False,
'sexism': False,
'xenophobia': False,
'offensive_&_non-hate_speech': True,
'non-offensive': False,
'specialist_1_hate_speech': False,
'specialist_2_hate_speech': False,
'specialist_3_hate_speech': False
}
```
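
Records like the one above can be pulled with the `datasets` library; a minimal sketch using the field names documented below:

```python
from datasets import load_dataset

hatebr = load_dataset("ruanchaves/hatebr")
example = hatebr["train"][0]

print(example["instagram_comments"], example["offensive_language"])

# Collect the nine hate speech flags for one comment
hate_groups = ["antisemitism", "apology_for_the_dictatorship", "fatphobia",
               "homophobia", "partyism", "racism", "religious_intolerance",
               "sexism", "xenophobia"]
print({group: example[group] for group in hate_groups})
```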
### Data Fields
* **instagram_comments**: Instagram comments.
* **offensive_language**: A classification of comments as either offensive (True) or non-offensive (False).
* **offensiveness_levels**: A classification of comments based on their level of offensiveness, including highly offensive (3), moderately offensive (2), slightly offensive (1) and non-offensive (0).
* **antisemitism**: A classification of whether or not the comment contains antisemitic language.
* **apology_for_the_dictatorship**: A classification of whether or not the comment praises the military dictatorship period in Brazil.
* **fatphobia**: A classification of whether or not the comment contains language that promotes fatphobia.
* **homophobia**: A classification of whether or not the comment contains language that promotes homophobia.
* **partyism**: A classification of whether or not the comment contains language that promotes partyism.
* **racism**: A classification of whether or not the comment contains racist language.
* **religious_intolerance**: A classification of whether or not the comment contains language that promotes religious intolerance.
* **sexism**: A classification of whether or not the comment contains sexist language.
* **xenophobia**: A classification of whether or not the comment contains language that promotes xenophobia.
* **offensive_&_non-hate_speech**: A classification of whether or not the comment is offensive but does not contain hate speech.
* **specialist_1_hate_speech**: A classification of whether or not the comment was annotated by the first specialist as hate speech.
* **specialist_2_hate_speech**: A classification of whether or not the comment was annotated by the second specialist as hate speech.
* **specialist_3_hate_speech**: A classification of whether or not the comment was annotated by the third specialist as hate speech.
### Data Splits
The original authors of the dataset did not propose a standard data split. To address this, we use the [multi-label data stratification technique](http://scikit.ml/stratification.html) implemented in the scikit-multilearn library to propose a train-validation-test split. This method considers all classes for hate speech in the data and attempts to balance the representation of each class in the split.
| name |train|validation|test|
|---------|----:|----:|----:|
|hatebr|4480|1120|1400|
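
A sketch of that stratification with scikit-multilearn's iterative splitter: a 20% test split followed by a 20% validation split of the remainder reproduces the 4480/1120/1400 sizes above. The label matrix here is a random placeholder, not the authors' exact procedure:

```python
import numpy as np
from skmultilearn.model_selection import iterative_train_test_split

# X: one row per comment (indices into the corpus);
# Y: binary indicator matrix, one column per hate speech class.
X = np.arange(7000).reshape(-1, 1)
Y = np.random.randint(0, 2, size=(7000, 9))  # placeholder labels

X_rest, Y_rest, X_test, Y_test = iterative_train_test_split(X, Y, test_size=0.2)
X_train, Y_train, X_val, Y_val = iterative_train_test_split(X_rest, Y_rest, test_size=0.2)

print(len(X_train), len(X_val), len(X_test))  # ~4480 1120 1400
```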
## Considerations for Using the Data
### Discussion of Biases
Please refer to [the HateBR paper](https://aclanthology.org/2022.lrec-1.777/) for a discussion of biases.
### Licensing Information
The HateBR dataset, including all its components, is provided strictly for academic and research purposes. The use of the dataset for any commercial or non-academic purpose is expressly prohibited without the prior written consent of [SINCH](https://www.sinch.com/).
### Citation Information
```
@inproceedings{vargas2022hatebr,
title={HateBR: A Large Expert Annotated Corpus of Brazilian Instagram Comments for Offensive Language and Hate Speech Detection},
author={Vargas, Francielle and Carvalho, Isabelle and de G{\'o}es, Fabiana Rodrigues and Pardo, Thiago and Benevenuto, Fabr{\'\i}cio},
booktitle={Proceedings of the Thirteenth Language Resources and Evaluation Conference},
pages={7174--7183},
year={2022}
}
```
### Contributions
Thanks to [@ruanchaves](https://github.com/ruanchaves) for adding this dataset. | ruanchaves/hatebr | [
"task_categories:text-classification",
"task_ids:hate-speech-detection",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:pt",
"instagram",
"doi:10.57967/hf/0274",
"region:us"
] | 2023-01-15T11:11:33+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["pt"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["hate-speech-detection"], "pretty_name": "HateBR - Offensive Language and Hate Speech Dataset in Brazilian Portuguese", "tags": ["instagram"]} | 2023-04-13T12:39:40+00:00 | [] | [
"pt"
] | TAGS
#task_categories-text-classification #task_ids-hate-speech-detection #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Portuguese #instagram #doi-10.57967/hf/0274 #region-us
| Dataset Card for HateBR - Offensive Language and Hate Speech Dataset in Brazilian Portuguese
============================================================================================
Dataset Description
-------------------
* Homepage: http://143.107.183.175:14581/
* Repository: URL
* Paper: URL
* Leaderboard:
* Point of Contact: URL
### Dataset Summary
HateBR is the first large-scale expert-annotated corpus of Brazilian Instagram comments for hate speech and offensive language detection on the web and social media. The HateBR corpus was collected from Brazilian Instagram comments of politicians and manually annotated by specialists. It is composed of 7,000 documents annotated according to three different layers: a binary classification (offensive versus non-offensive comments), offensiveness level (highly, moderately, and slightly offensive messages), and nine hate speech groups (xenophobia, racism, homophobia, sexism, religious intolerance, partyism, apology for the dictatorship, antisemitism, and fatphobia). Each comment was annotated by three different annotators and achieved high inter-annotator agreement. Furthermore, baseline experiments were implemented, reaching an F1-score of 85% and outperforming the current literature models for the Portuguese language. Accordingly, we hope that the proposed expert-annotated corpus may foster research on hate speech and offensive language detection in the Natural Language Processing area.
Relevant Links:
* Demo: Brasil Sem Ódio
* MOL - Multilingual Offensive Lexicon Annotated with Contextual Information
### Supported Tasks and Leaderboards
Hate Speech Detection
### Languages
Portuguese
Dataset Structure
-----------------
### Data Instances
### Data Fields
* instagram\_comments: Instagram comments.
* offensive\_language: A classification of comments as either offensive (True) or non-offensive (False).
* offensiveness\_levels: A classification of comments based on their level of offensiveness, including highly offensive (3), moderately offensive (2), slightly offensive (1) and non-offensive (0).
* antisemitism: A classification of whether or not the comment contains antisemitic language.
* apology\_for\_the\_dictatorship: A classification of whether or not the comment praises the military dictatorship period in Brazil.
* fatphobia: A classification of whether or not the comment contains language that promotes fatphobia.
* homophobia: A classification of whether or not the comment contains language that promotes homophobia.
* partyism: A classification of whether or not the comment contains language that promotes partyism.
* racism: A classification of whether or not the comment contains racist language.
* religious\_intolerance: A classification of whether or not the comment contains language that promotes religious intolerance.
* sexism: A classification of whether or not the comment contains sexist language.
* xenophobia: A classification of whether or not the comment contains language that promotes xenophobia.
* offensive\_&\_non-hate\_speech: A classification of whether or not the comment is offensive but does not contain hate speech.
* specialist\_1\_hate\_speech: A classification of whether or not the comment was annotated by the first specialist as hate speech.
* specialist\_2\_hate\_speech: A classification of whether or not the comment was annotated by the second specialist as hate speech.
* specialist\_3\_hate\_speech: A classification of whether or not the comment was annotated by the third specialist as hate speech.
### Data Splits
The original authors of the dataset did not propose a standard data split. To address this, we use the multi-label data stratification technique implemented in the scikit-multilearn library to propose a train-validation-test split. This method considers all classes for hate speech in the data and attempts to balance the representation of each class in the split.
Considerations for Using the Data
---------------------------------
### Discussion of Biases
Please refer to the HateBR paper for a discussion of biases.
### Licensing Information
The HateBR dataset, including all its components, is provided strictly for academic and research purposes. The use of the dataset for any commercial or non-academic purpose is expressly prohibited without the prior written consent of SINCH.
### Contributions
Thanks to @ruanchaves for adding this dataset.
| [
"### Dataset Summary\n\n\nHateBR is the first large-scale expert annotated corpus of Brazilian Instagram comments for hate speech and offensive language detection on the web and social media. The HateBR corpus was collected from Brazilian Instagram comments of politicians and manually annotated by specialists. It is composed of 7,000 documents annotated according to three different layers: a binary classification (offensive versus non-offensive comments), offensiveness-level (highly, moderately, and slightly offensive messages), and nine hate speech groups (xenophobia, racism, homophobia, sexism, religious intolerance, partyism, apology for the dictatorship, antisemitism, and fatphobia). Each comment was annotated by three different annotators and achieved high inter-annotator agreement. Furthermore, baseline experiments were implemented reaching 85% of F1-score outperforming the current literature models for the Portuguese language. Accordingly, we hope that the proposed expertly annotated corpus may foster research on hate speech and offensive language detection in the Natural Language Processing area.\n\n\nRelevant Links:\n\n\n* Demo: Brasil Sem Ódio\n* MOL - Multilingual Offensive Lexicon Annotated with Contextual Information",
"### Supported Tasks and Leaderboards\n\n\nHate Speech Detection",
"### Languages\n\n\nPortuguese\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"### Data Fields\n\n\n* instagram\\_comments: Instagram comments.\n* offensive\\_language: A classification of comments as either offensive (True) or non-offensive (False).\n* offensiveness\\_levels: A classification of comments based on their level of offensiveness, including highly offensive (3), moderately offensive (2), slightly offensive (1) and non-offensive (0).\n* antisemitism: A classification of whether or not the comment contains antisemitic language.\n* apology\\_for\\_the\\_dictatorship: A classification of whether or not the comment praises the military dictatorship period in Brazil.\n* fatphobia: A classification of whether or not the comment contains language that promotes fatphobia.\n* homophobia: A classification of whether or not the comment contains language that promotes homophobia.\n* partyism: A classification of whether or not the comment contains language that promotes partyism.\n* racism: A classification of whether or not the comment contains racist language.\n* religious\\_intolerance: A classification of whether or not the comment contains language that promotes religious intolerance.\n* sexism: A classification of whether or not the comment contains sexist language.\n* xenophobia: A classification of whether or not the comment contains language that promotes xenophobia.\n* offensive\\_&\\_no-hate\\_speech: A classification of whether or not the comment is offensive but does not contain hate speech.\n* specialist\\_1\\_hate\\_speech: A classification of whether or not the comment was annotated by the first specialist as hate speech.\n* specialist\\_2\\_hate\\_speech: A classification of whether or not the comment was annotated by the second specialist as hate speech.\n* specialist\\_3\\_hate\\_speech: A classification of whether or not the comment was annotated by the third specialist as hate speech.",
"### Data Splits\n\n\nThe original authors of the dataset did not propose a standard data split. To address this, we use the multi-label data stratification technique implemented at the scikit-multilearn library to propose a train-validation-test split. This method considers all classes for hate speech in the data and attempts to balance the representation of each class in the split.\n\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Discussion of Biases\n\n\nPlease refer to the HateBR paper for a discussion of biases.",
"### Licensing Information\n\n\nThe HateBR dataset, including all its components, is provided strictly for academic and research purposes. The use of the dataset for any commercial or non-academic purpose is expressly prohibited without the prior written consent of SINCH.",
"### Contributions\n\n\nThanks to @ruanchaves for adding this dataset."
] | [
"TAGS\n#task_categories-text-classification #task_ids-hate-speech-detection #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Portuguese #instagram #doi-10.57967/hf/0274 #region-us \n",
"### Dataset Summary\n\n\nHateBR is the first large-scale expert annotated corpus of Brazilian Instagram comments for hate speech and offensive language detection on the web and social media. The HateBR corpus was collected from Brazilian Instagram comments of politicians and manually annotated by specialists. It is composed of 7,000 documents annotated according to three different layers: a binary classification (offensive versus non-offensive comments), offensiveness-level (highly, moderately, and slightly offensive messages), and nine hate speech groups (xenophobia, racism, homophobia, sexism, religious intolerance, partyism, apology for the dictatorship, antisemitism, and fatphobia). Each comment was annotated by three different annotators and achieved high inter-annotator agreement. Furthermore, baseline experiments were implemented reaching 85% of F1-score outperforming the current literature models for the Portuguese language. Accordingly, we hope that the proposed expertly annotated corpus may foster research on hate speech and offensive language detection in the Natural Language Processing area.\n\n\nRelevant Links:\n\n\n* Demo: Brasil Sem Ódio\n* MOL - Multilingual Offensive Lexicon Annotated with Contextual Information",
"### Supported Tasks and Leaderboards\n\n\nHate Speech Detection",
"### Languages\n\n\nPortuguese\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"### Data Fields\n\n\n* instagram\\_comments: Instagram comments.\n* offensive\\_language: A classification of comments as either offensive (True) or non-offensive (False).\n* offensiveness\\_levels: A classification of comments based on their level of offensiveness, including highly offensive (3), moderately offensive (2), slightly offensive (1) and non-offensive (0).\n* antisemitism: A classification of whether or not the comment contains antisemitic language.\n* apology\\_for\\_the\\_dictatorship: A classification of whether or not the comment praises the military dictatorship period in Brazil.\n* fatphobia: A classification of whether or not the comment contains language that promotes fatphobia.\n* homophobia: A classification of whether or not the comment contains language that promotes homophobia.\n* partyism: A classification of whether or not the comment contains language that promotes partyism.\n* racism: A classification of whether or not the comment contains racist language.\n* religious\\_intolerance: A classification of whether or not the comment contains language that promotes religious intolerance.\n* sexism: A classification of whether or not the comment contains sexist language.\n* xenophobia: A classification of whether or not the comment contains language that promotes xenophobia.\n* offensive\\_&\\_no-hate\\_speech: A classification of whether or not the comment is offensive but does not contain hate speech.\n* specialist\\_1\\_hate\\_speech: A classification of whether or not the comment was annotated by the first specialist as hate speech.\n* specialist\\_2\\_hate\\_speech: A classification of whether or not the comment was annotated by the second specialist as hate speech.\n* specialist\\_3\\_hate\\_speech: A classification of whether or not the comment was annotated by the third specialist as hate speech.",
"### Data Splits\n\n\nThe original authors of the dataset did not propose a standard data split. To address this, we use the multi-label data stratification technique implemented at the scikit-multilearn library to propose a train-validation-test split. This method considers all classes for hate speech in the data and attempts to balance the representation of each class in the split.\n\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Discussion of Biases\n\n\nPlease refer to the HateBR paper for a discussion of biases.",
"### Licensing Information\n\n\nThe HateBR dataset, including all its components, is provided strictly for academic and research purposes. The use of the dataset for any commercial or non-academic purpose is expressly prohibited without the prior written consent of SINCH.",
"### Contributions\n\n\nThanks to @ruanchaves for adding this dataset."
] |
f2e34fe16864d3c41ab7fb375e997f74c3f7aad2 |
## CysPresso
A machine learning approach to predict the recombinant expressibility of cysteine-dense peptides in mammalian cells based on their primary sequence, compatible with multiple types of protein representations generated by deep learning solutions.
## Associated paper
CysPresso: Prediction of cysteine-dense peptide expression in mammalian cells using deep learning protein representations. BioRxiv link: https://www.biorxiv.org/content/10.1101/2022.09.17.508377v1
## Code
The CysPresso repo can be found at https://github.com/Zebreu/cyspresso/
---
license: mit
---
| TonyKYLim/CysPresso | [
"doi:10.57967/hf/0628",
"region:us"
] | 2023-01-15T13:52:23+00:00 | {} | 2023-03-04T22:59:20+00:00 | [] | [] | TAGS
#doi-10.57967/hf/0628 #region-us
|
## CysPresso
A machine learning approach to predict the recombinant expressibility of cysteine-dense peptides in mammalian cells based on their primary sequence, compatible with multiple types of protein representations generated by deep learning solutions.
## Associated paper
CysPresso: Prediction of cysteine-dense peptide expression in mammalian cells using deep learning protein representations. BioRxiv link: URL
## Code
The CysPresso repo can be found at URL
---
license: mit
---
| [
"## CysPresso\nA machine learning approach to predict the recombinant expressibility of cysteine-dense peptides in mammalian cells based on their primary sequence, compatible with multiple types of protein representations generated by deep learning solutions.",
"## Associated paper\n\nCysPresso: Prediction of cysteine-dense peptide expression in mammalian cells using deep learning protein representations. BioRxiv link: URL",
"## Code\n\nThe CysPresso repo can be found at URL\n\n---\nlicense: mit\n---"
] | [
"TAGS\n#doi-10.57967/hf/0628 #region-us \n",
"## CysPresso\nA machine learning approach to predict the recombinant expressibility of cysteine-dense peptides in mammalian cells based on their primary sequence, compatible with multiple types of protein representations generated by deep learning solutions.",
"## Associated paper\n\nCysPresso: Prediction of cysteine-dense peptide expression in mammalian cells using deep learning protein representations. BioRxiv link: URL",
"## Code\n\nThe CysPresso repo can be found at URL\n\n---\nlicense: mit\n---"
] |
1822b80aa21684f24907e6818cbc7f665ef2b9d1 | # Dataset Card for "beautiful_interesting_spectacular_photo_model_30000_with_generated_captions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yuvalkirstain/beautiful_interesting_spectacular_photo_model_30000_with_generated_captions | [
"region:us"
] | 2023-01-15T14:08:05+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "width", "dtype": "int64"}, {"name": "height", "dtype": "int64"}, {"name": "pclean", "dtype": "float64"}, {"name": "generated_caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 120069364.0, "num_examples": 228}], "download_size": 120060100, "dataset_size": 120069364.0}} | 2023-01-17T18:00:09+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "beautiful_interesting_spectacular_photo_model_30000_with_generated_captions"
More Information needed | [
"# Dataset Card for \"beautiful_interesting_spectacular_photo_model_30000_with_generated_captions\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"beautiful_interesting_spectacular_photo_model_30000_with_generated_captions\"\n\nMore Information needed"
] |
ca73c2889a0c6cb3b7493d56da6e73f6a2229d77 | # Dataset Card for "portraits3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | conorcl/portraits3 | [
"region:us"
] | 2023-01-15T15:46:20+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 35873206.596, "num_examples": 1343}], "download_size": 35191726, "dataset_size": 35873206.596}} | 2023-01-16T22:45:08+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "portraits3"
More Information needed | [
"# Dataset Card for \"portraits3\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"portraits3\"\n\nMore Information needed"
] |
501f1909b6c1ff30926d991b94539f4c58165cc7 |
# Dataset Card for OpenSubtitles
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://opus.nlpl.eu/OpenSubtitles.php
- **Repository:** None
- **Paper:** http://www.lrec-conf.org/proceedings/lrec2016/pdf/62_Paper.pdf
- **Leaderboard:** [More Information Needed]
- **Point of Contact:** [More Information Needed]
### Dataset Summary
This dataset is a subset of the en-nl open_subtitles dataset.
It contains only subtitles of TV shows that have a rating of at least 8.0 with at least 1000 votes.
The subtitles are also ordered and appended into buffers of several lengths, with a maximum of 370 tokens
as tokenized by the 'yhavinga/ul2-base-dutch' tokenizer.
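
A rough sketch of that buffering step: greedy packing of consecutive subtitle lines up to the 370-token cap with the tokenizer named above. How the English and Dutch sides are kept aligned is not shown, and the actual preprocessing may differ:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("yhavinga/ul2-base-dutch")
MAX_TOKENS = 370

def pack_buffers(lines):
    """Greedily append ordered subtitle lines into buffers of <= 370 tokens."""
    buffers, current, length = [], [], 0
    for line in lines:
        n = len(tokenizer(line, add_special_tokens=False)["input_ids"])
        if current and length + n > MAX_TOKENS:
            buffers.append(" ".join(current))
            current, length = [], 0
        current.append(line)
        length += n
    if current:
        buffers.append(" ".join(current))
    return buffers
```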
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
The languages in the dataset are:
- en
- nl
## Dataset Structure
### Data Instances
Here are some examples of questions and facts:
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding the open_subtitles dataset.
| yhavinga/open_subtitles_en_nl | [
"task_categories:translation",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"size_categories:1M<n<10M",
"size_categories:n<1K",
"source_datasets:original",
"language:en",
"language:nl",
"license:unknown",
"region:us"
] | 2023-01-15T16:48:34+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en", "nl"], "license": ["unknown"], "multilinguality": ["multilingual"], "size_categories": ["10K<n<100K", "1M<n<10M", "n<1K"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "pretty_name": "OpenSubtitles En Nl"} | 2023-01-15T17:02:32+00:00 | [] | [
"en",
"nl"
] | TAGS
#task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-multilingual #size_categories-10K<n<100K #size_categories-1M<n<10M #size_categories-n<1K #source_datasets-original #language-English #language-Dutch #license-unknown #region-us
|
# Dataset Card for OpenSubtitles
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Repository: None
- Paper: URL
- Leaderboard:
- Point of Contact:
### Dataset Summary
This dataset is a subset of the en-nl open_subtitles dataset.
It contains only subtitles of TV shows that have a rating of at least 8.0 with at least 1000 votes.
The subtitles are also ordered and appended into buffers of several lengths, with a maximum of 370 tokens
as tokenized by the 'yhavinga/ul2-base-dutch' tokenizer.
### Supported Tasks and Leaderboards
### Languages
The languages in the dataset are:
- en
- nl
## Dataset Structure
### Data Instances
Here are some examples of questions and facts:
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @abhishekkrthakur for adding the open_subtitles dataset.
| [
"# Dataset Card for OpenSubtitles",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: None\n- Paper: URL\n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset is a subset from the en-nl open_subtitles dataset.\nIt contains only subtitles of tv shows that have a rating of at least 8.0 with at least 1000 votes.\nThe subtitles are also ordered and appended into buffers several lengths, with a maximum of 370 tokens\nas tokenized by the 'yhavinga/ul2-base-dutch' tokenizer.",
"### Supported Tasks and Leaderboards",
"### Languages\n\nThe languages in the dataset are:\n- en\n- nl",
"## Dataset Structure",
"### Data Instances\n\nHere are some examples of questions and facts:",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @abhishekkrthakur for adding the open_subtitles dataset."
] | [
"TAGS\n#task_categories-translation #annotations_creators-found #language_creators-found #multilinguality-multilingual #size_categories-10K<n<100K #size_categories-1M<n<10M #size_categories-n<1K #source_datasets-original #language-English #language-Dutch #license-unknown #region-us \n",
"# Dataset Card for OpenSubtitles",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: None\n- Paper: URL\n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset is a subset from the en-nl open_subtitles dataset.\nIt contains only subtitles of tv shows that have a rating of at least 8.0 with at least 1000 votes.\nThe subtitles are also ordered and appended into buffers several lengths, with a maximum of 370 tokens\nas tokenized by the 'yhavinga/ul2-base-dutch' tokenizer.",
"### Supported Tasks and Leaderboards",
"### Languages\n\nThe languages in the dataset are:\n- en\n- nl",
"## Dataset Structure",
"### Data Instances\n\nHere are some examples of questions and facts:",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @abhishekkrthakur for adding the open_subtitles dataset."
] |
248f2ec6df3f7de5244c3719ce74f26159d6dddd | # Dataset Card for "my_section_5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Rami/my_section_5 | [
"region:us"
] | 2023-01-15T17:31:12+00:00 | {"dataset_info": {"features": [{"name": "body", "dtype": "string"}, {"name": "question_id", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "meta_data", "struct": [{"name": "AcceptedAnswerId", "dtype": "string"}, {"name": "CommentCount", "dtype": "string"}, {"name": "ContentLicense", "dtype": "string"}, {"name": "CreationDate", "dtype": "string"}, {"name": "Id", "dtype": "string"}, {"name": "Score", "dtype": "string"}, {"name": "Tags", "sequence": "string"}, {"name": "Title", "dtype": "string"}]}, {"name": "answer", "struct": [{"name": "body", "dtype": "string"}, {"name": "comments", "list": [{"name": "ContentLicense", "dtype": "string"}, {"name": "CreationDate", "dtype": "string"}, {"name": "Id", "dtype": "string"}, {"name": "Score", "dtype": "string"}, {"name": "body", "dtype": "string"}]}, {"name": "meta_data", "struct": [{"name": "CommentCount", "dtype": "string"}, {"name": "ContentLicense", "dtype": "string"}, {"name": "CreationDate", "dtype": "string"}, {"name": "Id", "dtype": "string"}, {"name": "ParentId", "dtype": "string"}, {"name": "Score", "dtype": "string"}]}]}], "splits": [{"name": "train", "num_bytes": 557588, "num_examples": 71}], "download_size": 236408, "dataset_size": 557588}} | 2023-01-21T18:07:36+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "my_section_5"
More Information needed | [
"# Dataset Card for \"my_section_5\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"my_section_5\"\n\nMore Information needed"
] |
45e4439f9e52be06dd302c85636ecfc71e53172b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Natural Language Inference
* Model: ilos-vigil/bigbird-small-indonesian-nli
* Dataset: indonli
* Config: indonli
* Split: test_expert
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ilos-vigil](https://huggingface.co/ilos-vigil) for evaluating this model. | autoevaluate/autoeval-eval-indonli-indonli-42cf53-2902084628 | [
"autotrain",
"evaluation",
"region:us"
] | 2023-01-15T18:37:41+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["indonli"], "eval_info": {"task": "natural_language_inference", "model": "ilos-vigil/bigbird-small-indonesian-nli", "metrics": [], "dataset_name": "indonli", "dataset_config": "indonli", "dataset_split": "test_expert", "col_mapping": {"text1": "premise", "text2": "hypothesis", "target": "label"}}} | 2023-01-15T18:38:38+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Natural Language Inference
* Model: ilos-vigil/bigbird-small-indonesian-nli
* Dataset: indonli
* Config: indonli
* Split: test_expert
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @ilos-vigil for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: ilos-vigil/bigbird-small-indonesian-nli\n* Dataset: indonli\n* Config: indonli\n* Split: test_expert\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @ilos-vigil for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: ilos-vigil/bigbird-small-indonesian-nli\n* Dataset: indonli\n* Config: indonli\n* Split: test_expert\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @ilos-vigil for evaluating this model."
] |
2f3f894574938ff122f1f8d6be289897c337c37c |
<div align="center">
<img width="640" alt="keremberke/pothole-segmentation" src="https://huggingface.co/datasets/keremberke/pothole-segmentation/resolve/main/thumbnail.jpg">
</div>
### Dataset Labels
```
['pothole']
```
### Number of Images
```json
{'test': 5, 'train': 80, 'valid': 5}
```
### How to Use
- Install [datasets](https://pypi.org/project/datasets/):
```bash
pip install datasets
```
- Load the dataset:
```python
from datasets import load_dataset
ds = load_dataset("keremberke/pothole-segmentation", name="full")
example = ds['train'][0]
```
### Roboflow Dataset Page
[https://universe.roboflow.com/imacs-pothole-detection-wo8mu/pothole-detection-irkz9/dataset/4](https://universe.roboflow.com/imacs-pothole-detection-wo8mu/pothole-detection-irkz9/dataset/4?ref=roboflow2huggingface)
### Citation
```
@misc{ pothole-detection-irkz9_dataset,
title = { Pothole Detection Dataset },
type = { Open Source Dataset },
author = { IMACS Pothole Detection },
howpublished = { \\url{ https://universe.roboflow.com/imacs-pothole-detection-wo8mu/pothole-detection-irkz9 } },
url = { https://universe.roboflow.com/imacs-pothole-detection-wo8mu/pothole-detection-irkz9 },
journal = { Roboflow Universe },
publisher = { Roboflow },
year = { 2023 },
month = { jan },
note = { visited on 2023-01-15 },
}
```
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via roboflow.com on January 15, 2023 at 6:38 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 90 images.
Pothole are annotated in COCO format.
The following pre-processing was applied to each image:
No image augmentation techniques were applied.
| keremberke/pothole-segmentation | [
"task_categories:image-segmentation",
"roboflow",
"roboflow2huggingface",
"Construction",
"Self Driving",
"Transportation",
"Damage Risk",
"region:us"
] | 2023-01-15T18:38:37+00:00 | {"task_categories": ["image-segmentation"], "tags": ["roboflow", "roboflow2huggingface", "Construction", "Self Driving", "Transportation", "Damage Risk"]} | 2023-01-15T18:38:49+00:00 | [] | [] | TAGS
#task_categories-image-segmentation #roboflow #roboflow2huggingface #Construction #Self Driving #Transportation #Damage Risk #region-us
|
<div align="center">
<img width="640" alt="keremberke/pothole-segmentation" src="URL
</div>
### Dataset Labels
### Number of Images
### How to Use
- Install datasets:
- Load the dataset:
### Roboflow Dataset Page
URL
### License
CC BY 4.0
### Dataset Summary
This dataset was exported via URL on January 15, 2023 at 6:38 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit URL
To find over 100k other datasets and pre-trained models, visit URL
The dataset includes 90 images.
Pothole are annotated in COCO format.
The following pre-processing was applied to each image:
No image augmentation techniques were applied.
| [
"### Dataset Labels",
"### Number of Images",
"### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:",
"### Roboflow Dataset Page\nURL",
"### License\nCC BY 4.0",
"### Dataset Summary\nThis dataset was exported via URL on January 15, 2023 at 6:38 PM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand and search unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nFor state of the art Computer Vision training notebooks you can use with this dataset,\nvisit URL\n\nTo find over 100k other datasets and pre-trained models, visit URL\n\nThe dataset includes 90 images.\nPothole are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n\nNo image augmentation techniques were applied."
] | [
"TAGS\n#task_categories-image-segmentation #roboflow #roboflow2huggingface #Construction #Self Driving #Transportation #Damage Risk #region-us \n",
"### Dataset Labels",
"### Number of Images",
"### How to Use\n\n- Install datasets:\n\n\n\n- Load the dataset:",
"### Roboflow Dataset Page\nURL",
"### License\nCC BY 4.0",
"### Dataset Summary\nThis dataset was exported via URL on January 15, 2023 at 6:38 PM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand and search unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nFor state of the art Computer Vision training notebooks you can use with this dataset,\nvisit URL\n\nTo find over 100k other datasets and pre-trained models, visit URL\n\nThe dataset includes 90 images.\nPothole are annotated in COCO format.\n\nThe following pre-processing was applied to each image:\n\nNo image augmentation techniques were applied."
] |