sha | text | id | tags | created_at | metadata | last_modified | arxiv | languages | tags_str | text_str | text_lists | processed_texts
---|---|---|---|---|---|---|---|---|---|---|---|---
be0e675d260b50a15e89f537fb2bc221e8a2fb06 | # Dataset Card for "AI-generated-inpaintings-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | VaggP/AI-generated-inpaintings-dataset | [
"region:us"
] | 2024-01-11T18:08:01+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "inpainting", "1": "original"}}}}], "splits": [{"name": "train", "num_bytes": 6260005434.8, "num_examples": 7120}, {"name": "test", "num_bytes": 2607086372.808, "num_examples": 3006}], "download_size": 5811773091, "dataset_size": 8867091807.608}} | 2024-01-12T13:15:49+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "AI-generated-inpaintings-dataset"
More Information needed | [
"# Dataset Card for \"AI-generated-inpaintings-dataset\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"AI-generated-inpaintings-dataset\"\n\nMore Information needed"
] |
cf514b14d99be19ac52cb9fe79bf68483faa5290 |
# Dataset of Kristen Wright (Arknights)
This is the dataset of Kristen Wright (Arknights), containing 76 images and their tags.
The core tags of this character are `long_hair, blonde_hair, blue_eyes, hairband, black_hairband, animal_ears, dog_ears, very_long_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan, ...); the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 76 | 129.88 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kristen_arknights/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 76 | 61.23 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kristen_arknights/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 176 | 131.30 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kristen_arknights/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 76 | 107.67 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kristen_arknights/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 176 | 205.95 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kristen_arknights/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/kristen_arknights',
    repo_type='dataset',
    filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
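For the IMG+TXT packages listed in the table above (e.g. `dataset-800.zip`), a minimal sketch is shown below; it assumes each archive stores image files alongside same-named `.txt` files holding the comma-separated tags, which is the conventional layout for these packages rather than something stated in this card.
```python
import os
import zipfile
from huggingface_hub import hf_hub_download

# download one of the IMG+TXT archives listed in the package table
zip_file = hf_hub_download(
    repo_id='CyberHarem/kristen_arknights',
    repo_type='dataset',
    filename='dataset-800.zip',
)

dataset_dir = 'dataset_800'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# pair every image with its same-named .txt tag file (assumed layout)
for name in sorted(os.listdir(dataset_dir)):
    stem, ext = os.path.splitext(name)
    if ext.lower() not in ('.png', '.jpg', '.jpeg', '.webp'):
        continue
    tag_file = os.path.join(dataset_dir, stem + '.txt')
    if os.path.exists(tag_file):
        with open(tag_file, encoding='utf-8') as f:
            tags = f.read().strip()
        print(name, '->', tags)
```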
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 21 |  |  |  |  |  | shirt, 1girl, solo, upper_body, simple_background, smile, white_background, jacket, closed_mouth, looking_at_viewer, long_sleeves, sketch |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | shirt | 1girl | solo | upper_body | simple_background | smile | white_background | jacket | closed_mouth | looking_at_viewer | long_sleeves | sketch |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------|:-------------|:--------------------|:--------|:-------------------|:---------|:---------------|:--------------------|:---------------|:---------|
| 0 | 21 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X |
| CyberHarem/kristen_arknights | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-11T18:16:38+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-11T18:35:37+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of Kristen Wright (Arknights)
=====================================
This is the dataset of Kristen Wright (Arknights), containing 76 images and their tags.
The core tags of this character are 'long\_hair, blonde\_hair, blue\_eyes, hairband, black\_hairband, animal\_ears, dog\_ears, very\_long\_hair', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
0e6ce017cc22ad329c19d40901ddcd1ada90619d |
尝试解决"llm repetition problem",使用分词模型对oaast语料进行“结巴化”数据增强,提供更强的重复内容拒绝效果。
An attempt to solve the "llm repetition problem": a word-segmentation (jieba) model is used to augment the oaast corpus with "stuttering" data, providing stronger rejection of repetitive content.
其次,还过滤掉了所有自我认知的微调样本。
In addition, all self-cognition fine-tuning samples have been filtered out.
files:
- oaast_rm_full_jieba.jsonl : word level repeat
- oaast_rm_full_sent_jieba.jsonl : sentence level repeat | lenML/oaast_rm_full_jieba | [
"size_categories:10K<n<100K",
"language:en",
"language:es",
"language:ru",
"language:de",
"language:pl",
"language:th",
"language:vi",
"language:sv",
"language:bn",
"language:da",
"language:he",
"language:it",
"language:fa",
"language:sk",
"language:id",
"language:nb",
"language:el",
"language:nl",
"language:hu",
"language:eu",
"language:zh",
"language:eo",
"language:ja",
"language:ca",
"language:cs",
"language:bg",
"language:fi",
"language:pt",
"language:tr",
"language:ro",
"language:ar",
"language:uk",
"language:gl",
"language:fr",
"language:ko",
"license:apache-2.0",
"human-feedback",
"region:us"
] | 2024-01-11T18:16:53+00:00 | {"language": ["en", "es", "ru", "de", "pl", "th", "vi", "sv", "bn", "da", "he", "it", "fa", "sk", "id", "nb", "el", "nl", "hu", "eu", "zh", "eo", "ja", "ca", "cs", "bg", "fi", "pt", "tr", "ro", "ar", "uk", "gl", "fr", "ko"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "tags": ["human-feedback"]} | 2024-01-13T07:36:20+00:00 | [] | [
"en",
"es",
"ru",
"de",
"pl",
"th",
"vi",
"sv",
"bn",
"da",
"he",
"it",
"fa",
"sk",
"id",
"nb",
"el",
"nl",
"hu",
"eu",
"zh",
"eo",
"ja",
"ca",
"cs",
"bg",
"fi",
"pt",
"tr",
"ro",
"ar",
"uk",
"gl",
"fr",
"ko"
] | TAGS
#size_categories-10K<n<100K #language-English #language-Spanish #language-Russian #language-German #language-Polish #language-Thai #language-Vietnamese #language-Swedish #language-Bengali #language-Danish #language-Hebrew #language-Italian #language-Persian #language-Slovak #language-Indonesian #language-Norwegian Bokmål #language-Modern Greek (1453-) #language-Dutch #language-Hungarian #language-Basque #language-Chinese #language-Esperanto #language-Japanese #language-Catalan #language-Czech #language-Bulgarian #language-Finnish #language-Portuguese #language-Turkish #language-Romanian #language-Arabic #language-Ukrainian #language-Galician #language-French #language-Korean #license-apache-2.0 #human-feedback #region-us
|
尝试解决"llm repetition problem",使用分词模型对oaast语料进行“结巴化”数据增强,提供更强的重复内容拒绝效果。
Attempts to solve the "llm repetition problem" by using a segmentation model to enhance the oaast corpus with "stuttering" data to provide stronger rejection of duplicate content.
其次,还过滤掉了所有自我认知的微调样本。
Second, it also filters out all the fine-tuned samples of self-cognition.
files:
- oaast_rm_full_jieba.jsonl : word level repeat
- oaast_rm_full_sent_jieba.jsonl : sentence level repeat | [] | [
"TAGS\n#size_categories-10K<n<100K #language-English #language-Spanish #language-Russian #language-German #language-Polish #language-Thai #language-Vietnamese #language-Swedish #language-Bengali #language-Danish #language-Hebrew #language-Italian #language-Persian #language-Slovak #language-Indonesian #language-Norwegian Bokmål #language-Modern Greek (1453-) #language-Dutch #language-Hungarian #language-Basque #language-Chinese #language-Esperanto #language-Japanese #language-Catalan #language-Czech #language-Bulgarian #language-Finnish #language-Portuguese #language-Turkish #language-Romanian #language-Arabic #language-Ukrainian #language-Galician #language-French #language-Korean #license-apache-2.0 #human-feedback #region-us \n"
] |
53a62271133444a20b8336b984a30d3e205b5b54 | # Dataset Card for "jeopardy"
This is a processed version of the original [jeopardy dataset](https://huggingface.co/datasets/jeopardy).
The `test` split contains all the jeopardy questions in the following categories:
- american_history
- world_history
- literature
- science
- word_origins
The `train` split contains all the jeopardy questions from the remaining categories. | eitanturok/jeopardy | [
"region:us"
] | 2024-01-11T18:29:33+00:00 | {"dataset_info": {"features": [{"name": "category", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "continuation", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 304469, "num_examples": 2181}, {"name": "train", "num_bytes": 30495452, "num_examples": 214749}], "download_size": 19399995, "dataset_size": 30799921}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}, {"split": "train", "path": "data/train-*"}]}]} | 2024-01-11T18:38:53+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "jeopardy"
This is a processed version of the original jeopardy dataset.
The 'test' split contains all the jeopardy questions in the following categories:
- american_history
- world_history
- literature
- science
- word_origins
The 'train' split contains all the jeopardy questions from the remaining categories. | [
"# Dataset Card for \"jeopardy\"\n\nThis is a processed version of the original jeopardy dataset.\n\nThe 'test' split contains all the jeopardy questions in the following categories:\n- american_history\n- world_history\n- literature\n- science\n- word_origins\n\nThe 'train' split contains all the jeopardy questions from the remaining categories."
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"jeopardy\"\n\nThis is a processed version of the original jeopardy dataset.\n\nThe 'test' split contains all the jeopardy questions in the following categories:\n- american_history\n- world_history\n- literature\n- science\n- word_origins\n\nThe 'train' split contains all the jeopardy questions from the remaining categories."
] |
977329a2767f7874009839387bfbbeb87974971e |
# Dataset Card for Wikipedia English January 2024
Dataset created using this [repo](https://huggingface.co/datasets/NeuML/wikipedia) with a January 2024 Wikipedia snapshot.
This repo also has a precomputed pageviews database, which holds the aggregated number of views for each page in Wikipedia. This file is built from the Wikipedia [Pageview complete dumps](https://dumps.wikimedia.org/other/pageview_complete/readme.html).
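A minimal loading sketch with the `datasets` library is shown below; the `train` split name and streaming access are assumptions on our side, not guarantees from this card.
```python
from datasets import load_dataset

# stream the snapshot instead of downloading every article up front
wiki = load_dataset("NeuML/wikipedia-20240101", split="train", streaming=True)

# peek at a few articles; field names depend on the repo's schema
for article in wiki.take(3):
    print({key: str(value)[:80] for key, value in article.items()})
```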
| NeuML/wikipedia-20240101 | [
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"language:en",
"license:cc-by-sa-3.0",
"license:gfdl",
"pretraining",
"language modelling",
"wikipedia",
"web",
"region:us"
] | 2024-01-11T18:37:09+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-sa-3.0", "gfdl"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": [], "task_categories": [], "task_ids": [], "pretty_name": "Wikipedia English January 2024", "tags": ["pretraining", "language modelling", "wikipedia", "web"]} | 2024-01-11T19:01:59+00:00 | [] | [
"en"
] | TAGS
#annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #language-English #license-cc-by-sa-3.0 #license-gfdl #pretraining #language modelling #wikipedia #web #region-us
|
# Dataset Card for Wikipedia English January 2024
Dataset created using this repo with a January 2024 Wikipedia snapshot.
This repo also has a precomputed pageviews database. This database has the aggregated number of views for each page in Wikipedia. This file is built using the Wikipedia Pageview complete dumps
| [
"# Dataset Card for Wikipedia English January 2024\n\nDataset created using this repo with a January 2024 Wikipedia snapshot.\n\nThis repo also has a precomputed pageviews database. This database has the aggregated number of views for each page in Wikipedia. This file is built using the Wikipedia Pageview complete dumps"
] | [
"TAGS\n#annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #language-English #license-cc-by-sa-3.0 #license-gfdl #pretraining #language modelling #wikipedia #web #region-us \n",
"# Dataset Card for Wikipedia English January 2024\n\nDataset created using this repo with a January 2024 Wikipedia snapshot.\n\nThis repo also has a precomputed pageviews database. This database has the aggregated number of views for each page in Wikipedia. This file is built using the Wikipedia Pageview complete dumps"
] |
e4100cb496a2d69afd6fe717d8cb5a7688c67c07 | # dataset card for "hypnosis_dataset"
##### hypnosis scripts based on Erickson progressions
This is a small dataset of hypnosis scripts, some obtained from legitimate (manual) sources and some generated using the following closed and open models:
large LLMs:
- openai api
- cohere
- palm
open models:
- mistral-7b
- trismegistus-mistral-7b
- zephyr-7b
- mistral-anima-phi-7b
- mistral-instruct
The data has been cleaned but not altered save for formatting.
Some entries include a large, unsupervised text blob for training semantics and pattern flow.
Some repetition is present - future versions will work on enhanced cleaning and filtering, as well as additional generation using different model types that produce interesting scripts.
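Since the card does not include a usage example yet, here is a minimal loading sketch; the `model`, `timestamp`, and `content` field names and the `train` split come from the dataset metadata rather than from the prose above.
```python
from datasets import load_dataset

hypnosis = load_dataset("jtatman/hypnosis_dataset", split="train")

# each row records which model produced the script, a timestamp, and the script text
for row in hypnosis.select(range(3)):
    print(row["model"], row["timestamp"], str(row["content"])[:120])
```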
| jtatman/hypnosis_dataset | [
"task_categories:question-answering",
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"hypnosis",
"therapy",
"mental health",
"region:us"
] | 2024-01-11T18:45:24+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["question-answering", "text-generation", "conversational"], "pretty_name": "hypnosis data", "dataset_info": {"features": [{"name": "model", "dtype": "string"}, {"name": "timestamp", "dtype": "int64"}, {"name": "content", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 16476379, "num_examples": 1348}], "download_size": 0, "dataset_size": 16476379}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["hypnosis", "therapy", "mental health"]} | 2024-01-11T20:10:48+00:00 | [] | [
"en"
] | TAGS
#task_categories-question-answering #task_categories-text-generation #task_categories-conversational #size_categories-1K<n<10K #language-English #license-apache-2.0 #hypnosis #therapy #mental health #region-us
| # dataset card for "hypnosis_dataset"
##### hypnosis scripts based on Erickson progressions
This is a small dataset containing hypnosis scripts that were both obtained from legitimate (manual) sources, and also generated using the following closed and open models:
large llm:
- openai api
- cohere
- palm
open models:
- mistral-7b
- trismegistus-mistral-7b
- zephyr-7b
- mistral-anima-phi-7b
- mistral-instruct
The data has been cleaned but not altered save for formatting.
Some entries include a large, unsupervised text blob for training semantics and pattern flow.
Some repetition is present - future versions will work on enhanced cleaning and filtering, as well as additional generation using different model types that produce interesting scripts.
| [
"# dataset card for \"hypnosis_dataset\"",
"##### hypnosis scripts based on Erickson progressions\n\n\n\nThis is a small dataset containing hypnosis scripts that were both obtained from legitimate (manual) sources, and also generated using the following closed and open models:\nlarge llm:\n- openai api\n- cohere\n- palm\nopen models:\n- mistral-7b\n- trismegistus-mistral-7b\n- zephyr-7b\n- mistral-anima-phi-7b\n- mistral-instruct\n\nThe data has been cleaned but not altered save for formatting. \nSome entries include a large, unsupervised text blob for training semantics and pattern flow.\nSome repetition is present - future versions will work on enhanced cleaning and filtering, as well as additional generation using different model types that produce interesting scripts."
] | [
"TAGS\n#task_categories-question-answering #task_categories-text-generation #task_categories-conversational #size_categories-1K<n<10K #language-English #license-apache-2.0 #hypnosis #therapy #mental health #region-us \n",
"# dataset card for \"hypnosis_dataset\"",
"##### hypnosis scripts based on Erickson progressions\n\n\n\nThis is a small dataset containing hypnosis scripts that were both obtained from legitimate (manual) sources, and also generated using the following closed and open models:\nlarge llm:\n- openai api\n- cohere\n- palm\nopen models:\n- mistral-7b\n- trismegistus-mistral-7b\n- zephyr-7b\n- mistral-anima-phi-7b\n- mistral-instruct\n\nThe data has been cleaned but not altered save for formatting. \nSome entries include a large, unsupervised text blob for training semantics and pattern flow.\nSome repetition is present - future versions will work on enhanced cleaning and filtering, as well as additional generation using different model types that produce interesting scripts."
] |
125b4584caee80acb3d8508786e36d09bbce47c5 | # Tweets
## Overview
This dataset contains texts posted on Twitter by customers about their air travel experiences,
indicating whether they were upset, neutral, or satisfied with the trip and the airline's service.
## Dataset Details
The dataset is a smaller version of the original dataset. This data originally came from [Crowdflower's Data for Everyone library](http://www.crowdflower.com/data-for-everyone).
The original Twitter data was scraped from February 2015, and contributors were asked first to classify positive, negative, and neutral tweets,
followed by categorizing negative reasons (such as "late flight" or "rude service").
This version records whether the sentiment of the tweets in this set was positive (16%), neutral (21%), or negative (63%) for six US airlines.
- Dataset Name: [Twitter US Airline Sentiment](https://www.kaggle.com/datasets/crowdflower/twitter-airline-sentiment)
- Language: English
- Total Size: 14,640 demonstrations
## Contents
The dataset consists of a data frame with the following columns:
- label
- text
```bash
{
"label": 0,
"text": "virginamerica why are your first fares in may over three times more than other carriers when all seats are available to select.",
}
```
## How to use
```python
from datasets import load_dataset
dataset = load_dataset("AiresPucrs/tweets", split='train')
```
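To double-check the sentiment proportions quoted above, a small sketch follows; it assumes the integer `label` column encodes the three sentiment classes (the exact label-to-sentiment mapping is not documented in this card).
```python
from collections import Counter
from datasets import load_dataset

dataset = load_dataset("AiresPucrs/tweets", split="train")

# count how often each integer label occurs and report its share of the 14,640 tweets
counts = Counter(dataset["label"])
total = sum(counts.values())
for label, count in sorted(counts.items()):
    print(f"label {label}: {count} tweets ({count / total:.0%})")
```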
## License
The Twitter US Airline Sentiment dataset is licensed under the [Creative Commons (CC)](https://creativecommons.org/licenses/by-nc-sa/4.0/) license CC BY-NC-SA 4.0.
| AiresPucrs/tweets | [
"task_categories:text-classification",
"size_categories:10K<n<100K",
"language:en",
"license:cc",
"toxicity",
"region:us"
] | 2024-01-11T19:05:01+00:00 | {"language": ["en"], "license": "cc", "size_categories": ["10K<n<100K"], "task_categories": ["text-classification"], "pretty_name": "Tweets", "tags": ["toxicity"], "dataset_info": {"features": [{"name": "label", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1621836, "num_examples": 14640}], "download_size": 894257, "dataset_size": 1621836}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-01-12T19:12:57+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #size_categories-10K<n<100K #language-English #license-cc #toxicity #region-us
| # Tweets
## Overview
This dataset contains texts from customers posted on Twitter regarding their air travel experiences,
whether they were upset, neutral, or satisfied with the trip and the airline's service.
## Dataset Details
The dataset is a smaller version of the original datase. This data originally came from Crowdflower's Data for Everyone library
The original Twitter data was scraped from February 2015, and contributors were asked first to classify positive, negative, and neutral tweets,
followed by categorizing negative reasons (such as "late flight" or "rude service").
This version contains whether the sentiment of the tweets in this set was positive (16%), neutral (21%), or negative (63%) for six US airlines.
- Dataset Name: Twitter US Airline Sentiment
- Language: English
- Total Size: 14,640 demonstrations
## Contents
The dataset consists of a data frame with the following columns:
- label
- text
## How to use
## License
The Twitter US Airline Sentiment is licensed under the Creative Commons(CC) License CC BY-NC-SA 4.0.
| [
"# Tweets",
"## Overview \n\nThis dataset contains texts from customers posted on Twitter regarding their air travel experiences,\nwhether they were upset, neutral, or satisfied with the trip and the airline's service.",
"## Dataset Details\n\nThe dataset is a smaller version of the original datase. This data originally came from Crowdflower's Data for Everyone library\n The original Twitter data was scraped from February 2015, and contributors were asked first to classify positive, negative, and neutral tweets,\nfollowed by categorizing negative reasons (such as \"late flight\" or \"rude service\").\nThis version contains whether the sentiment of the tweets in this set was positive (16%), neutral (21%), or negative (63%) for six US airlines.\n\n- Dataset Name: Twitter US Airline Sentiment\n- Language: English\n- Total Size: 14,640 demonstrations",
"## Contents\n\nThe dataset consists of a data frame with the following columns:\n\n- label\n- text",
"## How to use",
"## License\n\nThe Twitter US Airline Sentiment is licensed under the Creative Commons(CC) License CC BY-NC-SA 4.0."
] | [
"TAGS\n#task_categories-text-classification #size_categories-10K<n<100K #language-English #license-cc #toxicity #region-us \n",
"# Tweets",
"## Overview \n\nThis dataset contains texts from customers posted on Twitter regarding their air travel experiences,\nwhether they were upset, neutral, or satisfied with the trip and the airline's service.",
"## Dataset Details\n\nThe dataset is a smaller version of the original datase. This data originally came from Crowdflower's Data for Everyone library\n The original Twitter data was scraped from February 2015, and contributors were asked first to classify positive, negative, and neutral tweets,\nfollowed by categorizing negative reasons (such as \"late flight\" or \"rude service\").\nThis version contains whether the sentiment of the tweets in this set was positive (16%), neutral (21%), or negative (63%) for six US airlines.\n\n- Dataset Name: Twitter US Airline Sentiment\n- Language: English\n- Total Size: 14,640 demonstrations",
"## Contents\n\nThe dataset consists of a data frame with the following columns:\n\n- label\n- text",
"## How to use",
"## License\n\nThe Twitter US Airline Sentiment is licensed under the Creative Commons(CC) License CC BY-NC-SA 4.0."
] |
c721145b718b97120448381563238f52e8dc2c49 |
# GUM Corpus V9.2.0
- Project: https://github.com/amir-zeldes/gum
- Data source: https://github.com/amir-zeldes/gum/commit/3b0ab7d11911be1695e4dacadb28a7a1df230bdb
## Details
An English corpus annotated for coreference and other linguistic phenomena. See the project repo for full corpora license information. Annotations are licensed under CC-BY-4.0.
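A minimal loading sketch with the `datasets` library follows; the `train` split name is an assumption on our side, and the field names match the Features section below.
```python
from datasets import load_dataset

# the split name is an assumption; adjust it to whatever this repo actually provides
gum = load_dataset("coref-data/gum_raw", split="train")

doc = gum[0]
print(doc["doc_id"], "-", len(doc["sentences"]), "sentences")

# each entry of coref_entities is one coreference chain: a list of mentions
for chain in doc["coref_entities"][:3]:
    print([(mention["sent_id"], mention["span"]) for mention in chain])
```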
## Citation
```
@Article{Zeldes2017,
author = {Amir Zeldes},
title = {The {GUM} Corpus: Creating Multilayer Resources in the Classroom},
journal = {Language Resources and Evaluation},
year = {2017},
volume = {51},
number = {3},
pages = {581--612},
doi = {http://dx.doi.org/10.1007/s10579-016-9343-x}
}
@InProceedings{ZhuEtAl2021,
author = {Yilun Zhu and Sameer Pradhan and Amir Zeldes},
booktitle = {Proceedings of ACL-IJCNLP 2021},
title = {{OntoGUM}: Evaluating Contextualized {SOTA} Coreference Resolution on 12 More Genres},
year = {2021},
pages = {461--467},
address = {Bangkok, Thailand}
}
```
## Features
```python
{'coref_entities': [[{'eid': Value(dtype='string', id=None),
'eid_or_grp': Value(dtype='string', id=None),
'etype': Value(dtype='string', id=None),
'other': Value(dtype='string', id=None),
'sent_id': Value(dtype='string', id=None),
'span': Value(dtype='string', id=None)}]],
'doc_id': Value(dtype='string', id=None),
'ontogum_coref_chains': Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='int64',
id=None),
length=-1,
id=None),
length=-1,
id=None),
length=-1,
id=None),
'ontogum_sentences': [[{'deprel': Value(dtype='string', id=None),
'deps': Value(dtype='string', id=None),
'feats': Value(dtype='string', id=None),
'head': Value(dtype='int64', id=None),
'id': Value(dtype='int64', id=None),
'lemma': Value(dtype='string', id=None),
'misc': Value(dtype='string', id=None),
'text': Value(dtype='string', id=None),
'upos': Value(dtype='string', id=None),
'xpos': Value(dtype='string', id=None)}]],
'sentences': [{'comment': Value(dtype='string', id=None),
'conll_rows': [{'deprel': Value(dtype='string', id=None),
'deps': Value(dtype='string', id=None),
'feats': Value(dtype='string', id=None),
'head': Value(dtype='int64', id=None),
'id': Value(dtype='int64', id=None),
'lemma': Value(dtype='string', id=None),
'misc': Value(dtype='string', id=None),
'text': Value(dtype='string', id=None),
'upos': Value(dtype='string', id=None),
'xpos': Value(dtype='string', id=None)}],
'global_entity': Value(dtype='string', id=None),
'newdoc': Value(dtype='string', id=None),
'newpar': Value(dtype='bool', id=None),
'sent_id': Value(dtype='string', id=None),
'speaker': Value(dtype='string', id=None),
'text': Value(dtype='string', id=None),
'tokens': [{'coref_mentions': [{'eid': Value(dtype='string',
id=None),
'eid_or_grp': Value(dtype='string',
id=None),
'etype': Value(dtype='string',
id=None),
'other': {'centering': Value(dtype='string',
id=None),
'identity': Value(dtype='string',
id=None),
'infstat': Value(dtype='string',
id=None),
'link': Value(dtype='string',
id=None),
'minspan': Value(dtype='string',
id=None)},
'span': Value(dtype='string',
id=None)}],
'deprel': Value(dtype='string', id=None),
'feats': Value(dtype='string', id=None),
'form': Value(dtype='string', id=None),
'head': Value(dtype='int64', id=None),
'lemma': Value(dtype='string', id=None),
'misc': Value(dtype='string', id=None),
'ord': Value(dtype='float64', id=None),
'upos': Value(dtype='string', id=None),
'xpos': Value(dtype='string', id=None)}]}]}
``` | coref-data/gum_raw | [
"license:other",
"region:us"
] | 2024-01-11T19:05:28+00:00 | {"license": "other"} | 2024-01-21T04:44:12+00:00 | [] | [] | TAGS
#license-other #region-us
|
# GUM Corpus V9.2.0
- Project: URL
- Data source: URL
## Details
An English corpus annotated for coreference and other linguistic phenomenon. See the project repo for full corpora license information. Annotations are licensed under CC-BY-4.0.
## Features
| [
"# GUM Corpus V9.2.0\n\n- Project: URL\n- Data source: URL",
"## Details\n\nAn English corpus annotated for coreference and other linguistic phenomenon. See the project repo for full corpora license information. Annotations are licensed under CC-BY-4.0.",
"## Features"
] | [
"TAGS\n#license-other #region-us \n",
"# GUM Corpus V9.2.0\n\n- Project: URL\n- Data source: URL",
"## Details\n\nAn English corpus annotated for coreference and other linguistic phenomenon. See the project repo for full corpora license information. Annotations are licensed under CC-BY-4.0.",
"## Features"
] |
0cbfaf844ef0f42da5a62ab38a287ef2d278cd7d | # Dataset Card for kz919/open-orca-flan-50k-synthetic-reward-e5-mistral-7b-instruct
## Dataset Description
This data is based on [kz919/open-orca-flan-50k-synthetic-5-models](https://huggingface.co/datasets/kz919/open-orca-flan-50k-synthetic-5-models). [intfloat/e5-mistral-7b-instruct](https://huggingface.co/intfloat/e5-mistral-7b-instruct) is used to generate the router label.
### Dataset Info
The dataset comprises the following features:
1. **prompt**: (string) - The initial prompt or query.
2. **completion**: (string) - The completed text or response.
3. **task**: (string) - Description of the task.
4. **ignos-Mistral-T5-7B-v1**: (string) - Responses from the ignos-Mistral-T5-7B-v1 model.
5. **cognAI-lil-c3po**: (string) - Responses from the cognAI-lil-c3po model.
6. **viethq188-Rabbit-7B-DPO-Chat**: (string) - Responses from the viethq188-Rabbit-7B-DPO-Chat model.
7. **cookinai-DonutLM-v1**: (string) - Responses from the cookinai-DonutLM-v1 model.
8. **v1olet-v1olet-merged-dpo-7B**: (string) - Responses from the v1olet-v1olet-merged-dpo-7B model.
9. **normalized_rewards**: (sequence of float32) - Normalized reward scores.
10. **router_label**: (int64) - Router labels.
### Splits
- **Train**:
- **num_bytes**: 105157970
- **num_examples**: 50000
### Size
- **Download Size**: 48532376
- **Dataset Size**: 105157970
## Configurations
- **Config Name**: default
- **Data Files**:
- **Train**:
- **Path**: data/train-*
## Task Categories
- Text Generation
## Language
- English (en)
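A minimal sketch of loading the data and inspecting the routing fields described above is given below; the `train` split name comes from the split info, while the idea that `router_label` indexes the highest-scoring entry of `normalized_rewards` is our assumption, not something this card states.
```python
from datasets import load_dataset

ds = load_dataset(
    "kz919/open-orca-flan-50k-synthetic-reward-e5-mistral-7b-instruct",
    split="train",
)

example = ds[0]
rewards = example["normalized_rewards"]
best = max(range(len(rewards)), key=lambda i: rewards[i])

# assumption: the router label points at the best-rewarded of the five model responses
print("router_label:", example["router_label"], "argmax(normalized_rewards):", best)
```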
| kz919/open-orca-flan-50k-synthetic-reward-e5-mistral-7b-instruct | [
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"region:us"
] | 2024-01-11T19:24:12+00:00 | {"language": ["en"], "license": "apache-2.0", "task_categories": ["text-generation"], "pretty_name": "kz919/open-orca-flan-50k-synthetic-5-models labelled by intfloat/e5-mistral-7b-instruct", "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "completion", "dtype": "string"}, {"name": "task", "dtype": "string"}, {"name": "ignos-Mistral-T5-7B-v1", "dtype": "string"}, {"name": "cognAI-lil-c3po", "dtype": "string"}, {"name": "viethq188-Rabbit-7B-DPO-Chat", "dtype": "string"}, {"name": "cookinai-DonutLM-v1", "dtype": "string"}, {"name": "v1olet-v1olet-merged-dpo-7B", "dtype": "string"}, {"name": "normalized_rewards", "sequence": "float32"}, {"name": "router_label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 105157970, "num_examples": 50000}], "download_size": 48532376, "dataset_size": 105157970}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-01-13T14:45:23+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-generation #language-English #license-apache-2.0 #region-us
| # Dataset Card for kz919/open-orca-flan-50k-synthetic-reward-e5-mistral-7b-instruct
## Dataset Description
This data is based on kz919/open-orca-flan-50k-synthetic-5-models. intfloat/e5-mistral-7b-instruct is used to generate the router label.
### Dataset Info
The dataset comprises the following features:
1. prompt: (string) - The initial prompt or query.
2. completion: (string) - The completed text or response.
3. task: (string) - Description of the task.
4. ignos-Mistral-T5-7B-v1: (string) - Responses from the ignos-Mistral-T5-7B-v1 model.
5. cognAI-lil-c3po: (string) - Responses from the cognAI-lil-c3po model.
6. viethq188-Rabbit-7B-DPO-Chat: (string) - Responses from the viethq188-Rabbit-7B-DPO-Chat model.
7. cookinai-DonutLM-v1: (string) - Responses from the cookinai-DonutLM-v1 model.
8. v1olet-v1olet-merged-dpo-7B: (string) - Responses from the v1olet-v1olet-merged-dpo-7B model.
9. normalized_rewards: (sequence of float32) - Normalized reward scores.
10. router_label: (int64) - Router labels.
### Splits
- Train:
- num_bytes: 105157970
- num_examples: 50000
### Size
- Download Size: 48532376
- Dataset Size: 105157970
## Configurations
- Config Name: default
- Data Files:
- Train:
- Path: data/train-*
## Task Categories
- Text Generation
## Language
- English (en)
| [
"# Dataset Card for kz919/open-orca-flan-50k-synthetic-reward-e5-mistral-7b-instruct",
"## Dataset Description\n\nThis data is based on kz919/open-orca-flan-50k-synthetic-5-models. intfloat/e5-mistral-7b-instruct is used to generate the router label.",
"### Dataset Info\n\nThe dataset comprises the following features:\n\n1. prompt: (string) - The initial prompt or query.\n2. completion: (string) - The completed text or response.\n3. task: (string) - Description of the task.\n4. ignos-Mistral-T5-7B-v1: (string) - Responses from the ignos-Mistral-T5-7B-v1 model.\n5. cognAI-lil-c3po: (string) - Responses from the cognAI-lil-c3po model.\n6. viethq188-Rabbit-7B-DPO-Chat: (string) - Responses from the viethq188-Rabbit-7B-DPO-Chat model.\n7. cookinai-DonutLM-v1: (string) - Responses from the cookinai-DonutLM-v1 model.\n8. v1olet-v1olet-merged-dpo-7B: (string) - Responses from the v1olet-v1olet-merged-dpo-7B model.\n9. normalized_rewards: (sequence of float32) - Normalized reward scores.\n10. router_label: (int64) - Router labels.",
"### Splits\n\n- Train: \n - num_bytes: 105157970\n - num_examples: 50000",
"### Size\n\n- Download Size: 48532376\n- Dataset Size: 105157970",
"## Configurations\n\n- Config Name: default\n- Data Files:\n - Train:\n - Path: data/train-*",
"## Task Categories\n\n- Text Generation",
"## Language\n\n- English (en)"
] | [
"TAGS\n#task_categories-text-generation #language-English #license-apache-2.0 #region-us \n",
"# Dataset Card for kz919/open-orca-flan-50k-synthetic-reward-e5-mistral-7b-instruct",
"## Dataset Description\n\nThis data is based on kz919/open-orca-flan-50k-synthetic-5-models. intfloat/e5-mistral-7b-instruct is used to generate the router label.",
"### Dataset Info\n\nThe dataset comprises the following features:\n\n1. prompt: (string) - The initial prompt or query.\n2. completion: (string) - The completed text or response.\n3. task: (string) - Description of the task.\n4. ignos-Mistral-T5-7B-v1: (string) - Responses from the ignos-Mistral-T5-7B-v1 model.\n5. cognAI-lil-c3po: (string) - Responses from the cognAI-lil-c3po model.\n6. viethq188-Rabbit-7B-DPO-Chat: (string) - Responses from the viethq188-Rabbit-7B-DPO-Chat model.\n7. cookinai-DonutLM-v1: (string) - Responses from the cookinai-DonutLM-v1 model.\n8. v1olet-v1olet-merged-dpo-7B: (string) - Responses from the v1olet-v1olet-merged-dpo-7B model.\n9. normalized_rewards: (sequence of float32) - Normalized reward scores.\n10. router_label: (int64) - Router labels.",
"### Splits\n\n- Train: \n - num_bytes: 105157970\n - num_examples: 50000",
"### Size\n\n- Download Size: 48532376\n- Dataset Size: 105157970",
"## Configurations\n\n- Config Name: default\n- Data Files:\n - Train:\n - Path: data/train-*",
"## Task Categories\n\n- Text Generation",
"## Language\n\n- English (en)"
] |
48adfc6c8f6cab83872f5681760036a32049bd87 |
# CorefUD v1.1
- Project: https://ufal.mff.cuni.cz/corefud
- Data source: https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-5053
## Details
From https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-5053:
CorefUD is a collection of previously existing datasets annotated with coreference, which we converted into a common annotation scheme. In total, CorefUD in its current version 1.1 consists of 21 datasets for 13 languages. The datasets are enriched with automatic morphological and syntactic annotations that are fully compliant with the standards of the Universal Dependencies project. All the datasets are stored in the CoNLL-U format, with coreference- and bridging-specific information captured by attribute-value pairs located in the MISC column.

The collection is divided into a public edition and a non-public (ÚFAL-internal) edition. The publicly available edition is distributed via LINDAT-CLARIAH-CZ and contains 17 datasets for 12 languages (1 dataset for Catalan, 2 for Czech, 2 for English, 1 for French, 2 for German, 2 for Hungarian, 1 for Lithuanian, 2 for Norwegian, 1 for Polish, 1 for Russian, 1 for Spanish, and 1 for Turkish), excluding the test data. The non-public edition is available internally to ÚFAL members and contains additional 4 datasets for 2 languages (1 dataset for Dutch, and 3 for English), which we are not allowed to distribute due to their original license limitations. It also contains the test data portions for all datasets. When using any of the harmonized datasets, please get acquainted with its license (placed in the same directory as the data) and cite the original data resource too.

Compared to the previous version 1.0, the version 1.1 comprises new languages and corpora, namely Hungarian-KorKor, Norwegian-BokmaalNARC, Norwegian-NynorskNARC, and Turkish-ITCC. In addition, the English GUM dataset has been updated to a newer and larger version, and the conversion pipelines for most datasets have been refined (a list of all changes in each dataset can be found in the corresponding README file).
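A minimal loading sketch with the `datasets` library is shown below, using one of the per-language configurations listed in this repo; the `train`/`validation` split names follow the repo's data files.
```python
from datasets import load_dataset

# config names correspond to the per-language datasets, e.g. "en_gum-corefud"
corefud = load_dataset("coref-data/corefud_raw", "en_gum-corefud", split="validation")

doc = corefud[0]
print(doc["doc_id"], "-", len(doc["sentences"]), "sentences")

# each chain in coref_entities is a list of mentions with a sentence id and token span
for chain in doc["coref_entities"][:2]:
    print([(mention["sent_id"], mention["span"]) for mention in chain])
```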
## Licenses
1. Catalan-AnCora: The treebank is licensed under the Creative Commons License Attribution 4.0 International. The complete license text is available at:
https://creativecommons.org/licenses/by/4.0/legalcode
1. Czech-PCEDT: Attribution-NonCommercial-ShareAlike 3.0 Unported
1. Czech-PDT: Attribution-NonCommercial-ShareAlike 4.0 International
1. GUM: Annotations are Creative Commons Attribution (CC-BY) version 4.0. Note: reddit data is excluded from CorefUD due to licensing. The source texts are licensed per genre:
   1. Wikinews/interviews: http://creativecommons.org/licenses/by/2.5/ (Source: https://en.wikinews.org/wiki/Wikinews:Copyright)
   1. WikiVoyage: https://creativecommons.org/licenses/by-sa/3.0/ (Source: https://wikimediafoundation.org/wiki/Terms_of_Use)
   1. WikiHow: http://creativecommons.org/licenses/by-nc-sa/3.0/ (Source: http://www.wikihow.com/wikiHow:Creative-Commons)
   1. Academic: Multiple sources, all https://creativecommons.org/licenses/by/4.0/
   1. Biographies: http://creativecommons.org/licenses/by-sa/3.0/ (Source: https://en.wikipedia.org/wiki/Wikipedia:Copyrights)
   1. Fiction: http://creativecommons.org/licenses/by-nc-sa/3.0/ (Source: http://smallbeerpress.com/creative-commons/)
1. English-ParCorFull: Attribution-NonCommercial 4.0 International
1. French-Democrat: Attribution-ShareAlike 4.0 International
1. German-ParCorFull: Attribution-NonCommercial 4.0 International
1. German-PotsdamCC: Attribution-NonCommercial-ShareAlike 4.0 International
1. Hungarian-KorKor: Creative Commons Attribution 4.0 International Public License
1. Hungarian-SzegedKoref: Attribution 4.0 International
1. Lithuanian-LCC: CLARIN-LT PUBLIC END-USER LICENCE (PUB)
1. Norwegian-BokmaaINARC: Attribution-ShareAlike 4.0 International
1. Norwegian-NynorskNARC: Attribution-ShareAlike 4.0 International
1. Polish-PCC: CC Attribution 3.0 Unported
1. Russian-RuCor: Attribution-ShareAlike 4.0 International
1. Spanish-AnCora: Creative Commons License Attribution 4.0 International
1. Turkish-ITCC: Attribution-NonCommercial-ShareAlike 4.0 International
## Citation
```
@misc{11234/1-5053,
title = {Coreference in Universal Dependencies 1.1 ({CorefUD} 1.1)},
author = {Nov{\'a}k, Michal and Popel, Martin and {\v Z}abokrtsk{\'y}, Zden{\v e}k and Zeman, Daniel and Nedoluzhko, Anna and Acar, Kutay and Bourgonje, Peter and Cinkov{\'a}, Silvie and Cebiro{\u g}lu Eryi{\u g}it, G{\"u}l{\c s}en and Haji{\v c}, Jan and Hardmeier, Christian and Haug, Dag and J{\o}rgensen, Tollef and K{\aa}sen, Andre and Krielke, Pauline and Landragin, Fr{\'e}d{\'e}ric and Lapshinova-Koltunski, Ekaterina and M{\ae}hlum, Petter and Mart{\'{\i}}, M. Ant{\`o}nia and Mikulov{\'a}, Marie and N{\o}klestad, Anders and Ogrodniczuk, Maciej and {\O}vrelid, Lilja and Pamay Arslan, Tu{\u g}ba and Recasens, Marta and Solberg, Per Erik and Stede, Manfred and Straka, Milan and Toldova, Svetlana and Vad{\'a}sz, No{\'e}mi and Velldal, Erik and Vincze, Veronika and Zeldes, Amir and {\v Z}itkus, Voldemaras},
url = {http://hdl.handle.net/11234/1-5053},
note = {{LINDAT}/{CLARIAH}-{CZ} digital library at the Institute of Formal and Applied Linguistics ({{\'U}FAL}), Faculty of Mathematics and Physics, Charles University},
copyright = {Licence {CorefUD} v1.1},
year = {2023} }
``` | coref-data/corefud_raw | [
"license:other",
"region:us"
] | 2024-01-11T19:41:35+00:00 | {"license": "other", "configs": [{"config_name": "ca_ancora-corefud", "data_files": [{"split": "train", "path": "ca_ancora-corefud/train-*"}, {"split": "validation", "path": "ca_ancora-corefud/validation-*"}]}, {"config_name": "cs_pcedt-corefud", "data_files": [{"split": "train", "path": "cs_pcedt-corefud/train-*"}, {"split": "validation", "path": "cs_pcedt-corefud/validation-*"}]}, {"config_name": "cs_pdt-corefud", "data_files": [{"split": "train", "path": "cs_pdt-corefud/train-*"}, {"split": "validation", "path": "cs_pdt-corefud/validation-*"}]}, {"config_name": "de_parcorfull-corefud", "data_files": [{"split": "train", "path": "de_parcorfull-corefud/train-*"}, {"split": "validation", "path": "de_parcorfull-corefud/validation-*"}]}, {"config_name": "de_potsdamcc-corefud", "data_files": [{"split": "train", "path": "de_potsdamcc-corefud/train-*"}, {"split": "validation", "path": "de_potsdamcc-corefud/validation-*"}]}, {"config_name": "en_gum-corefud", "data_files": [{"split": "train", "path": "en_gum-corefud/train-*"}, {"split": "validation", "path": "en_gum-corefud/validation-*"}]}, {"config_name": "en_parcorfull-corefud", "data_files": [{"split": "train", "path": "en_parcorfull-corefud/train-*"}, {"split": "validation", "path": "en_parcorfull-corefud/validation-*"}]}, {"config_name": "es_ancora-corefud", "data_files": [{"split": "train", "path": "es_ancora-corefud/train-*"}, {"split": "validation", "path": "es_ancora-corefud/validation-*"}]}, {"config_name": "fr_democrat-corefud", "data_files": [{"split": "train", "path": "fr_democrat-corefud/train-*"}, {"split": "validation", "path": "fr_democrat-corefud/validation-*"}]}, {"config_name": "hu_korkor-corefud", "data_files": [{"split": "train", "path": "hu_korkor-corefud/train-*"}, {"split": "validation", "path": "hu_korkor-corefud/validation-*"}]}, {"config_name": "hu_szegedkoref-corefud", "data_files": [{"split": "train", "path": "hu_szegedkoref-corefud/train-*"}, {"split": "validation", "path": "hu_szegedkoref-corefud/validation-*"}]}, {"config_name": "lt_lcc-corefud", "data_files": [{"split": "train", "path": "lt_lcc-corefud/train-*"}, {"split": "validation", "path": "lt_lcc-corefud/validation-*"}]}, {"config_name": "no_bokmaalnarc-corefud", "data_files": [{"split": "train", "path": "no_bokmaalnarc-corefud/train-*"}, {"split": "validation", "path": "no_bokmaalnarc-corefud/validation-*"}]}, {"config_name": "no_nynorsknarc-corefud", "data_files": [{"split": "train", "path": "no_nynorsknarc-corefud/train-*"}, {"split": "validation", "path": "no_nynorsknarc-corefud/validation-*"}]}, {"config_name": "pl_pcc-corefud", "data_files": [{"split": "train", "path": "pl_pcc-corefud/train-*"}, {"split": "validation", "path": "pl_pcc-corefud/validation-*"}]}, {"config_name": "ru_rucor-corefud", "data_files": [{"split": "train", "path": "ru_rucor-corefud/train-*"}, {"split": "validation", "path": "ru_rucor-corefud/validation-*"}]}, {"config_name": "tr_itcc-corefud", "data_files": [{"split": "train", "path": "tr_itcc-corefud/train-*"}, {"split": "validation", "path": "tr_itcc-corefud/validation-*"}]}], "dataset_info": [{"config_name": "ca_ancora-corefud", "features": [{"name": "doc_id", "dtype": "string"}, {"name": "sentences", "list": [{"name": "comment", "dtype": "string"}, {"name": "global_entity", "dtype": "string"}, {"name": "newdoc", "dtype": "string"}, {"name": "newpar", "dtype": "null"}, {"name": "sent_id", "dtype": "string"}, {"name": "speaker", "dtype": "null"}, {"name": "text", "dtype": "string"}, {"name": 
"tokens", "list": [{"name": "coref_mentions", "list": [{"name": "eid", "dtype": "string"}, {"name": "eid_or_grp", "dtype": "string"}, {"name": "etype", "dtype": "string"}, {"name": "other", "dtype": "string"}, {"name": "span", "dtype": "string"}]}, {"name": "deprel", "dtype": "string"}, {"name": "feats", "dtype": "string"}, {"name": "form", "dtype": "string"}, {"name": "head", "dtype": "int64"}, {"name": "lemma", "dtype": "string"}, {"name": "misc", "dtype": "string"}, {"name": "ord", "dtype": "float64"}, {"name": "upos", "dtype": "string"}, {"name": "xpos", "dtype": "string"}]}]}, {"name": "coref_entities", "list": {"list": [{"name": "eid", "dtype": "string"}, {"name": "eid_or_grp", "dtype": "string"}, {"name": "etype", "dtype": "string"}, {"name": "other", "dtype": "string"}, {"name": "sent_id", "dtype": "string"}, {"name": "span", "dtype": "string"}]}}], "splits": [{"name": "train", "num_bytes": 55414823, "num_examples": 1011}, {"name": "validation", "num_bytes": 8272249, "num_examples": 131}], "download_size": 9176152, "dataset_size": 63687072}, {"config_name": "cs_pcedt-corefud", "features": [{"name": "doc_id", "dtype": "string"}, {"name": "sentences", "list": [{"name": "comment", "dtype": "string"}, {"name": "global_entity", "dtype": "string"}, {"name": "newdoc", "dtype": "string"}, {"name": "newpar", "dtype": "string"}, {"name": "sent_id", "dtype": "string"}, {"name": "speaker", "dtype": "null"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "list": [{"name": "coref_mentions", "list": [{"name": "eid", "dtype": "string"}, {"name": "eid_or_grp", "dtype": "string"}, {"name": "etype", "dtype": "string"}, {"name": "other", "dtype": "string"}, {"name": "span", "dtype": "string"}]}, {"name": "deprel", "dtype": "string"}, {"name": "feats", "dtype": "string"}, {"name": "form", "dtype": "string"}, {"name": "head", "dtype": "int64"}, {"name": "lemma", "dtype": "string"}, {"name": "misc", "dtype": "string"}, {"name": "ord", "dtype": "float64"}, {"name": "upos", "dtype": "string"}, {"name": "xpos", "dtype": "string"}]}]}, {"name": "coref_entities", "list": {"list": [{"name": "eid", "dtype": "string"}, {"name": "eid_or_grp", "dtype": "string"}, {"name": "etype", "dtype": "string"}, {"name": "other", "dtype": "string"}, {"name": "sent_id", "dtype": "string"}, {"name": "span", "dtype": "string"}]}}], "splits": [{"name": "train", "num_bytes": 181873031, "num_examples": 1875}, {"name": "validation", "num_bytes": 31760999, "num_examples": 337}], "download_size": 34234633, "dataset_size": 213634030}, {"config_name": "cs_pdt-corefud", "features": [{"name": "doc_id", "dtype": "string"}, {"name": "sentences", "list": [{"name": "comment", "dtype": "string"}, {"name": "global_entity", "dtype": "string"}, {"name": "newdoc", "dtype": "string"}, {"name": "newpar", "dtype": "string"}, {"name": "sent_id", "dtype": "string"}, {"name": "speaker", "dtype": "null"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "list": [{"name": "coref_mentions", "list": [{"name": "eid", "dtype": "string"}, {"name": "eid_or_grp", "dtype": "string"}, {"name": "etype", "dtype": "string"}, {"name": "other", "dtype": "string"}, {"name": "span", "dtype": "string"}]}, {"name": "deprel", "dtype": "string"}, {"name": "feats", "dtype": "string"}, {"name": "form", "dtype": "string"}, {"name": "head", "dtype": "int64"}, {"name": "lemma", "dtype": "string"}, {"name": "misc", "dtype": "string"}, {"name": "ord", "dtype": "float64"}, {"name": "upos", "dtype": "string"}, {"name": "xpos", "dtype": "string"}]}]}, {"name": 
"coref_entities", "list": {"list": [{"name": "eid", "dtype": "string"}, {"name": "eid_or_grp", "dtype": "string"}, {"name": "etype", "dtype": "string"}, {"name": "other", "dtype": "string"}, {"name": "sent_id", "dtype": "string"}, {"name": "span", "dtype": "string"}]}}], "splits": [{"name": "train", "num_bytes": 139395037, "num_examples": 2533}, {"name": "validation", "num_bytes": 19218699, "num_examples": 316}], "download_size": 28371405, "dataset_size": 158613736}, {"config_name": "de_parcorfull-corefud", "features": [{"name": "doc_id", "dtype": "string"}, {"name": "sentences", "list": [{"name": "comment", "dtype": "string"}, {"name": "global_entity", "dtype": "string"}, {"name": "newdoc", "dtype": "string"}, {"name": "newpar", "dtype": "null"}, {"name": "sent_id", "dtype": "string"}, {"name": "speaker", "dtype": "null"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "list": [{"name": "coref_mentions", "list": [{"name": "eid", "dtype": "string"}, {"name": "eid_or_grp", "dtype": "string"}, {"name": "etype", "dtype": "string"}, {"name": "other", "dtype": "string"}, {"name": "span", "dtype": "string"}]}, {"name": "deprel", "dtype": "string"}, {"name": "feats", "dtype": "string"}, {"name": "form", "dtype": "string"}, {"name": "head", "dtype": "int64"}, {"name": "lemma", "dtype": "string"}, {"name": "misc", "dtype": "string"}, {"name": "ord", "dtype": "int64"}, {"name": "upos", "dtype": "string"}, {"name": "xpos", "dtype": "string"}]}]}, {"name": "coref_entities", "list": {"list": [{"name": "eid", "dtype": "string"}, {"name": "eid_or_grp", "dtype": "string"}, {"name": "etype", "dtype": "string"}, {"name": "other", "dtype": "string"}, {"name": "sent_id", "dtype": "string"}, {"name": "span", "dtype": "string"}]}}], "splits": [{"name": "train", "num_bytes": 1258403, "num_examples": 15}, {"name": "validation", "num_bytes": 157231, "num_examples": 2}], "download_size": 257396, "dataset_size": 1415634}, {"config_name": "de_potsdamcc-corefud", "features": [{"name": "doc_id", "dtype": "string"}, {"name": "sentences", "list": [{"name": "comment", "dtype": "string"}, {"name": "global_entity", "dtype": "string"}, {"name": "newdoc", "dtype": "string"}, {"name": "newpar", "dtype": "null"}, {"name": "sent_id", "dtype": "string"}, {"name": "speaker", "dtype": "null"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "list": [{"name": "coref_mentions", "list": [{"name": "eid", "dtype": "string"}, {"name": "eid_or_grp", "dtype": "string"}, {"name": "etype", "dtype": "string"}, {"name": "other", "dtype": "string"}, {"name": "span", "dtype": "string"}]}, {"name": "deprel", "dtype": "string"}, {"name": "feats", "dtype": "string"}, {"name": "form", "dtype": "string"}, {"name": "head", "dtype": "int64"}, {"name": "lemma", "dtype": "string"}, {"name": "misc", "dtype": "string"}, {"name": "ord", "dtype": "int64"}, {"name": "upos", "dtype": "string"}, {"name": "xpos", "dtype": "string"}]}]}, {"name": "coref_entities", "list": {"list": [{"name": "eid", "dtype": "string"}, {"name": "eid_or_grp", "dtype": "string"}, {"name": "etype", "dtype": "string"}, {"name": "other", "dtype": "string"}, {"name": "sent_id", "dtype": "string"}, {"name": "span", "dtype": "string"}]}}], "splits": [{"name": "train", "num_bytes": 7858720, "num_examples": 142}, {"name": "validation", "num_bytes": 989705, "num_examples": 17}], "download_size": 1061886, "dataset_size": 8848425}, {"config_name": "en_gum-corefud", "features": [{"name": "doc_id", "dtype": "string"}, {"name": "sentences", "list": [{"name": "comment", "dtype": 
"string"}, {"name": "global_entity", "dtype": "string"}, {"name": "newdoc", "dtype": "string"}, {"name": "newpar", "dtype": "bool"}, {"name": "sent_id", "dtype": "string"}, {"name": "speaker", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "list": [{"name": "coref_mentions", "list": [{"name": "eid", "dtype": "string"}, {"name": "eid_or_grp", "dtype": "string"}, {"name": "etype", "dtype": "string"}, {"name": "other", "dtype": "string"}, {"name": "span", "dtype": "string"}]}, {"name": "deprel", "dtype": "string"}, {"name": "feats", "dtype": "string"}, {"name": "form", "dtype": "string"}, {"name": "head", "dtype": "int64"}, {"name": "lemma", "dtype": "string"}, {"name": "misc", "dtype": "string"}, {"name": "ord", "dtype": "float64"}, {"name": "upos", "dtype": "string"}, {"name": "xpos", "dtype": "string"}]}]}, {"name": "coref_entities", "list": {"list": [{"name": "eid", "dtype": "string"}, {"name": "eid_or_grp", "dtype": "string"}, {"name": "etype", "dtype": "string"}, {"name": "other", "dtype": "string"}, {"name": "sent_id", "dtype": "string"}, {"name": "span", "dtype": "string"}]}}], "splits": [{"name": "train", "num_bytes": 35327836, "num_examples": 151}, {"name": "validation", "num_bytes": 4651251, "num_examples": 22}], "download_size": 6458421, "dataset_size": 39979087}, {"config_name": "en_parcorfull-corefud", "features": [{"name": "doc_id", "dtype": "string"}, {"name": "sentences", "list": [{"name": "comment", "dtype": "string"}, {"name": "global_entity", "dtype": "string"}, {"name": "newdoc", "dtype": "string"}, {"name": "newpar", "dtype": "null"}, {"name": "sent_id", "dtype": "string"}, {"name": "speaker", "dtype": "null"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "list": [{"name": "coref_mentions", "list": [{"name": "eid", "dtype": "string"}, {"name": "eid_or_grp", "dtype": "string"}, {"name": "etype", "dtype": "string"}, {"name": "other", "dtype": "string"}, {"name": "span", "dtype": "string"}]}, {"name": "deprel", "dtype": "string"}, {"name": "feats", "dtype": "string"}, {"name": "form", "dtype": "string"}, {"name": "head", "dtype": "int64"}, {"name": "lemma", "dtype": "string"}, {"name": "misc", "dtype": "string"}, {"name": "ord", "dtype": "int64"}, {"name": "upos", "dtype": "string"}, {"name": "xpos", "dtype": "string"}]}]}, {"name": "coref_entities", "list": {"list": [{"name": "eid", "dtype": "string"}, {"name": "eid_or_grp", "dtype": "string"}, {"name": "etype", "dtype": "string"}, {"name": "other", "dtype": "string"}, {"name": "sent_id", "dtype": "string"}, {"name": "span", "dtype": "string"}]}}], "splits": [{"name": "train", "num_bytes": 1126075, "num_examples": 15}, {"name": "validation", "num_bytes": 142079, "num_examples": 2}], "download_size": 240468, "dataset_size": 1268154}, {"config_name": "es_ancora-corefud", "features": [{"name": "doc_id", "dtype": "string"}, {"name": "sentences", "list": [{"name": "comment", "dtype": "string"}, {"name": "global_entity", "dtype": "string"}, {"name": "newdoc", "dtype": "string"}, {"name": "newpar", "dtype": "null"}, {"name": "sent_id", "dtype": "string"}, {"name": "speaker", "dtype": "null"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "list": [{"name": "coref_mentions", "list": [{"name": "eid", "dtype": "string"}, {"name": "eid_or_grp", "dtype": "string"}, {"name": "etype", "dtype": "string"}, {"name": "other", "dtype": "string"}, {"name": "span", "dtype": "string"}]}, {"name": "deprel", "dtype": "string"}, {"name": "feats", "dtype": "string"}, {"name": "form", "dtype": 
"string"}, {"name": "head", "dtype": "int64"}, {"name": "lemma", "dtype": "string"}, {"name": "misc", "dtype": "string"}, {"name": "ord", "dtype": "float64"}, {"name": "upos", "dtype": "string"}, {"name": "xpos", "dtype": "string"}]}]}, {"name": "coref_entities", "list": {"list": [{"name": "eid", "dtype": "string"}, {"name": "eid_or_grp", "dtype": "string"}, {"name": "etype", "dtype": "string"}, {"name": "other", "dtype": "string"}, {"name": "sent_id", "dtype": "string"}, {"name": "span", "dtype": "string"}]}}], "splits": [{"name": "train", "num_bytes": 63472833, "num_examples": 1080}, {"name": "validation", "num_bytes": 8035460, "num_examples": 131}], "download_size": 10216005, "dataset_size": 71508293}, {"config_name": "fr_democrat-corefud", "features": [{"name": "doc_id", "dtype": "string"}, {"name": "sentences", "list": [{"name": "comment", "dtype": "string"}, {"name": "global_entity", "dtype": "string"}, {"name": "newdoc", "dtype": "string"}, {"name": "newpar", "dtype": "string"}, {"name": "sent_id", "dtype": "string"}, {"name": "speaker", "dtype": "null"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "list": [{"name": "coref_mentions", "list": [{"name": "eid", "dtype": "string"}, {"name": "eid_or_grp", "dtype": "string"}, {"name": "etype", "dtype": "string"}, {"name": "other", "dtype": "string"}, {"name": "span", "dtype": "string"}]}, {"name": "deprel", "dtype": "string"}, {"name": "feats", "dtype": "string"}, {"name": "form", "dtype": "string"}, {"name": "head", "dtype": "int64"}, {"name": "lemma", "dtype": "string"}, {"name": "misc", "dtype": "string"}, {"name": "ord", "dtype": "int64"}, {"name": "upos", "dtype": "string"}, {"name": "xpos", "dtype": "null"}]}]}, {"name": "coref_entities", "list": {"list": [{"name": "eid", "dtype": "string"}, {"name": "eid_or_grp", "dtype": "string"}, {"name": "etype", "dtype": "string"}, {"name": "other", "dtype": "string"}, {"name": "sent_id", "dtype": "string"}, {"name": "span", "dtype": "string"}]}}], "splits": [{"name": "train", "num_bytes": 31612977, "num_examples": 50}, {"name": "validation", "num_bytes": 4049114, "num_examples": 46}], "download_size": 6646264, "dataset_size": 35662091}, {"config_name": "hu_korkor-corefud", "features": [{"name": "doc_id", "dtype": "string"}, {"name": "sentences", "list": [{"name": "comment", "dtype": "string"}, {"name": "global_entity", "dtype": "string"}, {"name": "newdoc", "dtype": "string"}, {"name": "newpar", "dtype": "null"}, {"name": "sent_id", "dtype": "string"}, {"name": "speaker", "dtype": "null"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "list": [{"name": "coref_mentions", "list": [{"name": "eid", "dtype": "string"}, {"name": "eid_or_grp", "dtype": "string"}, {"name": "etype", "dtype": "string"}, {"name": "other", "dtype": "string"}, {"name": "span", "dtype": "string"}]}, {"name": "deprel", "dtype": "string"}, {"name": "feats", "dtype": "string"}, {"name": "form", "dtype": "string"}, {"name": "head", "dtype": "int64"}, {"name": "lemma", "dtype": "string"}, {"name": "misc", "dtype": "string"}, {"name": "ord", "dtype": "float64"}, {"name": "upos", "dtype": "string"}, {"name": "xpos", "dtype": "string"}]}]}, {"name": "coref_entities", "list": {"list": [{"name": "eid", "dtype": "string"}, {"name": "eid_or_grp", "dtype": "string"}, {"name": "etype", "dtype": "string"}, {"name": "other", "dtype": "string"}, {"name": "sent_id", "dtype": "string"}, {"name": "span", "dtype": "string"}]}}], "splits": [{"name": "train", "num_bytes": 2644257, "num_examples": 76}, {"name": "validation", 
"num_bytes": 348293, "num_examples": 9}], "download_size": 656442, "dataset_size": 2992550}, {"config_name": "hu_szegedkoref-corefud", "features": [{"name": "doc_id", "dtype": "string"}, {"name": "sentences", "list": [{"name": "comment", "dtype": "string"}, {"name": "global_entity", "dtype": "string"}, {"name": "newdoc", "dtype": "string"}, {"name": "newpar", "dtype": "null"}, {"name": "sent_id", "dtype": "string"}, {"name": "speaker", "dtype": "null"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "list": [{"name": "coref_mentions", "list": [{"name": "eid", "dtype": "string"}, {"name": "eid_or_grp", "dtype": "string"}, {"name": "etype", "dtype": "string"}, {"name": "other", "dtype": "string"}, {"name": "span", "dtype": "string"}]}, {"name": "deprel", "dtype": "string"}, {"name": "feats", "dtype": "string"}, {"name": "form", "dtype": "string"}, {"name": "head", "dtype": "int64"}, {"name": "lemma", "dtype": "string"}, {"name": "misc", "dtype": "string"}, {"name": "ord", "dtype": "float64"}, {"name": "upos", "dtype": "string"}, {"name": "xpos", "dtype": "string"}]}]}, {"name": "coref_entities", "list": {"list": [{"name": "eid", "dtype": "string"}, {"name": "eid_or_grp", "dtype": "string"}, {"name": "etype", "dtype": "string"}, {"name": "other", "dtype": "string"}, {"name": "sent_id", "dtype": "string"}, {"name": "span", "dtype": "string"}]}}], "splits": [{"name": "train", "num_bytes": 13240965, "num_examples": 320}, {"name": "validation", "num_bytes": 1564544, "num_examples": 40}], "download_size": 2612405, "dataset_size": 14805509}, {"config_name": "lt_lcc-corefud", "features": [{"name": "doc_id", "dtype": "string"}, {"name": "sentences", "list": [{"name": "comment", "dtype": "string"}, {"name": "global_entity", "dtype": "string"}, {"name": "newdoc", "dtype": "string"}, {"name": "newpar", "dtype": "null"}, {"name": "sent_id", "dtype": "string"}, {"name": "speaker", "dtype": "null"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "list": [{"name": "coref_mentions", "list": [{"name": "eid", "dtype": "string"}, {"name": "eid_or_grp", "dtype": "string"}, {"name": "etype", "dtype": "string"}, {"name": "other", "dtype": "string"}, {"name": "span", "dtype": "string"}]}, {"name": "deprel", "dtype": "string"}, {"name": "feats", "dtype": "string"}, {"name": "form", "dtype": "string"}, {"name": "head", "dtype": "int64"}, {"name": "lemma", "dtype": "string"}, {"name": "misc", "dtype": "string"}, {"name": "ord", "dtype": "int64"}, {"name": "upos", "dtype": "string"}, {"name": "xpos", "dtype": "string"}]}]}, {"name": "coref_entities", "list": {"list": [{"name": "eid", "dtype": "string"}, {"name": "eid_or_grp", "dtype": "string"}, {"name": "etype", "dtype": "string"}, {"name": "other", "dtype": "string"}, {"name": "sent_id", "dtype": "string"}, {"name": "span", "dtype": "string"}]}}], "splits": [{"name": "train", "num_bytes": 4163012, "num_examples": 80}, {"name": "validation", "num_bytes": 466088, "num_examples": 10}], "download_size": 786730, "dataset_size": 4629100}, {"config_name": "no_bokmaalnarc-corefud", "features": [{"name": "doc_id", "dtype": "string"}, {"name": "sentences", "list": [{"name": "comment", "dtype": "string"}, {"name": "global_entity", "dtype": "string"}, {"name": "newdoc", "dtype": "string"}, {"name": "newpar", "dtype": "bool"}, {"name": "sent_id", "dtype": "string"}, {"name": "speaker", "dtype": "null"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "list": [{"name": "coref_mentions", "list": [{"name": "eid", "dtype": "string"}, {"name": "eid_or_grp", 
"dtype": "string"}, {"name": "etype", "dtype": "string"}, {"name": "other", "dtype": "string"}, {"name": "span", "dtype": "string"}]}, {"name": "deprel", "dtype": "string"}, {"name": "feats", "dtype": "string"}, {"name": "form", "dtype": "string"}, {"name": "head", "dtype": "int64"}, {"name": "lemma", "dtype": "string"}, {"name": "misc", "dtype": "string"}, {"name": "ord", "dtype": "int64"}, {"name": "upos", "dtype": "string"}, {"name": "xpos", "dtype": "null"}]}]}, {"name": "coref_entities", "list": {"list": [{"name": "eid", "dtype": "string"}, {"name": "eid_or_grp", "dtype": "string"}, {"name": "etype", "dtype": "string"}, {"name": "other", "dtype": "string"}, {"name": "sent_id", "dtype": "string"}, {"name": "span", "dtype": "string"}]}}], "splits": [{"name": "train", "num_bytes": 29695334, "num_examples": 284}, {"name": "validation", "num_bytes": 3173875, "num_examples": 31}], "download_size": 6903646, "dataset_size": 32869209}, {"config_name": "no_nynorsknarc-corefud", "features": [{"name": "doc_id", "dtype": "string"}, {"name": "sentences", "list": [{"name": "comment", "dtype": "string"}, {"name": "global_entity", "dtype": "string"}, {"name": "newdoc", "dtype": "string"}, {"name": "newpar", "dtype": "bool"}, {"name": "sent_id", "dtype": "string"}, {"name": "speaker", "dtype": "null"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "list": [{"name": "coref_mentions", "list": [{"name": "eid", "dtype": "string"}, {"name": "eid_or_grp", "dtype": "string"}, {"name": "etype", "dtype": "string"}, {"name": "other", "dtype": "string"}, {"name": "span", "dtype": "string"}]}, {"name": "deprel", "dtype": "string"}, {"name": "feats", "dtype": "string"}, {"name": "form", "dtype": "string"}, {"name": "head", "dtype": "int64"}, {"name": "lemma", "dtype": "string"}, {"name": "misc", "dtype": "string"}, {"name": "ord", "dtype": "int64"}, {"name": "upos", "dtype": "string"}, {"name": "xpos", "dtype": "null"}]}]}, {"name": "coref_entities", "list": {"list": [{"name": "eid", "dtype": "string"}, {"name": "eid_or_grp", "dtype": "string"}, {"name": "etype", "dtype": "string"}, {"name": "other", "dtype": "string"}, {"name": "sent_id", "dtype": "string"}, {"name": "span", "dtype": "string"}]}}], "splits": [{"name": "train", "num_bytes": 25271842, "num_examples": 336}, {"name": "validation", "num_bytes": 2565383, "num_examples": 28}], "download_size": 5882279, "dataset_size": 27837225}, {"config_name": "pl_pcc-corefud", "features": [{"name": "doc_id", "dtype": "string"}, {"name": "sentences", "list": [{"name": "comment", "dtype": "string"}, {"name": "global_entity", "dtype": "string"}, {"name": "newdoc", "dtype": "string"}, {"name": "newpar", "dtype": "null"}, {"name": "sent_id", "dtype": "string"}, {"name": "speaker", "dtype": "null"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "list": [{"name": "coref_mentions", "list": [{"name": "eid", "dtype": "string"}, {"name": "eid_or_grp", "dtype": "string"}, {"name": "etype", "dtype": "string"}, {"name": "other", "dtype": "string"}, {"name": "span", "dtype": "string"}]}, {"name": "deprel", "dtype": "string"}, {"name": "feats", "dtype": "string"}, {"name": "form", "dtype": "string"}, {"name": "head", "dtype": "int64"}, {"name": "lemma", "dtype": "string"}, {"name": "misc", "dtype": "string"}, {"name": "ord", "dtype": "float64"}, {"name": "upos", "dtype": "string"}, {"name": "xpos", "dtype": "string"}]}]}, {"name": "coref_entities", "list": {"list": [{"name": "eid", "dtype": "string"}, {"name": "eid_or_grp", "dtype": "string"}, {"name": "etype", 
"dtype": "string"}, {"name": "other", "dtype": "string"}, {"name": "sent_id", "dtype": "string"}, {"name": "span", "dtype": "string"}]}}], "splits": [{"name": "train", "num_bytes": 110387906, "num_examples": 1463}, {"name": "validation", "num_bytes": 14085779, "num_examples": 183}], "download_size": 25508543, "dataset_size": 124473685}, {"config_name": "ru_rucor-corefud", "features": [{"name": "doc_id", "dtype": "string"}, {"name": "sentences", "list": [{"name": "comment", "dtype": "string"}, {"name": "global_entity", "dtype": "string"}, {"name": "newdoc", "dtype": "string"}, {"name": "newpar", "dtype": "null"}, {"name": "sent_id", "dtype": "string"}, {"name": "speaker", "dtype": "null"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "list": [{"name": "coref_mentions", "list": [{"name": "eid", "dtype": "string"}, {"name": "eid_or_grp", "dtype": "string"}, {"name": "etype", "dtype": "string"}, {"name": "other", "dtype": "string"}, {"name": "span", "dtype": "string"}]}, {"name": "deprel", "dtype": "string"}, {"name": "feats", "dtype": "string"}, {"name": "form", "dtype": "string"}, {"name": "head", "dtype": "int64"}, {"name": "lemma", "dtype": "string"}, {"name": "misc", "dtype": "string"}, {"name": "ord", "dtype": "int64"}, {"name": "upos", "dtype": "string"}, {"name": "xpos", "dtype": "null"}]}]}, {"name": "coref_entities", "list": {"list": [{"name": "eid", "dtype": "string"}, {"name": "eid_or_grp", "dtype": "string"}, {"name": "etype", "dtype": "string"}, {"name": "other", "dtype": "string"}, {"name": "sent_id", "dtype": "string"}, {"name": "span", "dtype": "string"}]}}], "splits": [{"name": "train", "num_bytes": 16813596, "num_examples": 145}, {"name": "validation", "num_bytes": 2929201, "num_examples": 18}], "download_size": 3388005, "dataset_size": 19742797}, {"config_name": "tr_itcc-corefud", "features": [{"name": "doc_id", "dtype": "string"}, {"name": "sentences", "list": [{"name": "comment", "dtype": "string"}, {"name": "global_entity", "dtype": "string"}, {"name": "newdoc", "dtype": "string"}, {"name": "newpar", "dtype": "bool"}, {"name": "sent_id", "dtype": "string"}, {"name": "speaker", "dtype": "null"}, {"name": "text", "dtype": "string"}, {"name": "tokens", "list": [{"name": "coref_mentions", "list": [{"name": "eid", "dtype": "string"}, {"name": "eid_or_grp", "dtype": "string"}, {"name": "etype", "dtype": "string"}, {"name": "other", "dtype": "string"}, {"name": "span", "dtype": "string"}]}, {"name": "deprel", "dtype": "string"}, {"name": "feats", "dtype": "string"}, {"name": "form", "dtype": "string"}, {"name": "head", "dtype": "int64"}, {"name": "lemma", "dtype": "string"}, {"name": "misc", "dtype": "string"}, {"name": "ord", "dtype": "int64"}, {"name": "upos", "dtype": "string"}, {"name": "xpos", "dtype": "string"}]}]}, {"name": "coref_entities", "list": {"list": [{"name": "eid", "dtype": "string"}, {"name": "eid_or_grp", "dtype": "string"}, {"name": "etype", "dtype": "string"}, {"name": "other", "dtype": "string"}, {"name": "sent_id", "dtype": "string"}, {"name": "span", "dtype": "string"}]}}], "splits": [{"name": "train", "num_bytes": 5802124, "num_examples": 19}, {"name": "validation", "num_bytes": 675299, "num_examples": 2}], "download_size": 1095019, "dataset_size": 6477423}]} | 2024-01-21T05:35:30+00:00 | [] | [] | TAGS
#license-other #region-us
|
# CorefUD v1.1
- Project: URL
- Data source: URL
## Details
From URL
CorefUD is a collection of previously existing datasets annotated with coreference, which we converted into a common annotation scheme. In total, CorefUD in its current version 1.1 consists of 21 datasets for 13 languages. The datasets are enriched with automatic morphological and syntactic annotations that are fully compliant with the standards of the Universal Dependencies project. All the datasets are stored in the CoNLL-U format, with coreference- and bridging-specific information captured by attribute-value pairs located in the MISC column. The collection is divided into a public edition and a non-public (ÚFAL-internal) edition. The publicly available edition is distributed via LINDAT-CLARIAH-CZ and contains 17 datasets for 12 languages (1 dataset for Catalan, 2 for Czech, 2 for English, 1 for French, 2 for German, 2 for Hungarian, 1 for Lithuanian, 2 for Norwegian, 1 for Polish, 1 for Russian, 1 for Spanish, and 1 for Turkish), excluding the test data. The non-public edition is available internally to ÚFAL members and contains additional 4 datasets for 2 languages (1 dataset for Dutch, and 3 for English), which we are not allowed to distribute due to their original license limitations. It also contains the test data portions for all datasets. When using any of the harmonized datasets, please get acquainted with its license (placed in the same directory as the data) and cite the original data resource too. Compared to the previous version 1.0, the version 1.1 comprises new languages and corpora, namely Hungarian-KorKor, Norwegian-BokmaalNARC, Norwegian-NynorskNARC, and Turkish-ITCC. In addition, the English GUM dataset has been updated to a newer and larger version, and the conversion pipelines for most datasets have been refined (a list of all changes in each dataset can be found in the corresponding README file).
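The feature schema listed in this card's metadata already exposes the harmonized annotations as nested fields (sentences → tokens → `coref_mentions`, plus document-level `coref_entities`). As a rough sketch of how that structure can be consumed with the Hugging Face `datasets` library — the repository id below is a placeholder for wherever this collection is hosted, and `en_gum-corefud` is one of the config names listed in the metadata — iterating over a document might look like this:

```python
from datasets import load_dataset

# "your-org/corefud-1.1" is a placeholder repository id -- replace it with the
# actual Hub id of this dataset; "en_gum-corefud" is one of the configs listed above.
ds = load_dataset("your-org/corefud-1.1", "en_gum-corefud", split="validation")

doc = ds[0]
print(doc["doc_id"])

# Sentence/token level: each token may carry zero or more coreference mentions
# (eid = entity id, span = mention span, etype = entity type).
for sent in doc["sentences"][:3]:
    print(sent["text"])
    for tok in sent["tokens"]:
        for mention in tok["coref_mentions"] or []:
            print(f"  {tok['form']}: eid={mention['eid']} span={mention['span']} etype={mention['etype']}")

# Document level: coref_entities groups mentions into entity clusters.
for entity in doc["coref_entities"][:2]:
    print([(m["sent_id"], m["span"]) for m in entity])
```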
## Licenses
1. Catalan-AnCora: The treebank is licensed under the Creative Commons License Attribution 4.0 International. The complete license text is available at:
URL
1. Czech-PCEDT: Attribution-NonCommercial-ShareAlike 3.0 Unported
1. Czech-PDT: Attribution-NonCommercial-ShareAlike 4.0 International
1. GUM: Annotations are Creative Commons Attribution (CC-BY) version 4.0. Note: Reddit data is excluded from CorefUD due to licensing:
    1. Wikinews/interviews: URL (Source: URL)
    1. WikiVoyage: URL (Source: URL)
    1. WikiHow: URL (Source: URL)
    1. Academic: Multiple sources, all URL
    1. Biographies: URL (Source: URL)
    1. Fiction: URL (Source: URL)
1. English-ParCorFull: Attribution-NonCommercial 4.0 International
1. French-Democrat: Attribution-ShareAlike 4.0 International
1. German-ParCorFull: Attribution-NonCommercial 4.0 International
1. German-PotsdamCC: Attribution-NonCommercial-ShareAlike 4.0 International
1. Hungarian-KorKor: Creative Commons Attribution 4.0 International Public License
1. Hungarian-SzegedKoref: Attribution 4.0 International
1. Lithuanian-LCC: CLARIN-LT PUBLIC END-USER LICENCE (PUB)
1. Norwegian-BokmaalNARC: Attribution-ShareAlike 4.0 International
1. Norwegian-NynorskNARC: Attribution-ShareAlike 4.0 International
1. Polish-PCC: CC Attribution 3.0 Unported
1. Russian-RuCor: Attribution-ShareAlike 4.0 International
1. Spanish-AnCora: Creative Commons License Attribution 4.0 International
1. Turkish-ITCC: Attribution-NonCommercial-ShareAlike 4.0 International
| [
"# CorefUD v1.1\n\n- Project: URL\n- Data source: URL",
"## Details\n\nFrom URL\n\nCorefUD is a collection of previously existing datasets annotated with coreference, which we converted into a common annotation scheme. In total, CorefUD in its current version 1.1 consists of 21 datasets for 13 languages. The datasets are enriched with automatic morphological and syntactic annotations that are fully compliant with the standards of the Universal Dependencies project. All the datasets are stored in the CoNLL-U format, with coreference- and bridging-specific information captured by attribute-value pairs located in the MISC column. The collection is divided into a public edition and a non-public (ÚFAL-internal) edition. The publicly available edition is distributed via LINDAT-CLARIAH-CZ and contains 17 datasets for 12 languages (1 dataset for Catalan, 2 for Czech, 2 for English, 1 for French, 2 for German, 2 for Hungarian, 1 for Lithuanian, 2 for Norwegian, 1 for Polish, 1 for Russian, 1 for Spanish, and 1 for Turkish), excluding the test data. The non-public edition is available internally to ÚFAL members and contains additional 4 datasets for 2 languages (1 dataset for Dutch, and 3 for English), which we are not allowed to distribute due to their original license limitations. It also contains the test data portions for all datasets. When using any of the harmonized datasets, please get acquainted with its license (placed in the same directory as the data) and cite the original data resource too. Compared to the previous version 1.0, the version 1.1 comprises new languages and corpora, namely Hungarian-KorKor, Norwegian-BokmaalNARC, Norwegian-NynorskNARC, and Turkish-ITCC. In addition, the English GUM dataset has been updated to a newer and larger version, and the conversion pipelines for most datasets have been refined (a list of all changes in each dataset can be found in the corresponding README file).",
"## Licenses\n\n1. Catalan-AnCora: The treebank is licensed under the Creative Commons License Attribution 4.0 International. The complete license text is available at:\nURL\n1. Czech-PCEDT: Attribution-NonCommercial-ShareAlike 3.0 Unported\n1. Czech-PDT: Attribution-NonCommercial-ShareAlike 4.0 International\n1. GUM: Anootations are Creative Commons Attribution (CC-BY) version 4.0 Note: reddit data is excluded from CorefUD due to licensing:\n 1. Wikinews/interviews: URL (Source: URL\n 1. WikiVoyage: URL (Source: URL\n 1. WikiHow: URL (Source: URL\n 1. Academic: Multiple sources, all URL\n 1. Biographies: URL (Source: URL\n 1. Fiction: URL (Source: URL\n1. English-ParCorFull: Attribution-NonCommercial 4.0 International\n1. French-Democrat: Attribution-ShareAlike 4.0 International\n1. German-ParCorFull: Attribution-NonCommercial 4.0 International\n1. German-PotsdamCC: Attribution-NonCommercial-ShareAlike 4.0 International\n1. Hungarian-KorKor: Creative Commons Attribution 4.0 International Public License\n1. Hungarian-SzegedKoref: Attribution 4.0 International\n1. Lithuanian-LCC: CLARIN-LT PUBLIC END-USER LICENCE (PUB)\n1. Norwegian-BokmaaINARC: Attribution-ShareAlike 4.0 International\n1. Norwegian-NynorskNARC: Attribution-ShareAlike 4.0 International\n1. Polish-PCC: CC Attribution 3.0 Unported\n1. Russian-RuCor: Attribution-ShareAlike 4.0 International\n1. Spanish-AnCora: Creative Commons License Attribution 4.0 International\n1. Turkish-ITCC: Attribution-NonCommercial-ShareAlike 4.0 International"
] | [
"TAGS\n#license-other #region-us \n",
"# CorefUD v1.1\n\n- Project: URL\n- Data source: URL",
"## Details\n\nFrom URL\n\nCorefUD is a collection of previously existing datasets annotated with coreference, which we converted into a common annotation scheme. In total, CorefUD in its current version 1.1 consists of 21 datasets for 13 languages. The datasets are enriched with automatic morphological and syntactic annotations that are fully compliant with the standards of the Universal Dependencies project. All the datasets are stored in the CoNLL-U format, with coreference- and bridging-specific information captured by attribute-value pairs located in the MISC column. The collection is divided into a public edition and a non-public (ÚFAL-internal) edition. The publicly available edition is distributed via LINDAT-CLARIAH-CZ and contains 17 datasets for 12 languages (1 dataset for Catalan, 2 for Czech, 2 for English, 1 for French, 2 for German, 2 for Hungarian, 1 for Lithuanian, 2 for Norwegian, 1 for Polish, 1 for Russian, 1 for Spanish, and 1 for Turkish), excluding the test data. The non-public edition is available internally to ÚFAL members and contains additional 4 datasets for 2 languages (1 dataset for Dutch, and 3 for English), which we are not allowed to distribute due to their original license limitations. It also contains the test data portions for all datasets. When using any of the harmonized datasets, please get acquainted with its license (placed in the same directory as the data) and cite the original data resource too. Compared to the previous version 1.0, the version 1.1 comprises new languages and corpora, namely Hungarian-KorKor, Norwegian-BokmaalNARC, Norwegian-NynorskNARC, and Turkish-ITCC. In addition, the English GUM dataset has been updated to a newer and larger version, and the conversion pipelines for most datasets have been refined (a list of all changes in each dataset can be found in the corresponding README file).",
"## Licenses\n\n1. Catalan-AnCora: The treebank is licensed under the Creative Commons License Attribution 4.0 International. The complete license text is available at:\nURL\n1. Czech-PCEDT: Attribution-NonCommercial-ShareAlike 3.0 Unported\n1. Czech-PDT: Attribution-NonCommercial-ShareAlike 4.0 International\n1. GUM: Anootations are Creative Commons Attribution (CC-BY) version 4.0 Note: reddit data is excluded from CorefUD due to licensing:\n 1. Wikinews/interviews: URL (Source: URL\n 1. WikiVoyage: URL (Source: URL\n 1. WikiHow: URL (Source: URL\n 1. Academic: Multiple sources, all URL\n 1. Biographies: URL (Source: URL\n 1. Fiction: URL (Source: URL\n1. English-ParCorFull: Attribution-NonCommercial 4.0 International\n1. French-Democrat: Attribution-ShareAlike 4.0 International\n1. German-ParCorFull: Attribution-NonCommercial 4.0 International\n1. German-PotsdamCC: Attribution-NonCommercial-ShareAlike 4.0 International\n1. Hungarian-KorKor: Creative Commons Attribution 4.0 International Public License\n1. Hungarian-SzegedKoref: Attribution 4.0 International\n1. Lithuanian-LCC: CLARIN-LT PUBLIC END-USER LICENCE (PUB)\n1. Norwegian-BokmaaINARC: Attribution-ShareAlike 4.0 International\n1. Norwegian-NynorskNARC: Attribution-ShareAlike 4.0 International\n1. Polish-PCC: CC Attribution 3.0 Unported\n1. Russian-RuCor: Attribution-ShareAlike 4.0 International\n1. Spanish-AnCora: Creative Commons License Attribution 4.0 International\n1. Turkish-ITCC: Attribution-NonCommercial-ShareAlike 4.0 International"
] |
1dada8c7a0066a79d2b87b6b491daa777367c458 |
# The VoxTube Dataset
The [VoxTube](https://idrnd.github.io/VoxTube) dataset is a multilingual speaker recognition dataset collected from **CC BY 4.0** YouTube videos. It includes 5.040 speaker identities pronouncing ~4M utterances in 10+ languages. For details on the underlying data collection and filtering approach, please refer to [[1]](#citation).
## Dataset Structure
### Data Instances
A typical data point comprises the audio signal itself, together with additional labels such as speaker id, session id (*video_id*), language, and gender.
```
{'upload_date': datetime.date(2018, 5, 2),
'segment_id': 11,
'video_id': 'vIpK78CL1so',
'channel_id': 'UC7rMVNUr7318I0MKumPbIKA',
'language': 'english',
'gender': 'male',
'spk_id': 684,
'spk_estim_age': 23.5572452545166,
'spk_estim_age_mae': 3.6162896156311035,
'audio': {'path': 'UC7rMVNUr7318I0MKumPbIKA/vIpK78CL1so/segment_11.mp3',
'array': array([-0.00986903, -0.01569703, -0.02005875, ..., -0.00247505,
-0.01329966, -0.01462782]),
'sampling_rate': 16000}}
```
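For quick exploration without downloading the full ~220 GB archive, a data point like the one above can be pulled in streaming mode with the Hugging Face `datasets` library. This is only a sketch: the dataset is gated, so you need to be logged in and to have accepted the access terms on the dataset page first.

```python
from datasets import load_dataset

# Streaming avoids materializing the whole dataset locally; the repo is gated,
# so run `huggingface-cli login` and accept the access terms beforehand.
ds = load_dataset("voice-is-cool/voxtube", split="train", streaming=True)

for example in ds.take(3):
    audio = example["audio"]                     # decoded on the fly
    duration = len(audio["array"]) / audio["sampling_rate"]
    print(example["spk_id"], example["video_id"], example["language"],
          example["gender"], f"{duration:.1f}s")
```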
### Data Fields
- **channel_id**: YouTube channel ID from which speaker ID (`spk_id`) is derived.
- **video_id**: YouTube video ID, or session for speaker.
- **segment_id**: ID of the chunk of the video's audio that passed the filtering process.
- **upload_date**: Date object representing the date when the video was uploaded to YouTube.
- **language**: Language of the channel / speaker.
- **gender**: Gender of the channel / speaker.
- **spk_id**: Integer speaker ID inferred from **channel_id** (see the grouping sketch after this list).
- **spk_estim_age**: Estimated speaker age (approximate), obtained from voice-based automatic age estimation and calibrated using the **upload_date** of all videos for a given channel.
- **spk_estim_age_mae**: MAE of **spk_estim_age** (can be treated as a confidence measure).
- **audio**: audio signal of a 4-second *mp3* segment from **channel_id/video_id**
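As a rough illustration of how the `spk_id` / `video_id` hierarchy can be used, the sketch below counts sessions per speaker on a small streamed sample — for example to pick speakers with multiple sessions when building cross-session verification trials. Note that iterating in streaming mode still downloads the sampled audio, so keep the sample small.

```python
from collections import defaultdict
from datasets import load_dataset

ds = load_dataset("voice-is-cool/voxtube", split="train", streaming=True)

# Collect which sessions (videos) each speaker appears in, using only metadata fields.
sessions_per_speaker = defaultdict(set)
for ex in ds.take(1000):                          # small sample for illustration
    sessions_per_speaker[ex["spk_id"]].add(ex["video_id"])

multi_session = [spk for spk, vids in sessions_per_speaker.items() if len(vids) >= 2]
print(f"{len(multi_session)} speakers with >=2 sessions in this sample")
```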
## Dataset description
### Main statistics
| Dataset properties | Stats |
|:-----------------------------|:----------|
| # of POI | 5.040 |
| # of videos | 306.248 |
| # of segments | 4.439.888 |
| # of hours | 4.933 |
| Avg # of videos per POI | 61 |
| Avg # of segments per POI | 881 |
| Avg length of segments (sec) | 4 |
### Language and gender distributions

Language and gender labels for each speaker are available in the original repo [here](https://github.com/IDRnD/VoxTube/blob/main/resources/language_gender_meta.csv).
## License
The dataset is licensed under **CC BY-NC-SA 4.0**, please see the complete version of the [license](LICENSE).
Please also note that the provided metadata is accurate as of February 2023, and the corresponding CC BY 4.0 video licenses were valid on that date. ID R&D Inc. is not responsible for videos whose license type has since changed or that have been deleted from the YouTube platform. If you want your channel's metadata to be removed from the dataset, please [contact ID R&D Inc.](https://www.idrnd.ai/contact-us) with the subject *"VoxTube change request"*.
## Development
See the official [VoxTube repository](https://github.com/IDRnD/VoxTube) for opening issues.
## Citation
Please cite the paper below if you make use of the dataset:
```
@inproceedings{yakovlev23_interspeech,
author={Ivan Yakovlev and Anton Okhotnikov and Nikita Torgashov and Rostislav Makarov and Yuri Voevodin and Konstantin Simonchik},
title={{VoxTube: a multilingual speaker recognition dataset}},
year=2023,
booktitle={Proc. INTERSPEECH 2023},
pages={2238--2242},
doi={10.21437/Interspeech.2023-1083}
}
```
| voice-is-cool/voxtube | [
"task_categories:audio-classification",
"size_categories:1M<n<10M",
"language:en",
"language:ru",
"language:es",
"language:pt",
"language:fr",
"language:ar",
"language:it",
"language:de",
"language:tr",
"language:nl",
"language:ko",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2024-01-11T20:08:17+00:00 | {"language": ["en", "ru", "es", "pt", "fr", "ar", "it", "de", "tr", "nl", "ko"], "license": "cc-by-nc-sa-4.0", "size_categories": ["1M<n<10M"], "task_categories": ["audio-classification"], "pretty_name": "VoxTube", "dataset_info": {"homepage": "https://idrnd.github.io/VoxTube/", "description": "VoxTube - a multilingual speaker recognition dataset", "license": "CC-BY-NC-SA-4.0", "citation": "@inproceedings{yakovlev23_interspeech, author={Ivan Yakovlev and Anton Okhotnikov and Nikita Torgashov and Rostislav Makarov and Yuri Voevodin and Konstantin Simonchik}, title={{VoxTube: a multilingual speaker recognition dataset}}, year=2023, booktitle={Proc. INTERSPEECH 2023}, pages={2238--2242}, doi={10.21437/Interspeech.2023-1083} }", "features": [{"name": "upload_date", "dtype": "date32"}, {"name": "segment_id", "dtype": "int32"}, {"name": "video_id", "dtype": "string"}, {"name": "channel_id", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "spk_id", "dtype": "int32"}, {"name": "spk_estim_age", "dtype": "float32"}, {"name": "spk_estim_age_mae", "dtype": "float32"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}], "splits": [{"name": "train", "num_bytes": 222149986832.446, "num_examples": 4459754}], "download_size": 220167447157, "dataset_size": 222149986832.446}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "extra_gated_fields": {"Name": "text", "Affiliation": "text", "Email": "text", "I understand the applicability and accept the limitations of CC-BY-NC-SA license of this dataset that NO commercial usage is allowed": "checkbox", "By clicking on \"Access repository\" below, I agree to not attempt to determine the identity of speakers in the dataset": "checkbox"}} | 2024-02-13T20:28:10+00:00 | [] | [
"en",
"ru",
"es",
"pt",
"fr",
"ar",
"it",
"de",
"tr",
"nl",
"ko"
] | TAGS
#task_categories-audio-classification #size_categories-1M<n<10M #language-English #language-Russian #language-Spanish #language-Portuguese #language-French #language-Arabic #language-Italian #language-German #language-Turkish #language-Dutch #language-Korean #license-cc-by-nc-sa-4.0 #region-us
| The VoxTube Dataset
===================
The VoxTube is a multilingual speaker recognition dataset collected from the CC BY 4.0 YouTube videos. It includes 5.040 speaker identities pronouncing ~4M utterances in 10+ languages. For the underlying data collection and filtering approach details please refer to [[1]](#citation).
Dataset Structure
-----------------
### Data Instances
A typical data point comprises the audio signal itself, with additional labels like speaker id / session id (*video\_id*) / language / gender etc.
### Data Fields
* channel\_id: YouTube channel ID from which speaker ID ('spk\_id') is derived.
* video\_id: YouTube video ID, or session for speaker.
* segment\_id: ID of chunk of video's audio, that passed filtration process.
* upload\_date: Date time object representing the date when video was uploaded to YouTube.
* language: Language of the channel / speaker.
* gender: Gender of the channel / speaker.
* spk\_id: Inferred integer speaker ID from channel\_id.
* spk\_estim\_age: Label of speaker age (not accurate) based on voice-based automatic age estimation & calibrated based on the upload\_date of all videos for a given channel.
* spk\_estim\_age\_mae: MAE of spk\_estim\_age (might be considered as confidence).
* audio: audio signal of a 4 seconds *mp3* segment from channel\_id/video\_id
Dataset description
-------------------
### Main statistics
### Language and gender distributions
!Distributions
Language and gender labels of each speaker are available in original repo here.
License
-------
The dataset is licensed under CC BY-NC-SA 4.0, please see the complete version of the license.
Please also note that the provided metadata is relevant on the February 2023 and the corresponding CC BY 4.0 video licenses are valid on that date. ID R&D Inc. is not responsible for changed video license type or if the video was deleted from the YouTube platform. If you want your channel meta to be deleted from the dataset, please contact ID R&D Inc. with a topic *"VoxTube change request"*.
Development
-----------
Official repository live repository for opening issues.
Please cite the paper below if you make use of the dataset:
''''
| [
"### Data Instances\n\n\nA typical data point comprises the audio signal iself, with additional labels like speaker id / session id (*video\\_id*) / language / gender etc.",
"### Data Fields\n\n\n* channel\\_id: YouTube channel ID from which speaker ID ('spk\\_id') is derived.\n* video\\_id: YouTube video ID, or session for speaker.\n* segment\\_id: ID of chunk of video's audio, that passed filtration process.\n* upload\\_date: Date time object representing the date when video was uploaded to YouTube.\n* language: Language of the channel / speaker.\n* gender: Gender of the channel / speaker.\n* spk\\_id: Infered integer speaker ID from channel\\_id.\n* spk\\_estim\\_age: Label of speaker age (not accurate) based on voice-based automatic age estimation & calibrated based on the upload\\_date of all videos for a given channel.\n* spk\\_estim\\_age\\_mae: MAE of spk\\_estim\\_age (might be considered as confidence).\n* audio: audio signal of a 4 seconds *mp3* segment from channel\\_id/video\\_id\n\n\nDataset description\n-------------------",
"### Main statistics",
"### Language and gender distributions\n\n\n!Distributions\n\n\nLanguage and gender labels of each speaker are available in original repo here.\n\n\nLicense\n-------\n\n\nThe dataset is licensed under CC BY-NC-SA 4.0, please see the complete version of the license.\n\n\nPlease also note that the provided metadata is relevant on the February 2023 and the corresponding CC BY 4.0 video licenses are valid on that date. ID R&D Inc. is not responsible for changed video license type or if the video was deleted from the YouTube platform. If you want your channel meta to be deleted from the dataset, please contact ID R&D Inc. with a topic *\"VoxTube change request\"*.\n\n\nDevelopment\n-----------\n\n\nOfficial repository live repository for opening issues.\n\n\nPlease cite the paper below if you make use of the dataset:\n\n\n''''"
] | [
"TAGS\n#task_categories-audio-classification #size_categories-1M<n<10M #language-English #language-Russian #language-Spanish #language-Portuguese #language-French #language-Arabic #language-Italian #language-German #language-Turkish #language-Dutch #language-Korean #license-cc-by-nc-sa-4.0 #region-us \n",
"### Data Instances\n\n\nA typical data point comprises the audio signal iself, with additional labels like speaker id / session id (*video\\_id*) / language / gender etc.",
"### Data Fields\n\n\n* channel\\_id: YouTube channel ID from which speaker ID ('spk\\_id') is derived.\n* video\\_id: YouTube video ID, or session for speaker.\n* segment\\_id: ID of chunk of video's audio, that passed filtration process.\n* upload\\_date: Date time object representing the date when video was uploaded to YouTube.\n* language: Language of the channel / speaker.\n* gender: Gender of the channel / speaker.\n* spk\\_id: Infered integer speaker ID from channel\\_id.\n* spk\\_estim\\_age: Label of speaker age (not accurate) based on voice-based automatic age estimation & calibrated based on the upload\\_date of all videos for a given channel.\n* spk\\_estim\\_age\\_mae: MAE of spk\\_estim\\_age (might be considered as confidence).\n* audio: audio signal of a 4 seconds *mp3* segment from channel\\_id/video\\_id\n\n\nDataset description\n-------------------",
"### Main statistics",
"### Language and gender distributions\n\n\n!Distributions\n\n\nLanguage and gender labels of each speaker are available in original repo here.\n\n\nLicense\n-------\n\n\nThe dataset is licensed under CC BY-NC-SA 4.0, please see the complete version of the license.\n\n\nPlease also note that the provided metadata is relevant on the February 2023 and the corresponding CC BY 4.0 video licenses are valid on that date. ID R&D Inc. is not responsible for changed video license type or if the video was deleted from the YouTube platform. If you want your channel meta to be deleted from the dataset, please contact ID R&D Inc. with a topic *\"VoxTube change request\"*.\n\n\nDevelopment\n-----------\n\n\nOfficial repository live repository for opening issues.\n\n\nPlease cite the paper below if you make use of the dataset:\n\n\n''''"
] |
df66aa34ec69b0375607708660811bba9cf7cac0 |
# Dataset of kasumi/鬼怒川カスミ/霞 (Blue Archive)
This is the dataset of kasumi/鬼怒川カスミ/霞 (Blue Archive), containing 403 images and their tags.
The core tags of this character are `long_hair, horns, ahoge, halo, black_hair, bangs, hair_intakes, yellow_eyes, tail, demon_horns, very_long_hair, fang`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 403 | 627.85 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kasumi_bluearchive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 403 | 294.21 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kasumi_bluearchive/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1088 | 698.65 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kasumi_bluearchive/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 403 | 523.15 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kasumi_bluearchive/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1088 | 1.06 GiB | [Download](https://huggingface.co/datasets/CyberHarem/kasumi_bluearchive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kasumi_bluearchive',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 12 |  |  |  |  |  | 1girl, black_shorts, looking_at_viewer, open_mouth, solo, white_coat, blush, collared_shirt, simple_background, sleeves_past_fingers, white_background, labcoat, long_sleeves, open_coat, smile |
| 1 | 8 |  |  |  |  |  | 1girl, blush, collared_shirt, simple_background, smile, solo, white_background, white_coat, labcoat, looking_at_viewer, open_mouth, parted_bangs, black_shorts, sleeves_past_fingers, collarbone, skirt |
| 2 | 16 |  |  |  |  |  | 1girl, barefoot, looking_at_viewer, smile, solo, toes, collared_shirt, labcoat, white_coat, sitting, black_shorts, open_mouth, soles, white_background, blush, sleeves_past_fingers, long_sleeves, foot_focus, foreshortening, simple_background, smug, parted_bangs |
| 3 | 9 |  |  |  |  |  | 1girl, blush, collared_shirt, simple_background, white_background, white_coat, open_mouth, smile, upper_body, brown_hair, long_sleeves, sleeves_past_fingers, solo, looking_at_viewer, closed_eyes |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_shorts | looking_at_viewer | open_mouth | solo | white_coat | blush | collared_shirt | simple_background | sleeves_past_fingers | white_background | labcoat | long_sleeves | open_coat | smile | parted_bangs | collarbone | skirt | barefoot | toes | sitting | soles | foot_focus | foreshortening | smug | upper_body | brown_hair | closed_eyes |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:--------------------|:-------------|:-------|:-------------|:--------|:-----------------|:--------------------|:-----------------------|:-------------------|:----------|:---------------|:------------|:--------|:---------------|:-------------|:--------|:-----------|:-------|:----------|:--------|:-------------|:-----------------|:-------|:-------------|:-------------|:--------------|
| 0 | 12 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | |
| 1 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | | | X | X | X | X | | | | | | | | | | |
| 2 | 16 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | | X | X | | | X | X | X | X | X | X | X | | | |
| 3 | 9 |  |  |  |  |  | X | | X | X | X | X | X | X | X | X | X | | X | | X | | | | | | | | | | | X | X | X |
| CyberHarem/kasumi_bluearchive | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-11T21:15:15+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-11T22:46:18+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of kasumi/鬼怒川カスミ/霞 (Blue Archive)
=========================================
This is the dataset of kasumi/鬼怒川カスミ/霞 (Blue Archive), containing 403 images and their tags.
The core tags of this character are 'long\_hair, horns, ahoge, halo, black\_hair, bangs, hair\_intakes, yellow\_eyes, tail, demon\_horns, very\_long\_hair, fang', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
d17de9df19a5a8596c7be48c1f361364f65a0813 |
# Dataset of kikyou/桐生キキョウ/桔梗 (Blue Archive)
This is the dataset of kikyou/桐生キキョウ/桔梗 (Blue Archive), containing 418 images and their tags.
The core tags of this character are `animal_ears, black_hair, short_hair, bangs, halo, cat_ears, black_eyes, tail, cat_tail, cat_girl, animal_ear_fluff`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 418 | 772.75 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kikyou_bluearchive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 418 | 344.87 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kikyou_bluearchive/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1083 | 775.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kikyou_bluearchive/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 418 | 631.71 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kikyou_bluearchive/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1083 | 1.26 GiB | [Download](https://huggingface.co/datasets/CyberHarem/kikyou_bluearchive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kikyou_bluearchive',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 8 |  |  |  |  |  | 1girl, black_choker, blue_neckerchief, long_sleeves, looking_at_viewer, pleated_skirt, serafuku, solo, white_shirt, simple_background, white_background, closed_mouth, black_skirt, blue_skirt, black_sailor_collar, blue_sailor_collar, blush, bob_cut |
| 1 | 5 |  |  |  |  |  | 1girl, black_choker, black_nails, black_skirt, blue_neckerchief, closed_mouth, long_sleeves, nail_polish, pleated_skirt, serafuku, simple_background, solo, white_shirt, japanese_clothes, looking_at_viewer, white_background, black_sailor_collar, ribbon, blue_sailor_collar, bob_cut, cowboy_shot |
| 2 | 12 |  |  |  |  |  | 1girl, blue_neckerchief, serafuku, solo, upper_body, black_choker, looking_at_viewer, neck_ribbon, black_ribbon, closed_mouth, white_shirt, black_sailor_collar, long_sleeves, simple_background, white_background, black_nails, nail_polish, blue_sailor_collar, ribbon_choker |
| 3 | 5 |  |  |  |  |  | 1girl, barefoot, black_nails, long_sleeves, serafuku, sitting, solo, toes, white_shirt, white_socks, blue_sailor_collar, looking_at_viewer, pleated_skirt, single_sock, toenail_polish, bare_legs, black_choker, blue_neckerchief, blue_skirt, closed_mouth, full_body, holding_book, no_shoes, white_background, black_skirt, foot_focus, indoors, simple_background, soles |
| 4 | 5 |  |  |  |  |  | 1girl, black_choker, collarbone, cowboy_shot, looking_at_viewer, navel, solo, stomach, blush, cleavage, closed_mouth, medium_breasts, simple_background, white_background, sweatdrop, arm_behind_back, bare_shoulders, black_nails, japanese_clothes, side-tie_bikini_bottom, standing |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | black_choker | blue_neckerchief | long_sleeves | looking_at_viewer | pleated_skirt | serafuku | solo | white_shirt | simple_background | white_background | closed_mouth | black_skirt | blue_skirt | black_sailor_collar | blue_sailor_collar | blush | bob_cut | black_nails | nail_polish | japanese_clothes | ribbon | cowboy_shot | upper_body | neck_ribbon | black_ribbon | ribbon_choker | barefoot | sitting | toes | white_socks | single_sock | toenail_polish | bare_legs | full_body | holding_book | no_shoes | foot_focus | indoors | soles | collarbone | navel | stomach | cleavage | medium_breasts | sweatdrop | arm_behind_back | bare_shoulders | side-tie_bikini_bottom | standing |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:-------------------|:---------------|:--------------------|:----------------|:-----------|:-------|:--------------|:--------------------|:-------------------|:---------------|:--------------|:-------------|:----------------------|:---------------------|:--------|:----------|:--------------|:--------------|:-------------------|:---------|:--------------|:-------------|:--------------|:---------------|:----------------|:-----------|:----------|:-------|:--------------|:--------------|:-----------------|:------------|:------------|:---------------|:-----------|:-------------|:----------|:--------|:-------------|:--------|:----------|:-----------|:-----------------|:------------|:------------------|:-----------------|:-------------------------|:-----------|
| 0 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | | X | X | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 12 |  |  |  |  |  | X | X | X | X | X | | X | X | X | X | X | X | | | X | X | | | X | X | | | | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | X | | | X | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | |
| 4 | 5 |  |  |  |  |  | X | X | | | X | | | X | | X | X | X | | | | | X | | X | | X | | X | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X |
| CyberHarem/kikyou_bluearchive | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-11T21:39:07+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-11T23:22:51+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of kikyou/桐生キキョウ/桔梗 (Blue Archive)
==========================================
This is the dataset of kikyou/桐生キキョウ/桔梗 (Blue Archive), containing 418 images and their tags.
The core tags of this character are 'animal\_ears, black\_hair, short\_hair, bangs, halo, cat\_ears, black\_eyes, tail, cat\_tail, cat\_girl, animal\_ear\_fluff', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
b5ab80c621cb0d0bbcb06710daea0d420505754f | # TL;DR SFT Dataset for OpenAI's [Summarize from Feedback](https://openai.com/blog/summarization/) task
The dataset is directly taken from https://github.com/openai/summarize-from-feedback/tree/700967448d10004279f138666442bf1497d0e705#reddit-tldr-dataset
These columns are taken directly from the aforementioned dataset:
* **id**: unique identifier for the post
* **subreddit**: subreddit the post was taken from
* **title**: title of the post
* **post**: body of the post
* **summary**: summary of the post
* **reference_response**: reference response for the post
These columns are added by this preprocessing script:
* **query**: length-limited query for summarization: OAI pre-processes the main text (title + subreddit + post), ensuring it has only 512 tokens; if the main text is too long, then it tries to truncate at the last `\n`. If it's too short it pads the main text ([summarize_from_feedback/tasks.py#L98-L165](https://github.com/openai/summarize-from-feedback/blob/700967448d10004279f138666442bf1497d0e705/summarize_from_feedback/tasks.py#L98-L165)). Padding is either a space or the `[PAD]` token (see Args below and the preprocessing sketch after this list).
* **query_token**: tokenized version of `query`
* **reference_response_token**: tokenized version of `reference_response`
* **reference_response_token_len**: length of `reference_response_token`
* **query_reference_response**: concatenation of `query.strip()` and `reference_response`
* **query_reference_response_token**: tokenized version of `query_reference_response`, up to `max_sft_query_response_length` tokens
* **query_reference_response_token_len**: length of `query_reference_response_token`
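To make the query construction concrete, here is a minimal sketch of the logic described above: format the post with the TL;DR template, truncate the post at the last newline while the tokenized query exceeds 512 tokens, and left-pad shorter queries with the `[PAD]` token. It is a simplified illustration of the referenced `tasks.py` logic, not the exact OpenAI implementation; the tokenizer follows the `base_model` listed in the Args below.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-1b-deduped")
PAD_TOKEN_ID = 50277   # [PAD] id, as listed in the Args below
QUERY_LENGTH = 512
FORMAT_STR = "SUBREDDIT: r/{subreddit}\n\nTITLE: {title}\n\nPOST: {post}\n\nTL;DR:"

def build_query_tokens(subreddit: str, title: str, post: str) -> list:
    """Simplified sketch: truncate the post at newlines until the formatted
    query fits, then left-pad the token sequence to QUERY_LENGTH."""
    while True:
        tokens = tokenizer.encode(
            FORMAT_STR.format(subreddit=subreddit, title=title, post=post)
        )
        if len(tokens) <= QUERY_LENGTH or not post:
            break
        cut = post.rfind("\n")                 # drop the last paragraph and retry
        post = post[:cut] if cut != -1 else ""
    return [PAD_TOKEN_ID] * max(0, QUERY_LENGTH - len(tokens)) + tokens
```

The `query_token` column stores this padded sequence, and `query_reference_response_token` appends the tokenized reference summary, capped at `max_sft_query_response_length` (562 tokens in the Args below).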
# Args
```python
{'base_model': 'EleutherAI/pythia-1b-deduped',
'check_length_correctness': True,
'cnndm_params': TaskQueryHParams(length=1919,
format_str='Article:\n{article}\n\nTL;DR:\n',
truncate_field='article',
truncate_text='\n',
padding='pad_token',
pad_token=[50277],
pad_side='left',
max_sft_response_length=None,
max_sft_query_response_length=None,
max_rm_response_length=155,
max_rm_query_response_length=2021),
'debug': False,
'hf_entity': 'cleanrl',
'push_to_hub': True,
'tldr_params': TaskQueryHParams(length=512,
format_str='SUBREDDIT: r/{subreddit}\n'
'\n'
'TITLE: {title}\n'
'\n'
'POST: {post}\n'
'\n'
'TL;DR:',
truncate_field='post',
truncate_text='\n',
padding='pad_token',
pad_token=[50277],
pad_side='left',
max_sft_response_length=53,
max_sft_query_response_length=562,
max_rm_response_length=169,
max_rm_query_response_length=638)}
```
| cleanrl/summarize_from_feedback_tldr_3_filtered_oai_preprocessing_1705009345 | [
"region:us"
] | 2024-01-11T21:44:41+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "subreddit", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "post", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "query_token", "sequence": "int64"}, {"name": "query", "dtype": "string"}, {"name": "reference_response", "dtype": "string"}, {"name": "reference_response_token", "sequence": "int64"}, {"name": "reference_response_token_len", "dtype": "int64"}, {"name": "query_reference_response", "dtype": "string"}, {"name": "query_reference_response_token", "sequence": "int64"}, {"name": "query_reference_response_token_response_label", "sequence": "int64"}, {"name": "query_reference_response_token_len", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2125689249, "num_examples": 116722}, {"name": "validation", "num_bytes": 117437271, "num_examples": 6447}, {"name": "test", "num_bytes": 119410966, "num_examples": 6553}], "download_size": 562087836, "dataset_size": 2362537486}} | 2024-01-11T21:45:04+00:00 | [] | [] | TAGS
#region-us
| # TL;DR SFT Dataset for OpenAI's Summarize from Feedback task
The dataset is directly taken from URL
These columns are taken directly from the aforementioned dataset:
* id: unique identifier for the post
* subreddit: subreddit the post was taken from
* title: title of the post
* post: body of the post
* summary: summary of the post
* reference_response: reference response for the post
These columns are added by this preprocessing script:
* query: length-limited query for summarization: OAI pre-processes the main text (title + subreddit + post), ensuring it has only 512 tokens; if the main text is too long, then it tries to truncate at the last '\n'. If it's too short it pads the main text (summarize_from_feedback/URL#L98-L165). Padding is either a space or the '[PAD]' token (see Args below).
* query_token: tokenized version of 'query'
* reference_response_token: tokenized version of 'reference_response'
* reference_response_token_len: length of 'reference_response_token'
* query_reference_response: concatenation of 'query.strip()' and 'reference_response'
* query_reference_response_token: tokenized version of 'query_reference_response', up to 'max_sft_query_response_length' tokens
* query_reference_response_token_len: length of 'query_reference_response_token'
# Args
| [
"# TL;DR SFT Dataset for OpenAI's Summarize from Feedback task\n\nThe dataset is directly taken from URL\n\nThese columns are taken directly from the aforementioned dataset:\n\n* id: unique identifier for the post\n* subreddit: subreddit the post was taken from\n* title: title of the post\n* post: body of the post\n* summary: summary of the post\n* reference_response: reference response for the post\n\nThese columns are added by this preprocessing script:\n* query: length-limited query for summarization: OAI pre-processes the main text (title + subreddit + post), ensuring it has only 512 tokens; if the main text is too long, then it tries to truncate at the last '\n'. If it's too short it pads the main text (summarize_from_feedback/URL#L98-L165). Padding is either space or '[PAD]' token (see Args below).\n* query_token: tokenized version of 'query'\n* reference_response_token: tokenized version of 'reference_response'\n* reference_response_token_len: length of 'reference_response_token'\n* query_reference_response: concatenation of 'URL()' and 'reference_response'\n* query_reference_response_token: tokenized version of 'query_reference_response', up to 'max_sft_query_response_length' tokens\n* query_reference_response_token_len: length of 'query_reference_response_token'",
"# Args"
] | [
"TAGS\n#region-us \n",
"# TL;DR SFT Dataset for OpenAI's Summarize from Feedback task\n\nThe dataset is directly taken from URL\n\nThese columns are taken directly from the aforementioned dataset:\n\n* id: unique identifier for the post\n* subreddit: subreddit the post was taken from\n* title: title of the post\n* post: body of the post\n* summary: summary of the post\n* reference_response: reference response for the post\n\nThese columns are added by this preprocessing script:\n* query: length-limited query for summarization: OAI pre-processes the main text (title + subreddit + post), ensuring it has only 512 tokens; if the main text is too long, then it tries to truncate at the last '\n'. If it's too short it pads the main text (summarize_from_feedback/URL#L98-L165). Padding is either space or '[PAD]' token (see Args below).\n* query_token: tokenized version of 'query'\n* reference_response_token: tokenized version of 'reference_response'\n* reference_response_token_len: length of 'reference_response_token'\n* query_reference_response: concatenation of 'URL()' and 'reference_response'\n* query_reference_response_token: tokenized version of 'query_reference_response', up to 'max_sft_query_response_length' tokens\n* query_reference_response_token_len: length of 'query_reference_response_token'",
"# Args"
] |
f5fcb4af8eeee61815841fba6f1c4e697d9675ba | # Dataset Card for "summarize_from_feedback_oai_preprocessing_1705009345"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cleanrl/summarize_from_feedback_oai_preprocessing_1705009345 | [
"region:us"
] | 2024-01-11T21:49:09+00:00 | {"dataset_info": {"features": [{"name": "info", "struct": [{"name": "id", "dtype": "string"}, {"name": "post", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "subreddit", "dtype": "string"}, {"name": "site", "dtype": "string"}, {"name": "article", "dtype": "string"}]}, {"name": "summaries", "list": [{"name": "text", "dtype": "string"}, {"name": "policy", "dtype": "string"}, {"name": "note", "dtype": "string"}]}, {"name": "choice", "dtype": "int32"}, {"name": "worker", "dtype": "string"}, {"name": "batch", "dtype": "string"}, {"name": "split", "dtype": "string"}, {"name": "extra", "struct": [{"name": "confidence", "dtype": "int32"}]}, {"name": "query_token", "sequence": "int64"}, {"name": "query", "dtype": "string"}, {"name": "response0", "dtype": "string"}, {"name": "response0_token", "sequence": "int64"}, {"name": "response0_token_len", "dtype": "int64"}, {"name": "response0_policy", "dtype": "string"}, {"name": "query_response0", "dtype": "string"}, {"name": "query_response0_token", "sequence": "int64"}, {"name": "query_response0_token_len", "dtype": "int64"}, {"name": "query_response0_token_response_label", "sequence": "int64"}, {"name": "response1", "dtype": "string"}, {"name": "response1_token", "sequence": "int64"}, {"name": "response1_token_len", "dtype": "int64"}, {"name": "response1_policy", "dtype": "string"}, {"name": "query_response1", "dtype": "string"}, {"name": "query_response1_token", "sequence": "int64"}, {"name": "query_response1_token_len", "dtype": "int64"}, {"name": "query_response1_token_response_label", "sequence": "int64"}, {"name": "query_token_len", "dtype": "int64"}, {"name": "policies", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3159944659, "num_examples": 92858}, {"name": "validation", "num_bytes": 2859307359, "num_examples": 83802}, {"name": "validation_cnndm", "num_bytes": 225356751, "num_examples": 2284}], "download_size": 290957172, "dataset_size": 6244608769}} | 2024-01-11T21:49:53+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "summarize_from_feedback_oai_preprocessing_1705009345"
More Information needed | [
"# Dataset Card for \"summarize_from_feedback_oai_preprocessing_1705009345\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"summarize_from_feedback_oai_preprocessing_1705009345\"\n\nMore Information needed"
] |
a0ce318fa938d846aa50c3c50ad61053d325f0d1 | # Dataset Card for "EUIPO_QE"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | agil/EUIPO_QE | [
"region:us"
] | 2024-01-11T22:00:06+00:00 | {"dataset_info": {"features": [{"name": "ID_oq", "dtype": "int64"}, {"name": "text_oq", "dtype": "string"}, {"name": "text_gq", "dtype": "string"}, {"name": "equiv", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 360710.4371112647, "num_examples": 2315}, {"name": "test", "num_bytes": 90216.56288873531, "num_examples": 579}], "download_size": 159616, "dataset_size": 450927.0}} | 2024-01-11T22:00:14+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "EUIPO_QE"
More Information needed | [
"# Dataset Card for \"EUIPO_QE\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"EUIPO_QE\"\n\nMore Information needed"
] |
0842765c19efb912f371365664aa05cef90c7304 |
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | sdsadsada/si | [
"region:us"
] | 2024-01-11T22:01:23+00:00 | {} | 2024-01-11T22:19:16+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
2962344bfd03a66d585f0888afd3dfe2d68364d5 |
# Dataset Card for "bank es websites - 2401"
Dataset extracted from public websites by [wordslab-webscraper](https://github.com/wordslab-org/wordslab-webscraper) in 2401:
- domain: bank
- language: es
- license: Apache 2.0
## Dataset Sources
wordslab-webscraper follows the industry best practices for **polite web scraping**:
- clearly identifies itself as a known text indexing bot: "bingbot"
- doesn't try to hide the user IP address behind proxies
- doesn't try to circumvent bots protection solutions
- waits for a minimum delay between two pages to avoid generating too much load
- respects the website "robots.txt" indexing directives
- respects the web page Meta Robots HTML tag
- respects the web page X-Robots-Tag HTTP header
- respects the web page links rel=nofollow HTML attributes
The text data was extracted from the following websites:
| Website | HTML pages | PDF docs | Words |
|:---|:---:|:---:|:---:|
| elpais.com | 648 | 0 | 859572 |
| orangebank.es | 230 | 2 | 83540 |
| selectra.es | 1781 | 14 | 2358891 |
| www.20minutos.es | 565 | 0 | 445259 |
| www.bancamarch.es | 134 | 74 | 391999 |
| www.bancobig.es | 83 | 18 | 56040 |
| www.bancocooperativo.es | 348 | 139 | 1140507 |
| www.bancodepositos.es | 112 | 111 | 429784 |
| www.bancomediolanum.es | 186 | 281 | 1631194 |
| www.bancomundial.org | 123 | 0 | 198375 |
| www.bancosantander.es | 514 | 0 | 426308 |
| www.bankinter.com | 1289 | 126 | 1170979 |
| www.bbva.es | 796 | 174 | 1159709 |
| www.bcc.es | 429 | 188 | 2213459 |
| www.bde.es | 147 | 0 | 75821 |
| www.bnpparibas.es | 107 | 15 | 187157 |
| www.caixabank.es | 331 | 7 | 275867 |
| www.cetelem.es | 277 | 12 | 191143 |
| www.cnmv.es | 304 | 164 | 763320 |
| www.deutsche-bank.es | 424 | 155 | 1278585 |
| www.ebnbanco.com | 314 | 10 | 226346 |
| www.elperiodico.com | 1997 | 0 | 1464925 |
| www.evobanco.com | 610 | 2 | 502912 |
| www.finanzas.com | 1389 | 0 | 612969 |
| www.fundacionbancosabadell.com | 482 | 33 | 237122 |
| www.fundacionbancosantander.com | 192 | 59 | 277875 |
| www.grupbancsabadell.com | 280 | 142 | 3587581 |
| www.ibercaja.es | 368 | 8 | 247437 |
| www.lavanguardia.com | 1417 | 0 | 1138809 |
| www.openbank.es | 97 | 0 | 89034 |
| www.r4.com | 5944 | 524 | 3347625 |
| www.santander.com | 165 | 48 | 186743 |
| www.triodos.es | 385 | 92 | 919441 |
| www.unicajabanco.es | 401 | 188 | 1004625 |
## Uses
**WARNING**
- **the text included in this dataset belongs to its original authors** and is protected by copyright laws
- you are not allowed to use this dataset for anything else than **training a large language model**
- when using a large language model trained on this dataset, you will need to ensure that you comply with the law
- if you benefit from this large language model, you should try to share the value with the original text authors
wordslab-webscraper uses an advanced Html to text conversion algorithm optimized for **long context language modeling**:
- tries to recover the logical structure of the document from the Html or PDF layout
- preserves document / section / list / table grouping and nesting information
- **deduplicates text at the website level while preserving the document structure**
Each example in this dataset is a **markdown text conversion of a full HTML page or PDF document**:
- the document structure is preserved by markdown syntax: headers, lists, tables, paragraphs
- all duplicate paragraphs are removed
## Dataset Structure
The dataset is divided in 3 splits:
- train: 80% of the data
- valid: 10% of the data
- test: 10% of the data
wordslab-webscraper generates **one parquet file per website and per split**.
The parquet files are named with the following pattern:
- bank_es_2401_[split]_[website].parquet
Note that you can load individual splits or websites with HuggingFace datasets using the following commands:
```python
from datasets import load_dataset
# Load a single split
dataset = load_dataset("frenchtext/bank-es-2401", split="train")

# Load a single website
data_files = { "train": "bank_es_2401_train_[website].parquet", "valid": "bank_es_2401_valid_[website].parquet", "test": "bank_es_2401_test_[website].parquet" }
dataset = load_dataset("frenchtext/bank-es-2401", data_files=data_files)
```
Each example in the dataset contains the text of a full web page or PDF document, with the following features:
- Uri: string
- ExtractedFromPDF: bool
- Timestamp: string
- Lang: string
- Title: string
- Text: string
- Words: int32
- AvgWordsLength: int32
- Chars: int32
- LetterChars: int32
- NumberChars: int32
- OtherChars: int32
Note that because each example is a full page or document, the "Text" feature can be a pretty long string containing thousands of words (as measured by the "Words" feature): you will typically need to chunk it down to the context size of your large language model before using it.
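As an illustration only, one possible chunking sketch (the tokenizer choice and output column name are assumptions, not part of this dataset):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # any tokenizer of your choice
context_size = 1024

dataset = load_dataset("frenchtext/bank-es-2401", split="train")

def chunk_text(example):
    # tokenize the full document, then cut it into context-sized pieces
    ids = tokenizer.encode(example["Text"])
    chunks = [ids[i:i + context_size] for i in range(0, len(ids), context_size)]
    return {"input_ids": chunks}

chunked = dataset.map(chunk_text, remove_columns=dataset.column_names)
```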
## Bias, Risks, and Limitations
This dataset is a direct extraction from the source websites.
It was not manually curated to remove misleading, offensive, or harmful content.
**Please add a filtering step before using it to train a large language model** if the source websites can't be trusted.
## Dataset Card Contact
Please add a comment in the community section of this repository if you want the maintainer to add or remove websites from this dataset.
| frenchtext/bank-es-2401 | [
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:es",
"license:apache-2.0",
"wordslab-webscraper",
"region:us"
] | 2024-01-11T22:13:51+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": "es", "license": "apache-2.0", "multilinguality": "monolingual", "size_categories": "10K<n<100K", "source_datasets": "original", "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "bank es websites - 2401", "tags": ["wordslab-webscraper"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "bank_es_2401_train_*.parquet"}, {"split": "valid", "path": "bank_es_2401_valid_*.parquet"}, {"split": "test", "path": "bank_es_2401_test_*.parquet"}]}], "dataset_info": {"features": [{"name": "Uri", "dtype": "string"}, {"name": "ExtractedFromPDF", "dtype": "bool"}, {"name": "Timestamp", "dtype": "string"}, {"name": "Lang", "dtype": "string"}, {"name": "Title", "dtype": "string"}, {"name": "Text", "dtype": "string"}, {"name": "Words", "dtype": "int32"}, {"name": "AvgWordsLength", "dtype": "int32"}, {"name": "Chars", "dtype": "int32"}, {"name": "LetterChars", "dtype": "int32"}, {"name": "NumberChars", "dtype": "int32"}, {"name": "OtherChars", "dtype": "int32"}], "config_name": "default", "splits": [{"name": "train", "num_examples": 20350}, {"name": "valid", "num_examples": 2545}, {"name": "test", "num_examples": 2560}], "download_size": 110598767}} | 2024-01-11T22:16:55+00:00 | [] | [
"es"
] | TAGS
#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Spanish #license-apache-2.0 #wordslab-webscraper #region-us
| Dataset Card for "bank es websites - 2401"
==========================================
Dataset extracted from public websites by wordslab-webscraper in 2401:
* domain: bank
* language: es
* license: Apache 2.0
Dataset Sources
---------------
wordslab-webscraper follows the industry best practices for polite web scraping:
* clearly identifies itself as a known text indexing bot: "bingbot"
* doesn't try to hide the user IP address behind proxies
* doesn't try to circumvent bots protection solutions
* waits for a minimum delay between two pages to avoid generating too much load
* respects the website "URL" indexing directives
* respects the web page Meta Robots HTML tag
* respects the web page X-Robots-Tag HTTP header
* respects the web page links rel=nofollow HTML attributes
The text data was extracted from the following websites:
Uses
----
WARNING
* the text included in this dataset belongs to its original authors and is protected by copyright laws
* you are not allowed to use this dataset for anything else than training a large language model
* when using a large language model trained on this dataset, you will need to ensure that you comply with the law
* if you benefit from this large language model, you should try to share the value with the original text authors
wordslab-webscraper uses an advanced Html to text conversion algorithm optimized for long context language modeling:
* tries to recover the logical structure of the document from the Html or PDF layout
* preserves document / section / list / table grouping and nesting information
* deduplicates text at the website level while preserving the document structure
Each example in this dataset is a markdown text conversion of a full HTML page or PDF document:
* the document structure is preserved by markdown syntax: headers, lists, tables, paragraphs
* all duplicate paragraphs are removed
Dataset Structure
-----------------
The dataset is divided in 3 splits:
* train: 80% of the data
* valid: 10% of the data
* test: 10% of the data
wordslab-webscraper generates one parquet file per website and per split.
The parquet files are named with the following pattern:
* bank\_es\_2401\_[split]\_[website].parquet
Note that you can load individual splits or websites with HuggingFace datasets using the following commands:
Each example in the dataset contains the text of a full web page or PDF document, with the following features:
* Uri: string
* ExtractedFromPDF: bool
* Timestamp: string
* Lang: string
* Title: string
* Text: string
* Words: int32
* AvgWordsLength: int32
* Chars: int32
* LetterChars: int32
* NumberChars: int32
* OtherChars: int32
Note that because each example is a full page or document, the "Text" feature can be a pretty long string containing thousands of words (as measured by the "Words" feature): you will typically need to chunk it down to the context size of your large language model before using it.
Bias, Risks, and Limitations
----------------------------
This dataset is a direct extraction from the source websites.
It was not manually curated to remove misleading, offensive, or harmful content.
Please add a filtering step before using it to train a large language model if the source websites can't be trusted.
Dataset Card Contact
--------------------
Please add a comment in the community section of this repository if you want the maintainer to add or remove websites from this dataset.
| [] | [
"TAGS\n#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Spanish #license-apache-2.0 #wordslab-webscraper #region-us \n"
] |
525a4d44d0f292f6d6e581f5c82707e7230b189f |
# Natural Instructions v2 Coreference Tasks
- Project: https://github.com/allenai/natural-instructions
- Data source: [DataProvenanceInitiative/niv2_submix_original](https://huggingface.co/datasets/DataProvenanceInitiative/niv2_submix_original)
## Details
This dataset contains all coreference examples that were included in the [Flan 2022 collection](https://github.com/google-research/FLAN/tree/main/flan/v2) which were originally published in Super-Natural-Instructions.
The data is copied from the preprocessed Natural Instructions v2 dataset at [DataProvenanceInitiative/niv2_submix_original](https://huggingface.co/datasets/DataProvenanceInitiative/niv2_submix_original).
These tasks are:
* "task1391_winogrande_coreference_resolution"
* "task1664_wino_bias_coreference_resolution"
* "task304_numeric_fused_head_coreference_resolution"
* "task892_gap_coreference_resolution"
* "task891_gap_coreference_resolution"
* "task330_gap_coreference_resolution"
* "task401_numeric_fused_head_coreference_resolution"
* "task033_winogrande_coreference_resolution"
* "task133_winowhy_coreference_resolution"
* "task329_gap_coreference_resolution"
* "task249_enhanced_wsc_coreference_resolution"
* "task648_winograd_wsc_coreference_resolution"
* "task1390_wsc_fiexed_coreference_resolution"
* "task893_gap_coreference_resolution"
### Fields
- `inputs`: a `string` feature.
- `targets`: a `string` feature.
- `task_source`: a `string` feature.
- `task_name`: a `string` feature.
- `template_type`: a `string` feature.
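A minimal loading sketch (the split names are not listed on this card, so the first available split is taken):

```python
from datasets import load_dataset

dataset_dict = load_dataset("coref-data/niv2_coreference_raw")
split = next(iter(dataset_dict.values()))
example = split[0]
for field in ("task_source", "task_name", "template_type", "inputs", "targets"):
    print(field, "->", str(example[field])[:120])
```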
## Citation
```
@inproceedings{wang-etal-2022-super,
title = "Super-{N}atural{I}nstructions: Generalization via Declarative Instructions on 1600+ {NLP} Tasks",
author = "Wang, Yizhong and
Mishra, Swaroop and
Alipoormolabashi, Pegah and
Kordi, Yeganeh and
Mirzaei, Amirreza and
Naik, Atharva and
Ashok, Arjun and
Dhanasekaran, Arut Selvan and
Arunkumar, Anjana and
Stap, David and
Pathak, Eshaan and
Karamanolakis, Giannis and
Lai, Haizhi and
Purohit, Ishan and
Mondal, Ishani and
Anderson, Jacob and
Kuznia, Kirby and
Doshi, Krima and
Pal, Kuntal Kumar and
Patel, Maitreya and
Moradshahi, Mehrad and
Parmar, Mihir and
Purohit, Mirali and
Varshney, Neeraj and
Kaza, Phani Rohitha and
Verma, Pulkit and
Puri, Ravsehaj Singh and
Karia, Rushang and
Doshi, Savan and
Sampat, Shailaja Keyur and
Mishra, Siddhartha and
Reddy A, Sujan and
Patro, Sumanta and
Dixit, Tanay and
Shen, Xudong",
editor = "Goldberg, Yoav and
Kozareva, Zornitsa and
Zhang, Yue",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.emnlp-main.340",
doi = "10.18653/v1/2022.emnlp-main.340",
pages = "5085--5109",
abstract = "How well can NLP models generalize to a variety of unseen tasks when provided with task instructions? To address this question, we first introduce Super-NaturalInstructions, a benchmark of 1,616 diverse NLP tasks and their expert-written instructions. Our collection covers 76 distinct task types, including but not limited to classification, extraction, infilling, sequence tagging, text rewriting, and text composition. This large and diverse collection of tasks enables rigorous benchmarking of cross-task generalization under instructions{---}training models to follow instructions on a subset of tasks and evaluating them on the remaining unseen ones. Furthermore, we build Tk-Instruct, a transformer model trained to follow a variety of in-context instructions (plain language task definitions or k-shot examples). Our experiments show that Tk-Instruct outperforms existing instruction-following models such as InstructGPT by over 9{\%} on our benchmark despite being an order of magnitude smaller. We further analyze generalization as a function of various scaling parameters, such as the number of observed tasks, the number of instances per task, and model sizes. We hope our dataset and model facilitate future progress towards more general-purpose NLP models.",
}
``` | coref-data/niv2_coreference_raw | [
"license:other",
"region:us"
] | 2024-01-11T22:31:20+00:00 | {"license": "other"} | 2024-01-19T00:03:46+00:00 | [] | [] | TAGS
#license-other #region-us
|
# Natural Instructions v2 Coreference Tasks
- Project: URL
- Data source: DataProvenanceInitiative/niv2_submix_original
## Details
This dataset contains all coreference examples that were included in the Flan 2022 collection which were originally published in Super-Natural-Instructions.
The data is copied from the preprocessed Natural Instructions v2 dataset at DataProvenanceInitiative/niv2_submix_original.
These tasks are:
* "task1391_winogrande_coreference_resolution"
* "task1664_wino_bias_coreference_resolution"
* "task304_numeric_fused_head_coreference_resolution"
* "task892_gap_coreference_resolution"
* "task891_gap_coreference_resolution"
* "task330_gap_coreference_resolution"
* "task401_numeric_fused_head_coreference_resolution"
* "task033_winogrande_coreference_resolution"
* "task133_winowhy_coreference_resolution"
* "task329_gap_coreference_resolution"
* "task249_enhanced_wsc_coreference_resolution"
* "task648_winograd_wsc_coreference_resolution"
* "task1390_wsc_fiexed_coreference_resolution"
* "task893_gap_coreference_resolution"
### Fields
- 'inputs': a 'string' feature.
- 'targets': a 'string' feature.
- 'task_source': a 'string' feature.
- 'task_name': a 'string' feature.
- 'template_type': a 'string' feature.
| [
"# Natural Instructions v2 Coreference Tasks\n\n- Project: URL\n- Data source: DataProvenanceInitiative/niv2_submix_original",
"## Details\n\nThis dataset contains all coreference examples that were included in the Flan 2022 collection which were orignally published in Super-Natural-Instructions.\n\nThe data is copied from the preprocessed Natural Instructions v2 dataset at DataProvenanceInitiative/niv2_submix_original.\n\nThese tasks are:\n * \"task1391_winogrande_coreference_resolution\"\n * \"task1664_wino_bias_coreference_resolution\"\n * \"task304_numeric_fused_head_coreference_resolution\"\n * \"task892_gap_coreference_resolution\"\n * \"task891_gap_coreference_resolution\"\n * \"task330_gap_coreference_resolution\"\n * \"task401_numeric_fused_head_coreference_resolution\"\n * \"task033_winogrande_coreference_resolution\"\n * \"task133_winowhy_coreference_resolution\"\n * \"task329_gap_coreference_resolution\"\n * \"task249_enhanced_wsc_coreference_resolution\"\n * \"task648_winograd_wsc_coreference_resolution\"\n * \"task1390_wsc_fiexed_coreference_resolution\"\n * \"task893_gap_coreference_resolution\"",
"### Fields\n\n- 'inputs': a 'string' feature.\n- 'targets': a 'string' feature.\n- 'task_source': a 'string' feature.\n- 'task_name': a 'string' feature.\n- 'template_type': a 'string' feature."
] | [
"TAGS\n#license-other #region-us \n",
"# Natural Instructions v2 Coreference Tasks\n\n- Project: URL\n- Data source: DataProvenanceInitiative/niv2_submix_original",
"## Details\n\nThis dataset contains all coreference examples that were included in the Flan 2022 collection which were orignally published in Super-Natural-Instructions.\n\nThe data is copied from the preprocessed Natural Instructions v2 dataset at DataProvenanceInitiative/niv2_submix_original.\n\nThese tasks are:\n * \"task1391_winogrande_coreference_resolution\"\n * \"task1664_wino_bias_coreference_resolution\"\n * \"task304_numeric_fused_head_coreference_resolution\"\n * \"task892_gap_coreference_resolution\"\n * \"task891_gap_coreference_resolution\"\n * \"task330_gap_coreference_resolution\"\n * \"task401_numeric_fused_head_coreference_resolution\"\n * \"task033_winogrande_coreference_resolution\"\n * \"task133_winowhy_coreference_resolution\"\n * \"task329_gap_coreference_resolution\"\n * \"task249_enhanced_wsc_coreference_resolution\"\n * \"task648_winograd_wsc_coreference_resolution\"\n * \"task1390_wsc_fiexed_coreference_resolution\"\n * \"task893_gap_coreference_resolution\"",
"### Fields\n\n- 'inputs': a 'string' feature.\n- 'targets': a 'string' feature.\n- 'task_source': a 'string' feature.\n- 'task_name': a 'string' feature.\n- 'template_type': a 'string' feature."
] |
6990afc090d2bab0f7a4f547a8b94e67cd61419d |
# ministack-preferences-hf_dpo
this is the `mlabonne/ministack-preferences` dataset simply renamed for ease of use with trl/axolotl
- also, I took this opportunity to create a [nomic topic model](https://atlas.nomic.ai/map/6e119454-0ac1-4989-a339-de9e61747dc8/7d565286-ae4a-4219-baa0-2eaad94d41bc?xs=-28.44755&xf=33.99401&ys=-17.79427&yf=20.88903) with embeddings/etc created on the `prompt` column; it's fun to play with
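A quick look at the preference triples (repo id, split, and column names as published with this dataset):

```python
from datasets import load_dataset

ds = load_dataset("pszemraj/ministack-preferences-hf_dpo", split="train")
row = ds[0]
print(row["prompt"][:200])
print("chosen:  ", row["chosen"][:120])
print("rejected:", row["rejected"][:120])
```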
| pszemraj/ministack-preferences-hf_dpo | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"source_datasets:mlabonne/ministack-preferences",
"license:odc-by",
"region:us"
] | 2024-01-11T22:52:41+00:00 | {"license": "odc-by", "size_categories": ["1K<n<10K"], "source_datasets": "mlabonne/ministack-preferences", "task_categories": ["text-generation"], "dataset_info": [{"config_name": "default", "features": [{"name": "prompt", "dtype": "string"}, {"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2560796, "num_examples": 1000}, {"name": "test", "num_bytes": 2758016, "num_examples": 1000}], "download_size": 3058502, "dataset_size": 5318812}, {"config_name": "pair-STS", "features": [{"name": "prompt", "dtype": "string"}, {"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}, {"name": "cosine_similarity", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 2568796, "num_examples": 1000}, {"name": "test", "num_bytes": 2766016, "num_examples": 1000}], "download_size": 3076677, "dataset_size": 5334812}], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}, {"config_name": "pair-STS", "data_files": [{"split": "train", "path": "pair-STS/train-*"}, {"split": "test", "path": "pair-STS/test-*"}]}]} | 2024-01-12T01:53:05+00:00 | [] | [] | TAGS
#task_categories-text-generation #size_categories-1K<n<10K #source_datasets-mlabonne/ministack-preferences #license-odc-by #region-us
|
# ministack-preferences-hf_dpo
this is the 'mlabonne/ministack-preferences' dataset simply renamed for ease of use with trl/axolotl
- also, I took this opportunity to create a nomic topic model with embeddings/etc created on the 'prompt' column, it's fun to play with
| [
"# ministack-preferences-hf_dpo\n\n\nthis is the 'mlabonne/ministack-preferences' dataset simply renamed for ease of use with trl/axolotl\n\n\n\n- also, I took this opportunity to create a nomic topic model with embeddings/etc created on the 'prompt' column, it's fun to play with"
] | [
"TAGS\n#task_categories-text-generation #size_categories-1K<n<10K #source_datasets-mlabonne/ministack-preferences #license-odc-by #region-us \n",
"# ministack-preferences-hf_dpo\n\n\nthis is the 'mlabonne/ministack-preferences' dataset simply renamed for ease of use with trl/axolotl\n\n\n\n- also, I took this opportunity to create a nomic topic model with embeddings/etc created on the 'prompt' column, it's fun to play with"
] |
838161f064c5c65f3830411c7563b17cafd61250 |
# Sentiment Analysis (Portuguese) dataset
This dataset is a concatenation of the [`IMDB 50K`](https://www.kaggle.com/datasets/lakshmi25npathi/imdb-dataset-of-50k-movie-reviews?select=IMDB+Dataset.csv), the [`Twitter US Airline Sentiment`](https://www.kaggle.com/datasets/crowdflower/twitter-airline-sentiment), App Reviews scraped from [Google Play](https://github.com/Nkluge-correa/teeny-tiny_castle/blob/master/ML%20Explainability/NLP%20Interpreter/text_scraping.ipynb), and the [`EcoPreprocessed`](https://www.kaggle.com/datasets/pradeeshprabhakar/preprocessed-dataset-sentiment-analysis).
- **Note:** Samples were translated via the Google Translate API.
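A minimal loading sketch (column names taken from the dataset configuration):

```python
from datasets import load_dataset

ds = load_dataset("AiresPucrs/sentiment-analysis-pt", split="train")
print(ds[0]["text"][:120], ds[0]["label"])
```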
| AiresPucrs/sentiment-analysis-pt | [
"license:apache-2.0",
"region:us"
] | 2024-01-11T22:55:14+00:00 | {"license": "apache-2.0", "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 71907240, "num_examples": 85027}], "download_size": 43818509, "dataset_size": 71907240}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-02-03T16:18:26+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
|
# Sentiment Analysis (Portuguese) dataset
This dataset is a concatenation of the 'IMDB 50K', the 'Twitter US Airline Sentiment', App Reviews scraped from Google Play, and the 'EcoPreprocessed'.
- Note: Samples were translated via the Google Translate API.
| [
"# Sentiment Analysis (Portuguese) dataset\n\nThis dataset is a concatenation of the 'IMDB 50K', the 'Twitter US Airline Sentiment', App Reviews scraped from Google Play, and the 'EcoPreprocessed'.\n\n- Note: Samples were translated via the Google Translate API."
] | [
"TAGS\n#license-apache-2.0 #region-us \n",
"# Sentiment Analysis (Portuguese) dataset\n\nThis dataset is a concatenation of the 'IMDB 50K', the 'Twitter US Airline Sentiment', App Reviews scraped from Google Play, and the 'EcoPreprocessed'.\n\n- Note: Samples were translated via the Google Translate API."
] |
ab7bc2cf3e10a8b26f444c9a54f6a8cd0e2257bf |
License: from my side, you can do whatever you want. Though parts of this data were generated via OpenAI's ChatGPT (using GPT-4 and GPT-3.5 Instruct) as well as GPT-3.5 via their API.
Their terms prohibit the development of competing models. I did not bother to read the terms further. Use at your own risk. Have fun :) | maxmyn/wholesome_greentext_110k | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:en",
"license:other",
"casual",
"internet-culture",
"region:us"
] | 2024-01-11T23:42:40+00:00 | {"language": ["en"], "license": "other", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation"], "pretty_name": "Short Wholesome 4chan-style Greentext ", "dataset_info": {"features": [{"name": "greentexts", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 15193164, "num_examples": 111320}], "download_size": 9449169, "dataset_size": 15193164}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["casual", "internet-culture"]} | 2024-01-14T14:35:36+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-generation #size_categories-100K<n<1M #language-English #license-other #casual #internet-culture #region-us
|
License: from my side, you can do whatever you want. Though parts of this data were generated via OpenAI's ChatGPT (using GPT-4 and GPT-3.5 Instruct) as well as GPT-3.5 via their API.
Their terms prohibit the development of competing models. I did not bother to read the terms further. Use at your own risk. Have fun :) | [] | [
"TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-English #license-other #casual #internet-culture #region-us \n"
] |
6d253cab8192de61ed65ab619dd0002ae785d818 |
# Dataset Card for Evaluation run of cloudyu/Mixtral_7Bx6_MoE_35B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [cloudyu/Mixtral_7Bx6_MoE_35B](https://huggingface.co/cloudyu/Mixtral_7Bx6_MoE_35B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_cloudyu__Mixtral_7Bx6_MoE_35B",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-01-14T16:00:09.048254](https://huggingface.co/datasets/open-llm-leaderboard/details_cloudyu__Mixtral_7Bx6_MoE_35B/blob/main/results_2024-01-14T16-00-09.048254.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6538307305259115,
"acc_stderr": 0.03206532838135927,
"acc_norm": 0.6536540314559122,
"acc_norm_stderr": 0.03272839976259325,
"mc1": 0.5055079559363526,
"mc1_stderr": 0.01750243899045107,
"mc2": 0.6576763693172452,
"mc2_stderr": 0.01500859930650817
},
"harness|arc:challenge|25": {
"acc": 0.674061433447099,
"acc_stderr": 0.013697432466693246,
"acc_norm": 0.6996587030716723,
"acc_norm_stderr": 0.013395909309957005
},
"harness|hellaswag|10": {
"acc": 0.6878111929894444,
"acc_stderr": 0.0046243936909669,
"acc_norm": 0.8681537542322246,
"acc_norm_stderr": 0.0033763209559167064
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6444444444444445,
"acc_stderr": 0.04135176749720385,
"acc_norm": 0.6444444444444445,
"acc_norm_stderr": 0.04135176749720385
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7105263157894737,
"acc_stderr": 0.03690677986137283,
"acc_norm": 0.7105263157894737,
"acc_norm_stderr": 0.03690677986137283
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.65,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.65,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7132075471698113,
"acc_stderr": 0.02783491252754406,
"acc_norm": 0.7132075471698113,
"acc_norm_stderr": 0.02783491252754406
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7638888888888888,
"acc_stderr": 0.03551446610810826,
"acc_norm": 0.7638888888888888,
"acc_norm_stderr": 0.03551446610810826
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.51,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.51,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.57,
"acc_stderr": 0.04975698519562428,
"acc_norm": 0.57,
"acc_norm_stderr": 0.04975698519562428
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6647398843930635,
"acc_stderr": 0.03599586301247077,
"acc_norm": 0.6647398843930635,
"acc_norm_stderr": 0.03599586301247077
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.43137254901960786,
"acc_stderr": 0.04928099597287533,
"acc_norm": 0.43137254901960786,
"acc_norm_stderr": 0.04928099597287533
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909282,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909282
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5914893617021276,
"acc_stderr": 0.032134180267015755,
"acc_norm": 0.5914893617021276,
"acc_norm_stderr": 0.032134180267015755
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5,
"acc_stderr": 0.047036043419179864,
"acc_norm": 0.5,
"acc_norm_stderr": 0.047036043419179864
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5724137931034483,
"acc_stderr": 0.04122737111370333,
"acc_norm": 0.5724137931034483,
"acc_norm_stderr": 0.04122737111370333
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4470899470899471,
"acc_stderr": 0.025606723995777025,
"acc_norm": 0.4470899470899471,
"acc_norm_stderr": 0.025606723995777025
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4523809523809524,
"acc_stderr": 0.044518079590553275,
"acc_norm": 0.4523809523809524,
"acc_norm_stderr": 0.044518079590553275
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.36,
"acc_stderr": 0.048241815132442176,
"acc_norm": 0.36,
"acc_norm_stderr": 0.048241815132442176
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7774193548387097,
"acc_stderr": 0.023664216671642518,
"acc_norm": 0.7774193548387097,
"acc_norm_stderr": 0.023664216671642518
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4876847290640394,
"acc_stderr": 0.035169204442208966,
"acc_norm": 0.4876847290640394,
"acc_norm_stderr": 0.035169204442208966
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7818181818181819,
"acc_stderr": 0.03225078108306289,
"acc_norm": 0.7818181818181819,
"acc_norm_stderr": 0.03225078108306289
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7929292929292929,
"acc_stderr": 0.028869778460267045,
"acc_norm": 0.7929292929292929,
"acc_norm_stderr": 0.028869778460267045
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9067357512953368,
"acc_stderr": 0.02098685459328973,
"acc_norm": 0.9067357512953368,
"acc_norm_stderr": 0.02098685459328973
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6615384615384615,
"acc_stderr": 0.023991500500313036,
"acc_norm": 0.6615384615384615,
"acc_norm_stderr": 0.023991500500313036
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3296296296296296,
"acc_stderr": 0.028661201116524565,
"acc_norm": 0.3296296296296296,
"acc_norm_stderr": 0.028661201116524565
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6848739495798319,
"acc_stderr": 0.030176808288974337,
"acc_norm": 0.6848739495798319,
"acc_norm_stderr": 0.030176808288974337
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.33112582781456956,
"acc_stderr": 0.038425817186598696,
"acc_norm": 0.33112582781456956,
"acc_norm_stderr": 0.038425817186598696
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8513761467889909,
"acc_stderr": 0.015251253773660831,
"acc_norm": 0.8513761467889909,
"acc_norm_stderr": 0.015251253773660831
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5277777777777778,
"acc_stderr": 0.0340470532865388,
"acc_norm": 0.5277777777777778,
"acc_norm_stderr": 0.0340470532865388
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.803921568627451,
"acc_stderr": 0.027865942286639318,
"acc_norm": 0.803921568627451,
"acc_norm_stderr": 0.027865942286639318
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8016877637130801,
"acc_stderr": 0.02595502084162113,
"acc_norm": 0.8016877637130801,
"acc_norm_stderr": 0.02595502084162113
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6905829596412556,
"acc_stderr": 0.03102441174057221,
"acc_norm": 0.6905829596412556,
"acc_norm_stderr": 0.03102441174057221
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7786259541984732,
"acc_stderr": 0.03641297081313729,
"acc_norm": 0.7786259541984732,
"acc_norm_stderr": 0.03641297081313729
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7851239669421488,
"acc_stderr": 0.037494924487096966,
"acc_norm": 0.7851239669421488,
"acc_norm_stderr": 0.037494924487096966
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7685185185185185,
"acc_stderr": 0.04077494709252626,
"acc_norm": 0.7685185185185185,
"acc_norm_stderr": 0.04077494709252626
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7730061349693251,
"acc_stderr": 0.03291099578615769,
"acc_norm": 0.7730061349693251,
"acc_norm_stderr": 0.03291099578615769
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4642857142857143,
"acc_stderr": 0.04733667890053756,
"acc_norm": 0.4642857142857143,
"acc_norm_stderr": 0.04733667890053756
},
"harness|hendrycksTest-management|5": {
"acc": 0.7766990291262136,
"acc_stderr": 0.04123553189891431,
"acc_norm": 0.7766990291262136,
"acc_norm_stderr": 0.04123553189891431
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8974358974358975,
"acc_stderr": 0.01987565502786744,
"acc_norm": 0.8974358974358975,
"acc_norm_stderr": 0.01987565502786744
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8301404853128991,
"acc_stderr": 0.013428186370608306,
"acc_norm": 0.8301404853128991,
"acc_norm_stderr": 0.013428186370608306
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7485549132947977,
"acc_stderr": 0.02335736578587403,
"acc_norm": 0.7485549132947977,
"acc_norm_stderr": 0.02335736578587403
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.43687150837988825,
"acc_stderr": 0.016588680864530626,
"acc_norm": 0.43687150837988825,
"acc_norm_stderr": 0.016588680864530626
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7352941176470589,
"acc_stderr": 0.02526169121972948,
"acc_norm": 0.7352941176470589,
"acc_norm_stderr": 0.02526169121972948
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7009646302250804,
"acc_stderr": 0.02600330111788514,
"acc_norm": 0.7009646302250804,
"acc_norm_stderr": 0.02600330111788514
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7407407407407407,
"acc_stderr": 0.02438366553103545,
"acc_norm": 0.7407407407407407,
"acc_norm_stderr": 0.02438366553103545
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.46099290780141844,
"acc_stderr": 0.029736592526424438,
"acc_norm": 0.46099290780141844,
"acc_norm_stderr": 0.029736592526424438
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4589308996088657,
"acc_stderr": 0.012727084826799798,
"acc_norm": 0.4589308996088657,
"acc_norm_stderr": 0.012727084826799798
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6838235294117647,
"acc_stderr": 0.028245687391462923,
"acc_norm": 0.6838235294117647,
"acc_norm_stderr": 0.028245687391462923
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6715686274509803,
"acc_stderr": 0.018999707383162673,
"acc_norm": 0.6715686274509803,
"acc_norm_stderr": 0.018999707383162673
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6818181818181818,
"acc_stderr": 0.04461272175910509,
"acc_norm": 0.6818181818181818,
"acc_norm_stderr": 0.04461272175910509
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7673469387755102,
"acc_stderr": 0.02704925791589618,
"acc_norm": 0.7673469387755102,
"acc_norm_stderr": 0.02704925791589618
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8656716417910447,
"acc_stderr": 0.02411267824090083,
"acc_norm": 0.8656716417910447,
"acc_norm_stderr": 0.02411267824090083
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.82,
"acc_stderr": 0.038612291966536934,
"acc_norm": 0.82,
"acc_norm_stderr": 0.038612291966536934
},
"harness|hendrycksTest-virology|5": {
"acc": 0.536144578313253,
"acc_stderr": 0.03882310850890594,
"acc_norm": 0.536144578313253,
"acc_norm_stderr": 0.03882310850890594
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8245614035087719,
"acc_stderr": 0.02917088550072767,
"acc_norm": 0.8245614035087719,
"acc_norm_stderr": 0.02917088550072767
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5055079559363526,
"mc1_stderr": 0.01750243899045107,
"mc2": 0.6576763693172452,
"mc2_stderr": 0.01500859930650817
},
"harness|winogrande|5": {
"acc": 0.8113654301499605,
"acc_stderr": 0.010995172318019813
},
"harness|gsm8k|5": {
"acc": 0.7126611068991661,
"acc_stderr": 0.012464677060107081
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
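As a rough illustration of the layout, each evaluated task is stored under its own configuration (for example `harness_gsm8k_5`, `harness_winogrande_5`, and the per-subject `harness_hendrycksTest_*_5` configurations), and each configuration carries one split per timestamped run plus a `latest` split pointing to the most recent one. The sketch below assumes nothing beyond the configuration and split names listed in this card's metadata; it loads one per-task configuration and reports its size and fields.

```python
from datasets import load_dataset

# Per-task details: one configuration per benchmark task, with a "latest"
# split aliasing the most recent run of this model.
gsm8k_details = load_dataset(
    "open-llm-leaderboard/details_cloudyu__Mixtral_7Bx6_MoE_35B",
    "harness_gsm8k_5",
    split="latest",
)
print(gsm8k_details.num_rows)       # number of evaluated examples
print(gsm8k_details.column_names)   # per-example fields recorded by the harness
```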
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, inter-annotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_cloudyu__Mixtral_7Bx6_MoE_35B | [
"region:us"
] | 2024-01-12T00:23:07+00:00 | {"pretty_name": "Evaluation run of cloudyu/Mixtral_7Bx6_MoE_35B", "dataset_summary": "Dataset automatically created during the evaluation run of model [cloudyu/Mixtral_7Bx6_MoE_35B](https://huggingface.co/cloudyu/Mixtral_7Bx6_MoE_35B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_cloudyu__Mixtral_7Bx6_MoE_35B\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-14T16:00:09.048254](https://huggingface.co/datasets/open-llm-leaderboard/details_cloudyu__Mixtral_7Bx6_MoE_35B/blob/main/results_2024-01-14T16-00-09.048254.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6538307305259115,\n \"acc_stderr\": 0.03206532838135927,\n \"acc_norm\": 0.6536540314559122,\n \"acc_norm_stderr\": 0.03272839976259325,\n \"mc1\": 0.5055079559363526,\n \"mc1_stderr\": 0.01750243899045107,\n \"mc2\": 0.6576763693172452,\n \"mc2_stderr\": 0.01500859930650817\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.674061433447099,\n \"acc_stderr\": 0.013697432466693246,\n \"acc_norm\": 0.6996587030716723,\n \"acc_norm_stderr\": 0.013395909309957005\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6878111929894444,\n \"acc_stderr\": 0.0046243936909669,\n \"acc_norm\": 0.8681537542322246,\n \"acc_norm_stderr\": 0.0033763209559167064\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6444444444444445,\n \"acc_stderr\": 0.04135176749720385,\n \"acc_norm\": 0.6444444444444445,\n \"acc_norm_stderr\": 0.04135176749720385\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.7105263157894737,\n \"acc_stderr\": 0.03690677986137283,\n \"acc_norm\": 0.7105263157894737,\n \"acc_norm_stderr\": 0.03690677986137283\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.65,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.65,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7132075471698113,\n \"acc_stderr\": 0.02783491252754406,\n \"acc_norm\": 0.7132075471698113,\n \"acc_norm_stderr\": 0.02783491252754406\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7638888888888888,\n \"acc_stderr\": 0.03551446610810826,\n \"acc_norm\": 0.7638888888888888,\n \"acc_norm_stderr\": 0.03551446610810826\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n 
\"acc\": 0.51,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.51,\n \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.57,\n \"acc_stderr\": 0.04975698519562428,\n \"acc_norm\": 0.57,\n \"acc_norm_stderr\": 0.04975698519562428\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6647398843930635,\n \"acc_stderr\": 0.03599586301247077,\n \"acc_norm\": 0.6647398843930635,\n \"acc_norm_stderr\": 0.03599586301247077\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.43137254901960786,\n \"acc_stderr\": 0.04928099597287533,\n \"acc_norm\": 0.43137254901960786,\n \"acc_norm_stderr\": 0.04928099597287533\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 0.04292346959909282,\n \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.04292346959909282\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5914893617021276,\n \"acc_stderr\": 0.032134180267015755,\n \"acc_norm\": 0.5914893617021276,\n \"acc_norm_stderr\": 0.032134180267015755\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.047036043419179864,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.047036043419179864\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5724137931034483,\n \"acc_stderr\": 0.04122737111370333,\n \"acc_norm\": 0.5724137931034483,\n \"acc_norm_stderr\": 0.04122737111370333\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4470899470899471,\n \"acc_stderr\": 0.025606723995777025,\n \"acc_norm\": 0.4470899470899471,\n \"acc_norm_stderr\": 0.025606723995777025\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4523809523809524,\n \"acc_stderr\": 0.044518079590553275,\n \"acc_norm\": 0.4523809523809524,\n \"acc_norm_stderr\": 0.044518079590553275\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.048241815132442176,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.048241815132442176\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7774193548387097,\n \"acc_stderr\": 0.023664216671642518,\n \"acc_norm\": 0.7774193548387097,\n \"acc_norm_stderr\": 0.023664216671642518\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.4876847290640394,\n \"acc_stderr\": 0.035169204442208966,\n \"acc_norm\": 0.4876847290640394,\n \"acc_norm_stderr\": 0.035169204442208966\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7818181818181819,\n \"acc_stderr\": 0.03225078108306289,\n \"acc_norm\": 0.7818181818181819,\n \"acc_norm_stderr\": 0.03225078108306289\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7929292929292929,\n \"acc_stderr\": 0.028869778460267045,\n \"acc_norm\": 0.7929292929292929,\n \"acc_norm_stderr\": 0.028869778460267045\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9067357512953368,\n \"acc_stderr\": 0.02098685459328973,\n \"acc_norm\": 0.9067357512953368,\n \"acc_norm_stderr\": 0.02098685459328973\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6615384615384615,\n \"acc_stderr\": 0.023991500500313036,\n \"acc_norm\": 0.6615384615384615,\n \"acc_norm_stderr\": 0.023991500500313036\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3296296296296296,\n \"acc_stderr\": 0.028661201116524565,\n \"acc_norm\": 0.3296296296296296,\n \"acc_norm_stderr\": 0.028661201116524565\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6848739495798319,\n \"acc_stderr\": 0.030176808288974337,\n \"acc_norm\": 0.6848739495798319,\n \"acc_norm_stderr\": 0.030176808288974337\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.33112582781456956,\n \"acc_stderr\": 0.038425817186598696,\n \"acc_norm\": 0.33112582781456956,\n \"acc_norm_stderr\": 0.038425817186598696\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8513761467889909,\n \"acc_stderr\": 0.015251253773660831,\n \"acc_norm\": 0.8513761467889909,\n \"acc_norm_stderr\": 0.015251253773660831\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5277777777777778,\n \"acc_stderr\": 0.0340470532865388,\n \"acc_norm\": 0.5277777777777778,\n \"acc_norm_stderr\": 0.0340470532865388\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.803921568627451,\n \"acc_stderr\": 0.027865942286639318,\n \"acc_norm\": 0.803921568627451,\n \"acc_norm_stderr\": 0.027865942286639318\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8016877637130801,\n \"acc_stderr\": 0.02595502084162113,\n \"acc_norm\": 0.8016877637130801,\n \"acc_norm_stderr\": 0.02595502084162113\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6905829596412556,\n \"acc_stderr\": 0.03102441174057221,\n \"acc_norm\": 0.6905829596412556,\n \"acc_norm_stderr\": 0.03102441174057221\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7786259541984732,\n \"acc_stderr\": 0.03641297081313729,\n \"acc_norm\": 0.7786259541984732,\n \"acc_norm_stderr\": 0.03641297081313729\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7851239669421488,\n \"acc_stderr\": 0.037494924487096966,\n \"acc_norm\": 0.7851239669421488,\n \"acc_norm_stderr\": 0.037494924487096966\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7685185185185185,\n \"acc_stderr\": 0.04077494709252626,\n \"acc_norm\": 0.7685185185185185,\n \"acc_norm_stderr\": 0.04077494709252626\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7730061349693251,\n \"acc_stderr\": 0.03291099578615769,\n \"acc_norm\": 0.7730061349693251,\n \"acc_norm_stderr\": 0.03291099578615769\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4642857142857143,\n \"acc_stderr\": 0.04733667890053756,\n \"acc_norm\": 0.4642857142857143,\n \"acc_norm_stderr\": 0.04733667890053756\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7766990291262136,\n \"acc_stderr\": 0.04123553189891431,\n \"acc_norm\": 0.7766990291262136,\n \"acc_norm_stderr\": 0.04123553189891431\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8974358974358975,\n \"acc_stderr\": 0.01987565502786744,\n \"acc_norm\": 0.8974358974358975,\n \"acc_norm_stderr\": 0.01987565502786744\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 
0.8301404853128991,\n \"acc_stderr\": 0.013428186370608306,\n \"acc_norm\": 0.8301404853128991,\n \"acc_norm_stderr\": 0.013428186370608306\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7485549132947977,\n \"acc_stderr\": 0.02335736578587403,\n \"acc_norm\": 0.7485549132947977,\n \"acc_norm_stderr\": 0.02335736578587403\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.43687150837988825,\n \"acc_stderr\": 0.016588680864530626,\n \"acc_norm\": 0.43687150837988825,\n \"acc_norm_stderr\": 0.016588680864530626\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7352941176470589,\n \"acc_stderr\": 0.02526169121972948,\n \"acc_norm\": 0.7352941176470589,\n \"acc_norm_stderr\": 0.02526169121972948\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7009646302250804,\n \"acc_stderr\": 0.02600330111788514,\n \"acc_norm\": 0.7009646302250804,\n \"acc_norm_stderr\": 0.02600330111788514\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7407407407407407,\n \"acc_stderr\": 0.02438366553103545,\n \"acc_norm\": 0.7407407407407407,\n \"acc_norm_stderr\": 0.02438366553103545\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.46099290780141844,\n \"acc_stderr\": 0.029736592526424438,\n \"acc_norm\": 0.46099290780141844,\n \"acc_norm_stderr\": 0.029736592526424438\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4589308996088657,\n \"acc_stderr\": 0.012727084826799798,\n \"acc_norm\": 0.4589308996088657,\n \"acc_norm_stderr\": 0.012727084826799798\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6838235294117647,\n \"acc_stderr\": 0.028245687391462923,\n \"acc_norm\": 0.6838235294117647,\n \"acc_norm_stderr\": 0.028245687391462923\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6715686274509803,\n \"acc_stderr\": 0.018999707383162673,\n \"acc_norm\": 0.6715686274509803,\n \"acc_norm_stderr\": 0.018999707383162673\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n \"acc_stderr\": 0.04461272175910509,\n \"acc_norm\": 0.6818181818181818,\n \"acc_norm_stderr\": 0.04461272175910509\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7673469387755102,\n \"acc_stderr\": 0.02704925791589618,\n \"acc_norm\": 0.7673469387755102,\n \"acc_norm_stderr\": 0.02704925791589618\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8656716417910447,\n \"acc_stderr\": 0.02411267824090083,\n \"acc_norm\": 0.8656716417910447,\n \"acc_norm_stderr\": 0.02411267824090083\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.82,\n \"acc_stderr\": 0.038612291966536934,\n \"acc_norm\": 0.82,\n \"acc_norm_stderr\": 0.038612291966536934\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.536144578313253,\n \"acc_stderr\": 0.03882310850890594,\n \"acc_norm\": 0.536144578313253,\n \"acc_norm_stderr\": 0.03882310850890594\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8245614035087719,\n \"acc_stderr\": 0.02917088550072767,\n \"acc_norm\": 0.8245614035087719,\n \"acc_norm_stderr\": 0.02917088550072767\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5055079559363526,\n \"mc1_stderr\": 0.01750243899045107,\n \"mc2\": 0.6576763693172452,\n \"mc2_stderr\": 0.01500859930650817\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8113654301499605,\n \"acc_stderr\": 0.010995172318019813\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.7126611068991661,\n \"acc_stderr\": 0.012464677060107081\n 
}\n}\n```", "repo_url": "https://huggingface.co/cloudyu/Mixtral_7Bx6_MoE_35B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|arc:challenge|25_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|arc:challenge|25_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|gsm8k|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|gsm8k|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hellaswag|10_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hellaswag|10_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-12T00-20-46.590520.parquet", 
"**/details_harness|hendrycksTest-high_school_geography|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-12T00-20-46.590520.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T16-00-09.048254.parquet", 
"**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T16-00-09.048254.parquet", 
"**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T16-00-09.048254.parquet", 
"**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T16-00-09.048254.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": 
["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": 
["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": 
["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-12T00-20-46.590520.parquet"]}, 
{"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": 
"latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": 
["**/details_harness|hendrycksTest-nutrition|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": 
["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["**/details_harness|winogrande|5_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": ["**/details_harness|winogrande|5_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-14T16-00-09.048254.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_12T00_20_46.590520", "path": ["results_2024-01-12T00-20-46.590520.parquet"]}, {"split": "2024_01_14T16_00_09.048254", "path": 
["results_2024-01-14T16-00-09.048254.parquet"]}, {"split": "latest", "path": ["results_2024-01-14T16-00-09.048254.parquet"]}]}]} | 2024-01-14T16:02:51+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of cloudyu/Mixtral_7Bx6_MoE_35B
Dataset automatically created during the evaluation run of model cloudyu/Mixtral_7Bx6_MoE_35B on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
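A minimal sketch of such a load, assuming the leaderboard's usual `details_<org>__<model>` repository naming and using the `harness_winogrande_5` configuration listed in this card's metadata (swap in any other configuration name, or a timestamped split instead of "latest", as needed):

```python
from datasets import load_dataset

# Repository id assumed from the Open LLM Leaderboard naming convention;
# configuration names and splits come from this card's metadata.
data = load_dataset(
    "open-llm-leaderboard/details_cloudyu__Mixtral_7Bx6_MoE_35B",
    "harness_winogrande_5",
    split="latest",
)
print(data)
```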
## Latest results
These are the latest results from run 2024-01-14T16:00:09.048254 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of cloudyu/Mixtral_7Bx6_MoE_35B\n\n\n\nDataset automatically created during the evaluation run of model cloudyu/Mixtral_7Bx6_MoE_35B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-14T16:00:09.048254(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of cloudyu/Mixtral_7Bx6_MoE_35B\n\n\n\nDataset automatically created during the evaluation run of model cloudyu/Mixtral_7Bx6_MoE_35B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-14T16:00:09.048254(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
d243889a3eb6654029dbd7e7f9319ae31d58f97c |
# Ko-StrategyQA
This dataset represents a conversion of the [Ko-StrategyQA dataset](https://huggingface.co/datasets/NomaDamas/Ko-StrategyQA) into the [BeIR](https://github.com/beir-cellar/beir) format, making it compatible for use with [mteb](https://github.com/embeddings-benchmark/mteb).
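A minimal sketch of loading the three BeIR-style pieces with `datasets`, following the `corpus`, `queries` and `default` (qrels) configurations declared for this repository:

```python
from datasets import load_dataset

# Corpus documents: _id, title, text
corpus = load_dataset("taeminlee/Ko-StrategyQA", "corpus", split="corpus")
# Queries: _id, text
queries = load_dataset("taeminlee/Ko-StrategyQA", "queries", split="queries")
# Relevance judgements (qrels): query-id, corpus-id, score
qrels = load_dataset("taeminlee/Ko-StrategyQA", split="dev")

print(len(corpus), len(queries), len(qrels))
```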
The original dataset was designed for multi-hop QA, so we processed the data accordingly. First, we grouped the evidence documents tagged by annotators into sets, and excluded unit questions containing 'no_evidence' or 'operation'. | taeminlee/Ko-StrategyQA | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:Ko-StrategyQA",
"language:ko",
"text-retrieval",
"region:us"
] | 2024-01-12T01:58:26+00:00 | {"language": ["ko"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["Ko-StrategyQA"], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "config_names": ["corpus"], "tags": ["text-retrieval"], "dataset_info": [{"config_name": "default", "features": [{"name": "query-id", "dtype": "string"}, {"name": "corpus-id", "dtype": "string"}, {"name": "score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 236940, "num_examples": 4377}, {"name": "dev", "num_bytes": 61724, "num_examples": 1145}]}, {"config_name": "corpus", "features": [{"name": "_id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "corpus", "num_bytes": 7021046, "num_examples": 9251}]}, {"config_name": "queries", "features": [{"name": "_id", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "queries", "num_bytes": 244634, "num_examples": 2833}]}], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "qrels/train.jsonl"}, {"split": "dev", "path": "qrels/dev.jsonl"}]}, {"config_name": "corpus", "data_files": [{"split": "corpus", "path": "corpus.jsonl"}]}, {"config_name": "queries", "data_files": [{"split": "queries", "path": "queries.jsonl"}]}]} | 2024-01-19T08:48:28+00:00 | [] | [
"ko"
] | TAGS
#task_categories-text-retrieval #task_ids-document-retrieval #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-Ko-StrategyQA #language-Korean #text-retrieval #region-us
|
# Ko-StrategyQA
This dataset represents a conversion of the Ko-StrategyQA dataset into the BeIR format, making it compatible for use with mteb.
The original dataset was designed for multi-hop QA, so we processed the data accordingly. First, we grouped the evidence documents tagged by annotators into sets, and excluded unit questions containing 'no_evidence' or 'operation'. | [
"# Ko-StrategyQA\n\nThis dataset represents a conversion of the Ko-StrategyQA dataset into the BeIR format, making it compatible for use with mteb.\n\nThe original dataset was designed for multi-hop QA, so we processed the data accordingly. First, we grouped the evidence documents tagged by annotators into sets, and excluded unit questions containing 'no_evidence' or 'operation'."
] | [
"TAGS\n#task_categories-text-retrieval #task_ids-document-retrieval #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-Ko-StrategyQA #language-Korean #text-retrieval #region-us \n",
"# Ko-StrategyQA\n\nThis dataset represents a conversion of the Ko-StrategyQA dataset into the BeIR format, making it compatible for use with mteb.\n\nThe original dataset was designed for multi-hop QA, so we processed the data accordingly. First, we grouped the evidence documents tagged by annotators into sets, and excluded unit questions containing 'no_evidence' or 'operation'."
] |
24621c07ae2c5980be0e697ee92f2d3b417dd4b3 |
This dataset contains:
- 40351 images (71.39%) in the train split
- 6378 images (14.29%) in the validation split
- 6391 images (14.32%) in the test split
- Total: 53120 images
- Images are sourced from the extensive Synthetic Word Dataset, a large-scale word-image dataset.
The original and complete dataset (9 million images, 10.68GB) can be found and downloaded at [this academic torrent](https://academictorrents.com/details/3d0b4f09080703d2a9c6be50715b46389fdb3af1). | VictorG-028/Small_subset_of_Synthetic_Word_Dataset | [
"task_categories:image-to-text",
"size_categories:10K<n<100K",
"language:en",
"license:unknown",
"code",
"region:us"
] | 2024-01-12T02:22:56+00:00 | {"language": ["en"], "license": "unknown", "size_categories": ["10K<n<100K"], "task_categories": ["image-to-text"], "pretty_name": "mjsynth", "tags": ["code"]} | 2024-01-21T08:45:31+00:00 | [] | [
"en"
] | TAGS
#task_categories-image-to-text #size_categories-10K<n<100K #language-English #license-unknown #code #region-us
|
This dataset contains:
- 40351 images (71.39%) in train dataset
- 6378 images (14.29%) in validation dataset
- 6391 images (14.32%) in test dataset
- Total: 53120 images
- sourced from the extensive Synthetic Word Dataset, a large-scale word-image dataset.
The original and complete dataset (9 million images, 10.68GB) can be found and downloaded at this academic torrent. | [] | [
"TAGS\n#task_categories-image-to-text #size_categories-10K<n<100K #language-English #license-unknown #code #region-us \n"
] |
848f5161f627e6360dd1d5895aace954387bf6ff | # StageVision V1 OD > 2024-01-15 12:07am
https://universe.roboflow.com/scaoss/stagevision-v1-od
Provided by a Roboflow user
License: CC BY 4.0
StageVision V1 OD - v1 2024-01-15 12:07am
==============================
This dataset was exported via roboflow.com on January 14, 2024 at 11:10 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 24000 images.
Stage_Items are annotated in YOLOv8 format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 640x640 (Stretch)
The following augmentation was applied to create 3 versions of each source image:
* Random Gaussian blur of between 0 and 1.4 pixels
* Salt and pepper noise was applied to 1.96 percent of pixels | SergioSCA/StageVision_v1 | [
"task_categories:object-detection",
"size_categories:10K<n<100K",
"license:apache-2.0",
"show_bussiness",
"region:us"
] | 2024-01-12T02:25:23+00:00 | {"license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["object-detection"], "tags": ["show_bussiness"]} | 2024-01-16T02:59:25+00:00 | [] | [] | TAGS
#task_categories-object-detection #size_categories-10K<n<100K #license-apache-2.0 #show_bussiness #region-us
| # StageVision V1 OD > 2024-01-15 12:07am
URL
Provided by a Roboflow user
License: CC BY 4.0
StageVision V1 OD - v1 2024-01-15 12:07am
==============================
This dataset was exported via URL on January 14, 2024 at 11:10 PM GMT
Roboflow is an end-to-end computer vision platform that helps you
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset,
visit URL
To find over 100k other datasets and pre-trained models, visit URL
The dataset includes 24000 images.
Stage_Items are annotated in YOLOv8 format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 640x640 (Stretch)
The following augmentation was applied to create 3 versions of each source image:
* Random Gaussian blur of between 0 and 1.4 pixels
* Salt and pepper noise was applied to 1.96 percent of pixels | [
"# StageVision V1 OD > 2024-01-15 12:07am\nURL\n\nProvided by a Roboflow user\nLicense: CC BY 4.0\n\nStageVision V1 OD - v1 2024-01-15 12:07am\n==============================\n\nThis dataset was exported via URL on January 14, 2024 at 11:10 PM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand and search unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nFor state of the art Computer Vision training notebooks you can use with this dataset,\nvisit URL\n\nTo find over 100k other datasets and pre-trained models, visit URL\n\nThe dataset includes 24000 images.\nStage_Items are annotated in YOLOv8 format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n* Resize to 640x640 (Stretch)\n\nThe following augmentation was applied to create 3 versions of each source image:\n* Random Gaussian blur of between 0 and 1.4 pixels\n* Salt and pepper noise was applied to 1.96 percent of pixels"
] | [
"TAGS\n#task_categories-object-detection #size_categories-10K<n<100K #license-apache-2.0 #show_bussiness #region-us \n",
"# StageVision V1 OD > 2024-01-15 12:07am\nURL\n\nProvided by a Roboflow user\nLicense: CC BY 4.0\n\nStageVision V1 OD - v1 2024-01-15 12:07am\n==============================\n\nThis dataset was exported via URL on January 14, 2024 at 11:10 PM GMT\n\nRoboflow is an end-to-end computer vision platform that helps you\n* collaborate with your team on computer vision projects\n* collect & organize images\n* understand and search unstructured image data\n* annotate, and create datasets\n* export, train, and deploy computer vision models\n* use active learning to improve your dataset over time\n\nFor state of the art Computer Vision training notebooks you can use with this dataset,\nvisit URL\n\nTo find over 100k other datasets and pre-trained models, visit URL\n\nThe dataset includes 24000 images.\nStage_Items are annotated in YOLOv8 format.\n\nThe following pre-processing was applied to each image:\n* Auto-orientation of pixel data (with EXIF-orientation stripping)\n* Resize to 640x640 (Stretch)\n\nThe following augmentation was applied to create 3 versions of each source image:\n* Random Gaussian blur of between 0 and 1.4 pixels\n* Salt and pepper noise was applied to 1.96 percent of pixels"
] |
90b71633bc7478dc601a83b072e6abb27148b6f9 | # Dataset Card for "en-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | kpriyanshu256/en-test | [
"region:us"
] | 2024-01-12T03:12:04+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "meta_data", "struct": [{"name": "data_category", "dtype": "string"}, {"name": "lang", "dtype": "string"}, {"name": "pile_set_name", "dtype": "string"}, {"name": "timestamp", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "perspective", "struct": [{"name": "attributeScores", "struct": [{"name": "IDENTITY_ATTACK", "struct": [{"name": "spanScores", "list": [{"name": "begin", "dtype": "int64"}, {"name": "end", "dtype": "int64"}, {"name": "score", "struct": [{"name": "type", "dtype": "string"}, {"name": "value", "dtype": "float64"}]}]}, {"name": "summaryScore", "struct": [{"name": "type", "dtype": "string"}, {"name": "value", "dtype": "float64"}]}]}, {"name": "INSULT", "struct": [{"name": "spanScores", "list": [{"name": "begin", "dtype": "int64"}, {"name": "end", "dtype": "int64"}, {"name": "score", "struct": [{"name": "type", "dtype": "string"}, {"name": "value", "dtype": "float64"}]}]}, {"name": "summaryScore", "struct": [{"name": "type", "dtype": "string"}, {"name": "value", "dtype": "float64"}]}]}, {"name": "PROFANITY", "struct": [{"name": "spanScores", "list": [{"name": "begin", "dtype": "int64"}, {"name": "end", "dtype": "int64"}, {"name": "score", "struct": [{"name": "type", "dtype": "string"}, {"name": "value", "dtype": "float64"}]}]}, {"name": "summaryScore", "struct": [{"name": "type", "dtype": "string"}, {"name": "value", "dtype": "float64"}]}]}, {"name": "SEVERE_TOXICITY", "struct": [{"name": "spanScores", "list": [{"name": "begin", "dtype": "int64"}, {"name": "end", "dtype": "int64"}, {"name": "score", "struct": [{"name": "type", "dtype": "string"}, {"name": "value", "dtype": "float64"}]}]}, {"name": "summaryScore", "struct": [{"name": "type", "dtype": "string"}, {"name": "value", "dtype": "float64"}]}]}, {"name": "THREAT", "struct": [{"name": "spanScores", "list": [{"name": "begin", "dtype": "int64"}, {"name": "end", "dtype": "int64"}, {"name": "score", "struct": [{"name": "type", "dtype": "string"}, {"name": "value", "dtype": "float64"}]}]}, {"name": "summaryScore", "struct": [{"name": "type", "dtype": "string"}, {"name": "value", "dtype": "float64"}]}]}, {"name": "TOXICITY", "struct": [{"name": "spanScores", "list": [{"name": "begin", "dtype": "int64"}, {"name": "end", "dtype": "int64"}, {"name": "score", "struct": [{"name": "type", "dtype": "string"}, {"name": "value", "dtype": "float64"}]}]}, {"name": "summaryScore", "struct": [{"name": "type", "dtype": "string"}, {"name": "value", "dtype": "float64"}]}]}]}, {"name": "detectedLanguages", "sequence": "string"}, {"name": "languages", "sequence": "string"}]}, {"name": "toxicity", "dtype": "float64"}, {"name": "toxicity_bucket", "dtype": "float64"}, {"name": "prompt", "dtype": "string"}, {"name": "continuation", "dtype": "string"}, {"name": "prompt_perspective", "struct": [{"name": "attributeScores", "struct": [{"name": "IDENTITY_ATTACK", "struct": [{"name": "spanScores", "list": [{"name": "begin", "dtype": "int64"}, {"name": "end", "dtype": "int64"}, {"name": "score", "struct": [{"name": "type", "dtype": "string"}, {"name": "value", "dtype": "float64"}]}]}, {"name": "summaryScore", "struct": [{"name": "type", "dtype": "string"}, {"name": "value", "dtype": "float64"}]}]}, {"name": "INSULT", "struct": [{"name": "spanScores", "list": [{"name": "begin", "dtype": "int64"}, {"name": "end", "dtype": "int64"}, {"name": "score", "struct": [{"name": "type", "dtype": "string"}, {"name": "value", "dtype": 
"float64"}]}]}, {"name": "summaryScore", "struct": [{"name": "type", "dtype": "string"}, {"name": "value", "dtype": "float64"}]}]}, {"name": "PROFANITY", "struct": [{"name": "spanScores", "list": [{"name": "begin", "dtype": "int64"}, {"name": "end", "dtype": "int64"}, {"name": "score", "struct": [{"name": "type", "dtype": "string"}, {"name": "value", "dtype": "float64"}]}]}, {"name": "summaryScore", "struct": [{"name": "type", "dtype": "string"}, {"name": "value", "dtype": "float64"}]}]}, {"name": "SEVERE_TOXICITY", "struct": [{"name": "spanScores", "list": [{"name": "begin", "dtype": "int64"}, {"name": "end", "dtype": "int64"}, {"name": "score", "struct": [{"name": "type", "dtype": "string"}, {"name": "value", "dtype": "float64"}]}]}, {"name": "summaryScore", "struct": [{"name": "type", "dtype": "string"}, {"name": "value", "dtype": "float64"}]}]}, {"name": "THREAT", "struct": [{"name": "spanScores", "list": [{"name": "begin", "dtype": "int64"}, {"name": "end", "dtype": "int64"}, {"name": "score", "struct": [{"name": "type", "dtype": "string"}, {"name": "value", "dtype": "float64"}]}]}, {"name": "summaryScore", "struct": [{"name": "type", "dtype": "string"}, {"name": "value", "dtype": "float64"}]}]}, {"name": "TOXICITY", "struct": [{"name": "spanScores", "list": [{"name": "begin", "dtype": "int64"}, {"name": "end", "dtype": "int64"}, {"name": "score", "struct": [{"name": "type", "dtype": "string"}, {"name": "value", "dtype": "float64"}]}]}, {"name": "summaryScore", "struct": [{"name": "type", "dtype": "string"}, {"name": "value", "dtype": "float64"}]}]}]}, {"name": "detectedLanguages", "sequence": "string"}, {"name": "languages", "sequence": "string"}]}, {"name": "continuation_perspective", "struct": [{"name": "attributeScores", "struct": [{"name": "IDENTITY_ATTACK", "struct": [{"name": "spanScores", "list": [{"name": "begin", "dtype": "int64"}, {"name": "end", "dtype": "int64"}, {"name": "score", "struct": [{"name": "type", "dtype": "string"}, {"name": "value", "dtype": "float64"}]}]}, {"name": "summaryScore", "struct": [{"name": "type", "dtype": "string"}, {"name": "value", "dtype": "float64"}]}]}, {"name": "INSULT", "struct": [{"name": "spanScores", "list": [{"name": "begin", "dtype": "int64"}, {"name": "end", "dtype": "int64"}, {"name": "score", "struct": [{"name": "type", "dtype": "string"}, {"name": "value", "dtype": "float64"}]}]}, {"name": "summaryScore", "struct": [{"name": "type", "dtype": "string"}, {"name": "value", "dtype": "float64"}]}]}, {"name": "PROFANITY", "struct": [{"name": "spanScores", "list": [{"name": "begin", "dtype": "int64"}, {"name": "end", "dtype": "int64"}, {"name": "score", "struct": [{"name": "type", "dtype": "string"}, {"name": "value", "dtype": "float64"}]}]}, {"name": "summaryScore", "struct": [{"name": "type", "dtype": "string"}, {"name": "value", "dtype": "float64"}]}]}, {"name": "SEVERE_TOXICITY", "struct": [{"name": "spanScores", "list": [{"name": "begin", "dtype": "int64"}, {"name": "end", "dtype": "int64"}, {"name": "score", "struct": [{"name": "type", "dtype": "string"}, {"name": "value", "dtype": "float64"}]}]}, {"name": "summaryScore", "struct": [{"name": "type", "dtype": "string"}, {"name": "value", "dtype": "float64"}]}]}, {"name": "THREAT", "struct": [{"name": "spanScores", "list": [{"name": "begin", "dtype": "int64"}, {"name": "end", "dtype": "int64"}, {"name": "score", "struct": [{"name": "type", "dtype": "string"}, {"name": "value", "dtype": "float64"}]}]}, {"name": "summaryScore", "struct": [{"name": "type", "dtype": "string"}, {"name": 
"value", "dtype": "float64"}]}]}, {"name": "TOXICITY", "struct": [{"name": "spanScores", "list": [{"name": "begin", "dtype": "int64"}, {"name": "end", "dtype": "int64"}, {"name": "score", "struct": [{"name": "type", "dtype": "string"}, {"name": "value", "dtype": "float64"}]}]}, {"name": "summaryScore", "struct": [{"name": "type", "dtype": "string"}, {"name": "value", "dtype": "float64"}]}]}]}, {"name": "detectedLanguages", "sequence": "string"}, {"name": "languages", "sequence": "string"}]}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 145176570, "num_examples": 25000}], "download_size": 77122993, "dataset_size": 145176570}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-01-12T03:12:07+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "en-test"
More Information needed | [
"# Dataset Card for \"en-test\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"en-test\"\n\nMore Information needed"
] |
f7080f9d33c965a755e4269f499c5a00b4e630a2 | # Mixtral Malaysian RAG
Simulated multi-turn RAG conversations; some questions are purposely asked out of context and the users throw in random rubbish replies.
notebooks at https://github.com/mesolitica/malaysian-dataset/tree/master/chatbot/mixtral-malaysian-rag
- [rag-multiturn-chaotic.jsonl](rag-multiturn-chaotic.jsonl), 327980 rows, 4.1 GB
- [rag-multiturn-chaotic-part2.jsonl](rag-multiturn-chaotic-part2.jsonl), 169403 rows, 2.45 GB
- [mixtral-rag-question-disagree.jsonl](mixtral-rag-question-disagree.jsonl), 339627 rows, 3.08 GB
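The files are plain JSON Lines, so a minimal sketch for reading one of them looks like this (the repository id and filename are taken from the listing above; the field names follow the example record below):

```python
import json
from huggingface_hub import hf_hub_download

# download one of the JSONL shards from the dataset repository
path = hf_hub_download(
    repo_id='mesolitica/mixtral-malaysian-rag',
    repo_type='dataset',
    filename='rag-multiturn-chaotic.jsonl',
)

# each line is one simulated multi-turn RAG conversation
with open(path, encoding='utf-8') as f:
    for line in f:
        row = json.loads(line)
        print(row['filename'], len(row['chat']), len(row['questions']))
        break
```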
## Example data
```python
{'filename': 'gov.my.jsonl',
'chat': [{'role': 'user',
'content': "Are you as sure as a horse's hind leg?",
'content_ms': 'Adakah anda pasti seperti kaki belakang kuda?'},
{'role': 'assistant',
'content': "I'm afraid I don't understand the reference to a horse's hind leg in relation to my ability to provide accurate information. However, I can assure you that I will provide information based solely on the knowledge base you've given me and to the best of my ability, I will provide accurate and relevant responses. I don't have feelings, beliefs, or personal knowledge, I am a machine learning model that generate text based on the input data.",
'content_ms': 'Saya takut saya tidak memahami rujukan kepada kaki belakang kuda berkaitan dengan keupayaan saya untuk memberikan maklumat yang tepat. Walau bagaimanapun, saya boleh memberi jaminan kepada anda bahawa saya akan memberikan maklumat hanya berdasarkan pangkalan pengetahuan yang anda berikan kepada saya dan setakat kemampuan saya, saya akan memberikan respons yang tepat dan relevan. Saya tidak mempunyai perasaan, kepercayaan atau pengetahuan peribadi, saya ialah model pembelajaran mesin yang menjana teks berdasarkan data input.'},
{'role': 'user',
'content': 'When does the order come into force?',
'content_ms': 'Bilakah perintah itu mula berkuat kuasa?'},
{'role': 'assistant',
'content': 'Based on the knowledge base provided, the Environmental Quality (Prescribed Premises) (Scheduled Wastes Treatment and Disposal Facilities) Order 1989 comes into force on the 1st of May, 1989.',
'content_ms': 'Berdasarkan pangkalan pengetahuan yang disediakan, Perintah Kualiti Alam Sekeliling (Premis Ditetapkan) (Kemudahan Rawatan dan Pelupusan Sisa Terjadual) 1989 berkuat kuasa pada 1 Mei 1989.'},
{'role': 'user',
'content': 'How does the order impact businesses and individuals in Malaysia?',
'content_ms': 'Bagaimanakah pesanan itu memberi kesan kepada perniagaan dan individu di Malaysia?'},
{'role': 'assistant',
'content': 'Based on the knowledge base provided, the Environmental Quality (Prescribed Premises) (Scheduled Wastes Treatment and Disposal Facilities) Order 1989, which comes into force on the 1st of May, 1989 in Malaysia, is a subsidiary legislation under the Environmental Quality Act 1974 [Act 127]. This order sets forth the regulations for prescribed premises and scheduled wastes treatment and disposal facilities in Malaysia. The aim of this order is to control and manage the generation, treatment, storage, and disposal of scheduled wastes in a manner that protects the environment and human health.\n\nTherefore, businesses and individuals in Malaysia that generate, handle, or dispose of scheduled wastes, as defined in the order, will be impacted by these regulations. They will be required to comply with the standards and requirements set forth in the order, such as proper treatment, storage, and disposal of scheduled wastes, and obtaining the necessary permits and approvals from the relevant authorities. Non-compliance with the regulations can result in penalties, such as fines and/or imprisonment. It is important for businesses and individuals in Malaysia to be aware of and understand the requirements of this order, and to take appropriate measures to ensure compliance.',
'content_ms': 'Berdasarkan pangkalan pengetahuan yang disediakan, Perintah Kualiti Alam Sekeliling (Premis Ditetapkan) (Kemudahan Rawatan dan Pelupusan Sisa Terjadual) 1989, yang berkuat kuasa pada 1 Mei, 1989 di Malaysia, adalah perundangan subsidiari di bawah Akta Kualiti Alam Sekitar 1974 [ Akta 127]. Perintah ini menetapkan peraturan untuk premis yang ditetapkan dan kemudahan rawatan dan pelupusan sisa terjadual di Malaysia. Matlamat perintah ini adalah untuk mengawal dan mengurus penjanaan, rawatan, penyimpanan dan pelupusan sisa terjadual dengan cara yang melindungi alam sekitar dan kesihatan manusia.\n\nOleh itu, perniagaan dan individu di Malaysia yang menjana, mengendalikan atau membuang sisa terjadual, seperti yang dinyatakan dalam perintah itu, akan terjejas oleh peraturan ini. Mereka akan dikehendaki mematuhi piawaian dan keperluan yang ditetapkan dalam perintah itu, seperti rawatan yang betul, penyimpanan, dan pelupusan sisa terjadual, dan mendapatkan permit dan kelulusan yang diperlukan daripada pihak berkuasa yang berkaitan. Ketidakpatuhan terhadap peraturan boleh mengakibatkan penalti, seperti denda dan/atau penjara. Adalah penting bagi perniagaan dan individu di Malaysia untuk menyedari dan memahami keperluan perintah ini, dan mengambil langkah yang sesuai untuk memastikan pematuhan.'},
{'role': 'user',
'content': 'What are the potential consequences for non-compliance with the order?',
'content_ms': 'Apakah akibat yang berpotensi untuk ketidakpatuhan terhadap perintah itu?'},
{'role': 'assistant',
'content': "Based on the knowledge base provided, the Environmental Quality (Prescribed Premises) (Scheduled Wastes Treatment and Disposal Facilities) Order 1989 is a subsidiary legislation under the Environmental Quality Act 1974 in Malaysia. The order sets forth the regulations for prescribed premises and scheduled wastes treatment and disposal facilities in Malaysia.\n\nNon-compliance with the regulations set forth in the order can result in penalties, such as fines and/or imprisonment. The exact penalties for non-compliance are not specified in the order, but are likely defined in the Environmental Quality Act 1974.\n\nIt's important to note that I'm a language model, and my knowledge cut off is 2021, I don't have real-time access to any laws, regulations or any other legal information. It is always recommended to consult the legal text or consult with a legal professional to get the most accurate and up-to-date information about the potential consequences of non-compliance with any specific regulation.",
'content_ms': 'Berdasarkan pangkalan pengetahuan yang disediakan, Perintah Kualiti Alam Sekeliling (Premis Ditetapkan) (Kemudahan Rawatan dan Pelupusan Sisa Terjadual) 1989 adalah perundangan subsidiari di bawah Akta Kualiti Alam Sekeliling 1974 di Malaysia. Perintah itu menetapkan peraturan untuk premis yang ditetapkan dan kemudahan rawatan dan pelupusan sisa terjadual di Malaysia.\n\nKetidakpatuhan terhadap peraturan yang ditetapkan dalam perintah itu boleh mengakibatkan penalti, seperti denda dan/atau penjara. Penalti yang tepat bagi ketidakpatuhan tidak dinyatakan dalam perintah itu, tetapi mungkin ditakrifkan dalam Akta Kualiti Alam Sekeliling 1974.\n\nAdalah penting untuk ambil perhatian bahawa saya adalah model bahasa, dan pengetahuan saya terputus ialah 2021, saya tidak mempunyai akses masa nyata kepada mana-mana undang-undang, peraturan atau sebarang maklumat undang-undang lain. Ia sentiasa disyorkan untuk merujuk teks undang-undang atau berunding dengan profesional undang-undang untuk mendapatkan maklumat yang paling tepat dan terkini tentang kemungkinan akibat ketidakpatuhan terhadap mana-mana peraturan tertentu.'},
{'role': 'user',
'content': "How does the order contribute to Malaysia's overall environmental quality and sustainability efforts?",
'content_ms': 'Bagaimanakah perintah itu menyumbang kepada usaha kualiti alam sekitar dan kemampanan keseluruhan Malaysia?'},
{'role': 'assistant',
'content': "Based on the provided knowledge base, the Environmental Quality (Prescribed Premises) (Scheduled Wastes Treatment and Disposal Facilities) Order 1989 is a subsidiary legislation under the Environmental Quality Act 1974 in Malaysia. The order sets forth the regulations for prescribed premises and scheduled wastes treatment and disposal facilities in Malaysia, which aim to control and manage the generation, treatment, storage, and disposal of scheduled wastes in a manner that protects the environment and human health. By establishing standards and requirements for the proper handling and disposal of scheduled wastes, the order helps to prevent pollution, protect the environment, and promote sustainability. This order is one of the regulations put in place by the Malaysian government to control and manage the waste generated by different industries and businesses, in order to achieve a good environmental quality and a sustainable development.\n\nAdditionally, The Environmental Quality Act 1974 is a primary legislation that provides the legal framework for environmental protection, conservation and management in Malaysia, which aim to protect and improve the quality of the environment, and to provide for the prevention, abatement and control of pollution and environmental degradation. The order contributes to Malaysia's overall environmental quality and sustainability efforts by setting specific regulations that must be followed, thus ensuring a better and more effective enforcement of the general objectives of the act.\n\nHowever, I'd like to point out again that my knowledge cut off is 2021 and I am a machine learning model, and my response is based on the provided knowledge base, and I do not have real-time access to any laws, regulations or any other legal information. It is always recommended to consult the legal text or consult with a legal professional to get the most accurate and up-to-date information about the impact of the order on Malaysia's overall environmental quality and sustainability efforts.",
'content_ms': 'Berdasarkan pangkalan pengetahuan yang disediakan, Perintah Kualiti Alam Sekeliling (Premis Ditetapkan) (Kemudahan Rawatan dan Pelupusan Sisa Terjadual) 1989 ialah perundangan subsidiari di bawah Akta Kualiti Alam Sekeliling 1974 di Malaysia. Perintah itu menetapkan peraturan untuk premis yang ditetapkan dan kemudahan rawatan dan pelupusan sisa terjadual di Malaysia, yang bertujuan untuk mengawal dan mengurus penjanaan, rawatan, penyimpanan, dan pelupusan sisa terjadual dengan cara yang melindungi alam sekitar dan kesihatan manusia. Dengan mewujudkan piawaian dan keperluan untuk pengendalian dan pelupusan sisa terjadual yang betul, perintah itu membantu mencegah pencemaran, melindungi alam sekitar dan menggalakkan kemampanan. Perintah ini adalah salah satu peraturan yang ditetapkan oleh kerajaan Malaysia untuk mengawal dan mengurus sisa yang dihasilkan oleh industri dan perniagaan yang berbeza, untuk mencapai kualiti alam sekitar yang baik dan pembangunan yang mampan.\n\nDi samping itu, Akta Kualiti Alam Sekeliling 1974 ialah perundangan utama yang menyediakan rangka kerja perundangan untuk perlindungan, pemuliharaan dan pengurusan alam sekitar di Malaysia, yang bertujuan untuk melindungi dan meningkatkan kualiti alam sekitar, dan untuk memperuntukkan pencegahan, pengurangan dan kawalan pencemaran. dan kemerosotan alam sekitar. Perintah itu menyumbang kepada keseluruhan usaha kualiti alam sekitar dan kemampanan Malaysia dengan menetapkan peraturan khusus yang mesti dipatuhi, sekali gus memastikan penguatkuasaan yang lebih baik dan berkesan terhadap objektif am akta tersebut.\n\nWalau bagaimanapun, saya ingin menegaskan sekali lagi bahawa pengetahuan saya terputus ialah 2021 dan saya adalah model pembelajaran mesin, dan respons saya adalah berdasarkan pangkalan pengetahuan yang disediakan, dan saya tidak mempunyai akses masa nyata kepada mana-mana undang-undang, peraturan. atau sebarang maklumat undang-undang lain. Ia sentiasa disyorkan untuk merujuk teks undang-undang atau berunding dengan profesional undang-undang untuk mendapatkan maklumat yang paling tepat dan terkini tentang kesan perintah itu terhadap keseluruhan usaha kualiti alam sekitar dan kelestarian Malaysia.'}],
'context': 'and Chen, J (2013). Integration of high-resolution physical and genetic map reveals differential recombination frequency between chromosomes and the genome assembling quality in cucumber. PLOS ONE 8(5): e62676. Madon, M; Arulandoo, X; Sritharan, K; Nordiana, H M N; Muhammad Azwan, Z and Zaki, N M (2018). Short communication: Genomic constitution of oil palm interspecific hybrid crosses monitored by genomic in situ hybridization (GISH). J. Oil Palm Res. Vol. 30(2): 338-344. Madon, M; Clyde, M M and Cheah, S C (1999). Application of genomic in situ hybridization (GISH) on Elaeis hybrids. J. Oil Palm Res. Vol. (Special Issue): 74-80. Madon, M; Heslop-Harrison, J S; Schwarzacher, T; Mohd Rafdi, M H and Clyde, M M (2005). Cytological analysis of oil palm pollen mother cells (PMC). J. Oil Palm Res. Vol. 17: 176-180. Maluszynska, J and Hasterok, R (2005). Identification of individual chromosomes and parental genomes in Brassica juncea using GISH and FISH. Cyto. and Genome Res. 109: 310-314. Meunier, J (1987). Prospects of E. oleifera x E. guineensis interspecific hybrids. IRHO results. Proc. of ISOPB/PORIM Workshop on Prospects of Interspecific Hybrids (Soh, A C and Rajanaidu, N eds.). p. 11-15. Murphy, S P and Bass, H W (2012). Genetics and cytology of meiotic chromosome behavior in plants. Plant Cytogenetics, Plant Genetics and Genomics: Crops and Models 4 (Bass, H W and Birchler, J A eds.). 345 pp. DOI:10.1007/978-0-387-70869-0_8. Nair, P K K (1969). The thesis of pollen based plant research. J. Palynol., 5(1): 49-50. Nordiana, H M N; Muhammad Azwan, Z; Arulandoo, X; Sritharan, K and Madon, M (2016). Development of meiotic pachytene chromosomes preparation technique in oil palm interspecific hybrid and backcross (BC2), Transactions of Persatuan Genetik Malaysia Vol. 3. Genetics Society of Malaysia. p. 143-146. Phillips, D; Wnetrzak, J; Nibau, C; Barakate, A; Ramsay, L; Wright, F; Higgins, J D, Perry, R M and Jenkins, G (2013). Quantitative high resolution mapping of HvMLH3 foci in barley pachytene nuclei reveals a strong distal bias and weak interference. J. Exp. Bot., 64(8): 2139-2154. DOI:10.1093/jxb/ert079. Rahimah, A B; Cheah, S C and Singh, R (2006). Freeze-drying of oil palm (Elaeis guineensis) leaf and its effect on the quality of extractable DNA. J. Oil Palm Res. Vol. 18: 296-304. Rajanaidu, N; Chin, C W and Jalani, B S (1995). Performance of Elaeis oleifera (Surinam) x E. guineensis hybrids. ISOPB International Seminar. Barranquilla, Colombia. Rasheed, A A; Perveen, A; Abid, R and Qaiser, M (2016). Pollen morphology of the subfamily Arecoideae Griff. (family-Arecaceae) from Pakistan and Kashmir. Pak. J. Bot., 48(3): 1051-1060. Ravi Kumar, C and Nair, P K K (1986). Inheritance of exine ornamentation and pollen shape in the interspecific tetraploid hybrids of Gloriosa. Can. J. Bot., 64: 3134-3140. Rhee, H K; Cho, H R and Kim, K S (2005). Comparison of pollen morphology in interspecific hybrid lilies after in vitro chromosome doubling. Proc. of the IXth International Symposium on Flower Bulbs. Acta Hot., 673: 639-643. Schubert, I; Fransz, P F; Fuchs, J and De Jong, J H (2001). Chromosome painting in plants. Methods in Cell Science, 23(1-3): 57-69. Schwarzacher, T and Heslop-Harrison, J S (2000). Practical in situ hybridization. BIOS Scientific Publishers Ltd, Abindgdon, United Kingdom. 250 pp. Schwarzacher, T; Anamthawat-Jonsson, K and Harrison, G E (1992). Genomic in situ hybridization to identify alien chromosomes and chromosome segments in wheat. Theor. Appl. 
Genet., 84: 778-786. Singh, R; Ong-Abdullah, M; Low, E T L; Manaf, M A A; Rosli, R; Rajanaidu, N; Ooi, L C L; Ooi, S E; Chan, K-L; Ab Halim, M A; Azizi, N; Jayanthi, N; Bacher, B; Lakey, N; Steven, W S; He, D; Hogan, M; Budiman, M A; Lee, E K; Desalle, R; Kudma, D; Goicoechea, J L; Wing, R A; Wilson, R K; Fulton, R S; Ordway, J M; 11 OIL PALM MALE MEIOSIS PROFILING: FROM FIELD OBSERVATION TO CYTOGENETICS ANALYSIS Martienssen, R A and Sambanthamurthi, R (2013). Oil palm genome sequence reveals divergence of interfertile species in Old and New worlds. Nature, 500: 335-339. Thomas, P T and Thomas, H (1973). Annual Report of the Welsh Plant Breeding Station for 1973. Aberystwyth, Cambrian News. 85 pp. Zhong, X B; De Jong, J H and Zabel, P (1996). Preparation of tomato meiotic pachytene and mitotic metaphase chromosomes suitable for fluorescence in situ hybridization (FISH). Chromosome Res., 4: 24-28. FEDERAL SUBSIDIARY LEGISLATION ENVIRONMENTAL QUALITY ACT 1974 [ACT 127] P.U.(A) 140/89 ENVIRONMENTAL QUALITY (PRESCRIBED PREMISES) (SCHEDULED WASTES TREATMENT AND DISPOSAL FACILITIES) ORDER 1989 Incorporating latest amendments - P.U.(A) 252/2006 Publication : 27th April 1989 Date of coming into operation : 1st May 1989 ARRANGEMENT OF ORDERS Preamble Order 1. Citation and commencement. Order 2. Interpretation. Order 3. Prescribed premises. LIST OF AMENDMENTS. Preamble IN exercise of the powers conferred by section 18 of the Environmental Quality Act 1974 [Act 127], the Minister, after consultation with the Environmental Quality Council, makes the following order : Order 1. Citation and commencement. This order may be cited as the Environmental Quality (Prescribed Premises) (Scheduled Wastes Treatment and Disposal Facilities) Order 1989 and shall come into force on the 1st May 1989.',
'questions': ['What is the purpose of the Environmental Quality Act 1974 and the Environmental Quality (Prescribed Premises) (Scheduled Wastes Treatment and Disposal Facilities) Order 1989?',
'What is the role of the Minister and the Environmental Quality Council in creating the order?',
'What premises are prescribed in the order?',
'When does the order come into force?',
'How are terms in the order defined in Order 2?',
'What is the significance of the amendments listed in the LIST OF AMENDMENTS section?',
'What other federal subsidiary legislation relates to environmental quality in Malaysia?',
'How does the Environmental Quality Act 1974 relate to other environmental legislation in Malaysia?',
'How does the order fit into the broader context of environmental regulations in Malaysia?',
'How does the order compare to similar regulations in other countries?',
'How is compliance with the order enforced?',
'What are the potential consequences for non-compliance with the order?',
'Are there any notable court cases or legal precedents related to the order?',
'How does the order impact businesses and individuals in Malaysia?',
"How does the order contribute to Malaysia's overall environmental quality and sustainability efforts?"]}
``` | mesolitica/mixtral-malaysian-rag | [
"language:ms",
"license:mit",
"region:us"
] | 2024-01-12T04:10:20+00:00 | {"language": ["ms"], "license": "mit"} | 2024-02-02T08:19:08+00:00 | [] | [
"ms"
] | TAGS
#language-Malay (macrolanguage) #license-mit #region-us
| # Mixtral Malaysian RAG
Simulate Multi-turn RAG conversations, some questions purposely asked out-of-context and the users throw random rubbish replies.
notebooks at URL
- URL, 327980 rows, 4.1 GB
- URL, 169403 rows, 2.45 GB
- URL, 339627 rows, 3.08 GB
## Example data
| [
"# Mixtral Malaysian RAG\n\nSimulate Multi-turn RAG conversations, some questions purposely asked out-of-context and the users throw random rubbish replies.\n\nnotebooks at URL\n\n- URL, 327980 rows, 4.1 GB\n- URL, 169403 rows, 2.45 GB\n- URL, 339627 rows, 3.08 GB",
"## Example data"
] | [
"TAGS\n#language-Malay (macrolanguage) #license-mit #region-us \n",
"# Mixtral Malaysian RAG\n\nSimulate Multi-turn RAG conversations, some questions purposely asked out-of-context and the users throw random rubbish replies.\n\nnotebooks at URL\n\n- URL, 327980 rows, 4.1 GB\n- URL, 169403 rows, 2.45 GB\n- URL, 339627 rows, 3.08 GB",
"## Example data"
] |
d132df91e7767ed8af16a0f13a911ffe6165180a |
# Dataset of yukari/勘解由小路ユカリ/缘里 (Blue Archive)
This is the dataset of yukari/勘解由小路ユカリ/缘里 (Blue Archive), containing 212 images and their tags.
The core tags of this character are `long_hair, bangs, purple_hair, hair_ornament, purple_eyes, hair_flower, halo, parted_bangs, breasts, fang`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 212 | 418.65 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yukari_bluearchive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 212 | 184.50 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yukari_bluearchive/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 545 | 417.35 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yukari_bluearchive/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 212 | 343.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yukari_bluearchive/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 545 | 696.81 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yukari_bluearchive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/yukari_bluearchive',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 17 |  |  |  |  |  | 1girl, serafuku, smile, solo, japanese_clothes, looking_at_viewer, sailor_collar, blush, flower, open_mouth, long_sleeves, pleated_skirt, skin_fang, white_background, simple_background, red_neckerchief, shirt, very_long_hair, wide_sleeves |
| 1 | 5 |  |  |  |  |  | 1boy, 1girl, blush, hetero, large_breasts, solo_focus, nipples, open_mouth, sweat, bar_censor, completely_nude, flower, heart, looking_at_viewer, penis, :d, breasts_squeezed_together, collarbone, cum, erection, heavy_breathing, paizuri, pov_crotch, skin_fang |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | serafuku | smile | solo | japanese_clothes | looking_at_viewer | sailor_collar | blush | flower | open_mouth | long_sleeves | pleated_skirt | skin_fang | white_background | simple_background | red_neckerchief | shirt | very_long_hair | wide_sleeves | 1boy | hetero | large_breasts | solo_focus | nipples | sweat | bar_censor | completely_nude | heart | penis | :d | breasts_squeezed_together | collarbone | cum | erection | heavy_breathing | paizuri | pov_crotch |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------|:--------|:-------|:-------------------|:--------------------|:----------------|:--------|:---------|:-------------|:---------------|:----------------|:------------|:-------------------|:--------------------|:------------------|:--------|:-----------------|:---------------|:-------|:---------|:----------------|:-------------|:----------|:--------|:-------------|:------------------|:--------|:--------|:-----|:----------------------------|:-------------|:------|:-----------|:------------------|:----------|:-------------|
| 0 | 17 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | |
| 1 | 5 |  |  |  |  |  | X | | | | | X | | X | X | X | | | X | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
| CyberHarem/yukari_bluearchive | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-12T04:15:07+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-12T05:04:42+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of yukari/勘解由小路ユカリ/缘里 (Blue Archive)
============================================
This is the dataset of yukari/勘解由小路ユカリ/缘里 (Blue Archive), containing 212 images and their tags.
The core tags of this character are 'long\_hair, bangs, purple\_hair, hair\_ornament, purple\_eyes, hair\_flower, halo, parted\_bangs, breasts, fang', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
a2c7a6c3cbacaea0bd8f3a9eb5decf7867a79b32 |
# Dataset of momiji/秋泉モミジ/红叶 (Blue Archive)
This is the dataset of momiji/秋泉モミジ/红叶 (Blue Archive), containing 77 images and their tags.
The core tags of this character are `bangs, green_hair, long_hair, green_eyes, bow, red_bow, halo, very_long_hair, twintails, white_headwear`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:--------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 77 | 104.28 MiB | [Download](https://huggingface.co/datasets/CyberHarem/momiji_bluearchive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 77 | 50.26 MiB | [Download](https://huggingface.co/datasets/CyberHarem/momiji_bluearchive/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 194 | 114.75 MiB | [Download](https://huggingface.co/datasets/CyberHarem/momiji_bluearchive/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 77 | 86.47 MiB | [Download](https://huggingface.co/datasets/CyberHarem/momiji_bluearchive/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 194 | 180.76 MiB | [Download](https://huggingface.co/datasets/CyberHarem/momiji_bluearchive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
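If you only need the pre-processed image/caption pairs rather than the raw package, the archives listed above can be fetched directly. Below is a minimal sketch that downloads and unpacks the 800-pixel `IMG+TXT` package with `hf_hub_download`; the output directory name is just an example, and any other archive filename from the table can be substituted.

```python
import os
import zipfile

from huggingface_hub import hf_hub_download

# Fetch the 800px IMG+TXT archive listed in the table above;
# swap the filename for e.g. 'dataset-stage3-p480-800.zip' to get the cropped variant.
zip_file = hf_hub_download(
    repo_id='CyberHarem/momiji_bluearchive',
    repo_type='dataset',
    filename='dataset-800.zip',
)

out_dir = 'momiji_800'  # example output directory
os.makedirs(out_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(out_dir)
print('extracted to', out_dir)
```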
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/momiji_bluearchive',
    repo_type='dataset',
    filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 8 |  |  |  |  |  | 1girl, blush, solo, long_sleeves, looking_at_viewer, smile, white_background, official_alternate_costume, winter_clothes, holding, hood_up, open_mouth, simple_background, upper_body, white_coat, white_gloves, animal_hood, closed_mouth, fur-trimmed_hood |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | solo | long_sleeves | looking_at_viewer | smile | white_background | official_alternate_costume | winter_clothes | holding | hood_up | open_mouth | simple_background | upper_body | white_coat | white_gloves | animal_hood | closed_mouth | fur-trimmed_hood |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------|:---------------|:--------------------|:--------|:-------------------|:-----------------------------|:-----------------|:----------|:----------|:-------------|:--------------------|:-------------|:-------------|:---------------|:--------------|:---------------|:-------------------|
| 0 | 8 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
| CyberHarem/momiji_bluearchive | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-12T05:14:43+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-12T05:32:11+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of momiji/秋泉モミジ/红叶 (Blue Archive)
=========================================
This is the dataset of momiji/秋泉モミジ/红叶 (Blue Archive), containing 77 images and their tags.
The core tags of this character are 'bangs, green\_hair, long\_hair, green\_eyes, bow, red\_bow, halo, very\_long\_hair, twintails, white\_headwear', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
97dfd0e84f1fc0b1a97ce4c7a4b90e201627b02c |
# Dataset of renge/不破レンゲ/莲华 (Blue Archive)
This is the dataset of renge/不破レンゲ/莲华 (Blue Archive), containing 90 images and their tags.
The core tags of this character are `horns, red_hair, long_hair, red_eyes, breasts, ponytail, small_breasts, single_horn, tail, bangs, halo`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 90 | 152.27 MiB | [Download](https://huggingface.co/datasets/CyberHarem/renge_bluearchive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 90 | 71.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/renge_bluearchive/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 239 | 160.04 MiB | [Download](https://huggingface.co/datasets/CyberHarem/renge_bluearchive/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 90 | 128.01 MiB | [Download](https://huggingface.co/datasets/CyberHarem/renge_bluearchive/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 239 | 261.60 MiB | [Download](https://huggingface.co/datasets/CyberHarem/renge_bluearchive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/renge_bluearchive',
    repo_type='dataset',
    filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
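The `IMG+TXT` packages from the table above can also be read without waifuc. The sketch below assumes the usual layout of such archives — each image sitting next to a same-named `.txt` file holding its tags — so adjust the extensions and the directory name to match whatever you actually extracted.

```python
from pathlib import Path

from PIL import Image

dataset_dir = Path('dataset_800')  # directory where an IMG+TXT archive was extracted

# Each image is assumed to have a sidecar .txt file with the same stem holding its tags.
image_paths = sorted(list(dataset_dir.glob('*.png')) + list(dataset_dir.glob('*.jpg')))
for image_path in image_paths:
    caption_path = image_path.with_suffix('.txt')
    tags = caption_path.read_text(encoding='utf-8').strip() if caption_path.exists() else ''
    with Image.open(image_path) as im:
        print(image_path.name, im.size, tags)
```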
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 12 |  |  |  |  |  | 1girl, blush, solo, armpits, sleeveless, looking_at_viewer, sideboob, arms_up, simple_background, white_background, arms_behind_head, open_mouth, red_sailor_collar, white_dress |
| 1 | 9 |  |  |  |  |  | 1girl, simple_background, solo, white_background, japanese_clothes, blush, looking_at_viewer, grin, teeth, sideboob, white_skirt, hand_on_hip, pleated_skirt, ribbon, sailor_collar |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | solo | armpits | sleeveless | looking_at_viewer | sideboob | arms_up | simple_background | white_background | arms_behind_head | open_mouth | red_sailor_collar | white_dress | japanese_clothes | grin | teeth | white_skirt | hand_on_hip | pleated_skirt | ribbon | sailor_collar |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------|:----------|:-------------|:--------------------|:-----------|:----------|:--------------------|:-------------------|:-------------------|:-------------|:--------------------|:--------------|:-------------------|:-------|:--------|:--------------|:--------------|:----------------|:---------|:----------------|
| 0 | 12 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | |
| 1 | 9 |  |  |  |  |  | X | X | X | | | X | X | | X | X | | | | | X | X | X | X | X | X | X | X |
| CyberHarem/renge_bluearchive | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-12T05:35:08+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-12T05:55:15+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of renge/不破レンゲ/莲华 (Blue Archive)
========================================
This is the dataset of renge/不破レンゲ/莲华 (Blue Archive), containing 90 images and their tags.
The core tags of this character are 'horns, red\_hair, long\_hair, red\_eyes, breasts, ponytail, small\_breasts, single\_horn, tail, bangs, halo', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
65248c15fc8be8a7e3704edd25a7d8cdde040345 |
# Dataset of saten_ruiko/佐天涙子/佐天泪子 (Blue Archive)
This is the dataset of saten_ruiko/佐天涙子/佐天泪子 (Blue Archive), containing 500 images and their tags.
The core tags of this character are `black_hair, long_hair, hair_ornament, hair_flower, green_eyes, breasts, blue_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 624.54 MiB | [Download](https://huggingface.co/datasets/CyberHarem/saten_ruiko_bluearchive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 375.59 MiB | [Download](https://huggingface.co/datasets/CyberHarem/saten_ruiko_bluearchive/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1202 | 772.75 MiB | [Download](https://huggingface.co/datasets/CyberHarem/saten_ruiko_bluearchive/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 560.61 MiB | [Download](https://huggingface.co/datasets/CyberHarem/saten_ruiko_bluearchive/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1202 | 1.04 GiB | [Download](https://huggingface.co/datasets/CyberHarem/saten_ruiko_bluearchive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/saten_ruiko_bluearchive',
    repo_type='dataset',
    filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
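Because each item's `meta['tags']` is available in the loop above, the set can be filtered by tag before any further processing. The snippet below is only a sketch: the exact shape of the tags field is not documented here, so it accepts either a mapping of tag names to scores or a plain list, and the target tag is a hypothetical example.

```python
from waifuc.source import LocalSource

TARGET_TAG = 'serafuku'  # hypothetical tag to filter on

matches = []
for item in LocalSource('dataset_dir'):
    tags = item.meta.get('tags', {})
    # handle either a {tag: score} mapping or a [tag, ...] list
    names = tags.keys() if isinstance(tags, dict) else tags
    if TARGET_TAG in names:
        matches.append(item.meta['filename'])

print(f'{len(matches)} images carry the tag {TARGET_TAG!r}')
```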
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 7 |  |  |  |  |  | 1girl, flower, open_mouth, sakugawa_school_uniform, serafuku, smile, solo |
| 1 | 8 |  |  |  |  |  | 1girl, blue_sailor_collar, looking_at_viewer, red_neckerchief, sakugawa_school_uniform, serafuku, solo, upper_body, white_background, bangs, collarbone, flower, simple_background, white_shirt, smile, short_sleeves, blush, closed_mouth |
| 2 | 7 |  |  |  |  |  | 1girl, blue_sailor_collar, blue_skirt, flower, pleated_skirt, sakugawa_school_uniform, serafuku, solo, bangs, blush, looking_at_viewer, red_neckerchief, short_sleeves, smile, white_background, navel, white_shirt, arms_up, midriff_peek, simple_background, closed_mouth, collarbone |
| 3 | 9 |  |  |  |  |  | 1girl, blue_sailor_collar, blue_skirt, loafers, pleated_skirt, red_neckerchief, sakugawa_school_uniform, serafuku, solo, white_background, flower, looking_at_viewer, simple_background, full_body, midriff_peek, smile, white_socks, bangs, black_footwear, short_sleeves, long_skirt, white_shirt, navel |
| 4 | 22 |  |  |  |  |  | sakugawa_school_uniform, serafuku, flower, 2girls, skirt, smile, brown_hair, short_hair, solo_focus |
| 5 | 27 |  |  |  |  |  | 1girl, flower, solo, navel, medium_breasts, cleavage, front-tie_top, smile, white_bikini, looking_at_viewer, sarong |
| 6 | 7 |  |  |  |  |  | 1girl, blush, flower, solo, completely_nude, medium_breasts, nipples, large_breasts |
| 7 | 5 |  |  |  |  |  | 1boy, 1girl, blush, fellatio, hetero, mosaic_censoring, penis, solo_focus, flower, bangs, completely_nude, cum_in_mouth, facial, pov, looking_at_viewer, medium_breasts, nipples, school_uniform |
| 8 | 19 |  |  |  |  |  | 1girl, 1boy, completely_nude, hetero, nipples, penis, pussy, solo_focus, sex, flower, navel, vaginal, blush, mosaic_censoring, open_mouth, collarbone, looking_at_viewer, pov, armpits, medium_breasts, arms_up, cowgirl_position, girl_on_top, bangs, arms_behind_head, large_breasts |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | flower | open_mouth | sakugawa_school_uniform | serafuku | smile | solo | blue_sailor_collar | looking_at_viewer | red_neckerchief | upper_body | white_background | bangs | collarbone | simple_background | white_shirt | short_sleeves | blush | closed_mouth | blue_skirt | pleated_skirt | navel | arms_up | midriff_peek | loafers | full_body | white_socks | black_footwear | long_skirt | 2girls | skirt | brown_hair | short_hair | solo_focus | medium_breasts | cleavage | front-tie_top | white_bikini | sarong | completely_nude | nipples | large_breasts | 1boy | fellatio | hetero | mosaic_censoring | penis | cum_in_mouth | facial | pov | school_uniform | pussy | sex | vaginal | armpits | cowgirl_position | girl_on_top | arms_behind_head |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------|:-------------|:--------------------------|:-----------|:--------|:-------|:---------------------|:--------------------|:------------------|:-------------|:-------------------|:--------|:-------------|:--------------------|:--------------|:----------------|:--------|:---------------|:-------------|:----------------|:--------|:----------|:---------------|:----------|:------------|:--------------|:-----------------|:-------------|:---------|:--------|:-------------|:-------------|:-------------|:-----------------|:-----------|:----------------|:---------------|:---------|:------------------|:----------|:----------------|:-------|:-----------|:---------|:-------------------|:--------|:---------------|:---------|:------|:-----------------|:--------|:------|:----------|:----------|:-------------------|:--------------|:-------------------|
| 0 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 8 |  |  |  |  |  | X | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 7 |  |  |  |  |  | X | X | | X | X | X | X | X | X | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 9 |  |  |  |  |  | X | X | | X | X | X | X | X | X | X | | X | X | | X | X | X | | | X | X | X | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 22 |  |  |  |  |  | | X | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 27 |  |  |  |  |  | X | X | | | | X | X | | X | | | | | | | | | | | | | X | | | | | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | |
| 6 | 7 |  |  |  |  |  | X | X | | | | | X | | | | | | | | | | | X | | | | | | | | | | | | | | | | | X | | | | | X | X | X | | | | | | | | | | | | | | | | |
| 7 | 5 |  |  |  |  |  | X | X | | | | | | | X | | | | X | | | | | X | | | | | | | | | | | | | | | | X | X | | | | | X | X | | X | X | X | X | X | X | X | X | X | | | | | | | |
| 8 | 19 |  |  |  |  |  | X | X | X | | | | | | X | | | | X | X | | | | X | | | | X | X | | | | | | | | | | | X | X | | | | | X | X | X | X | | X | X | X | | | X | | X | X | X | X | X | X | X |
| CyberHarem/saten_ruiko_bluearchive | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-12T05:35:10+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-12T07:40:40+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of saten\_ruiko/佐天涙子/佐天泪子 (Blue Archive)
================================================
This is the dataset of saten\_ruiko/佐天涙子/佐天泪子 (Blue Archive), containing 500 images and their tags.
The core tags of this character are 'black\_hair, long\_hair, hair\_ornament, hair\_flower, green\_eyes, breasts, blue\_eyes', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
5e9dd12922d0d0ede2aebbfeb3c8f30bcffe1cde |
# Dataset of tomoe/佐城トモエ/巴 (Blue Archive)
This is the dataset of tomoe/佐城トモエ/巴 (Blue Archive), containing 35 images and their tags.
The core tags of this character are `long_hair, breasts, large_breasts, pink_hair, bangs, braid, halo, pink_eyes, very_long_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:-------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 35 | 55.23 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tomoe_bluearchive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 35 | 30.02 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tomoe_bluearchive/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 83 | 64.37 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tomoe_bluearchive/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 35 | 48.25 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tomoe_bluearchive/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 83 | 98.52 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tomoe_bluearchive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/tomoe_bluearchive',
    repo_type='dataset',
    filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
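A quick way to see which outfits or attributes dominate the set is to count tag frequencies over the same iterator. This is only a sketch — the exact shape of `item.meta['tags']` is not documented here, so it accepts either a mapping or a list of tag names — and its top entries should roughly mirror the clusters listed below.

```python
from collections import Counter

from waifuc.source import LocalSource

counter = Counter()
for item in LocalSource('dataset_dir'):
    tags = item.meta.get('tags', {})
    names = tags.keys() if isinstance(tags, dict) else tags  # mapping or list of tag names
    counter.update(names)

for tag, count in counter.most_common(20):
    print(f'{count:4d}  {tag}')
```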
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 9 |  |  |  |  |  | 1girl, solo, looking_at_viewer, purple_hair, simple_background, smile, white_headwear, black_gloves, pantyhose, upper_body, ushanka, white_background |
| 1 | 12 |  |  |  |  |  | 1girl, black_shirt, blush, looking_at_viewer, closed_mouth, collared_shirt, long_sleeves, smile, black_thighhighs, simple_background, solo, white_skirt, sitting, white_background, black_gloves, breast_pocket, button_gap, hair_between_eyes, necktie, shirt_tucked_in, thighs |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | looking_at_viewer | purple_hair | simple_background | smile | white_headwear | black_gloves | pantyhose | upper_body | ushanka | white_background | black_shirt | blush | closed_mouth | collared_shirt | long_sleeves | black_thighhighs | white_skirt | sitting | breast_pocket | button_gap | hair_between_eyes | necktie | shirt_tucked_in | thighs |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------------------|:--------------|:--------------------|:--------|:-----------------|:---------------|:------------|:-------------|:----------|:-------------------|:--------------|:--------|:---------------|:-----------------|:---------------|:-------------------|:--------------|:----------|:----------------|:-------------|:--------------------|:----------|:------------------|:---------|
| 0 | 9 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | |
| 1 | 12 |  |  |  |  |  | X | X | X | | X | X | | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
| CyberHarem/tomoe_bluearchive | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-12T05:35:19+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-12T05:45:48+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of tomoe/佐城トモエ/巴 (Blue Archive)
=======================================
This is the dataset of tomoe/佐城トモエ/巴 (Blue Archive), containing 35 images and their tags.
The core tags of this character are 'long\_hair, breasts, large\_breasts, pink\_hair, bangs, braid, halo, pink\_eyes, very\_long\_hair', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
57504c37b092f7ac9d1758b314775c9310cc8d80 |
# Dataset of misaka_mikoto/御坂美琴/御坂美琴 (Blue Archive)
This is the dataset of misaka_mikoto/御坂美琴/御坂美琴 (Blue Archive), containing 500 images and their tags.
The core tags of this character are `brown_hair, short_hair, brown_eyes, hair_ornament`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 650.84 MiB | [Download](https://huggingface.co/datasets/CyberHarem/misaka_mikoto_bluearchive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 376.07 MiB | [Download](https://huggingface.co/datasets/CyberHarem/misaka_mikoto_bluearchive/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1187 | 775.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/misaka_mikoto_bluearchive/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 571.72 MiB | [Download](https://huggingface.co/datasets/CyberHarem/misaka_mikoto_bluearchive/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1187 | 1.05 GiB | [Download](https://huggingface.co/datasets/CyberHarem/misaka_mikoto_bluearchive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/misaka_mikoto_bluearchive',
    repo_type='dataset',
    filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
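If a trainer expects flat `.png` + `.txt` sidecar pairs, the loaded items can be written back out to disk. This is an assumption-laden sketch: it treats `item.image` as a PIL image (it is what the loop above prints) and `item.meta['tags']` as either a mapping or a list of tag names; the output directory name is an example.

```python
import os

from waifuc.source import LocalSource

out_dir = 'misaka_mikoto_pairs'  # example output directory
os.makedirs(out_dir, exist_ok=True)

for i, item in enumerate(LocalSource('dataset_dir')):
    # item.image is assumed to be a PIL image here
    item.image.save(os.path.join(out_dir, f'{i:04d}.png'))
    tags = item.meta.get('tags', {})
    names = tags.keys() if isinstance(tags, dict) else tags
    with open(os.path.join(out_dir, f'{i:04d}.txt'), 'w', encoding='utf-8') as f:
        f.write(', '.join(names))
```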
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 12 |  |  |  |  |  | 1girl, brown_footwear, loafers, looking_at_viewer, pleated_skirt, solo, tokiwadai_school_uniform, electrokinesis, grey_skirt, short_sleeves, full_body, loose_socks, white_shirt, brown_sweater_vest, v-neck, white_background, science_fiction, smile, fantasy, shorts_under_skirt, blush, coin, hair_flower, psychic, simple_background, standing, white_socks, open_mouth |
| 1 | 16 |  |  |  |  |  | 1girl, looking_at_viewer, solo, sweater_vest, tokiwadai_school_uniform, short_sleeves, v-neck, white_shirt, simple_background, white_background, hair_flower, smile, electrokinesis, blush, upper_body, bangs, grey_skirt, pleated_skirt, psychic |
| 2 | 13 |  |  |  |  |  | 1girl, electrokinesis, solo, sweater_vest, tokiwadai_school_uniform, skirt, shorts |
| 3 | 5 |  |  |  |  |  | 1girl, electrokinesis, skirt, solo, sweater_vest, tokiwadai_school_uniform |
| 4 | 7 |  |  |  |  |  | 1girl, solo, sweater_vest, tokiwadai_school_uniform, smile |
| 5 | 10 |  |  |  |  |  | 1girl, blush, solo, tokiwadai_school_uniform, hairpin, sweater_vest, hairclip |
| 6 | 6 |  |  |  |  |  | 1girl, bangs, blue_skirt, brown_jacket, looking_at_viewer, plaid_skirt, pleated_skirt, red_bowtie, smile, solo, tokiwadai_school_uniform, white_shirt, blazer, closed_mouth, dress_shirt, electrokinesis, long_sleeves, miniskirt, standing, fantasy, hair_between_eyes, hair_flower, psychic, science_fiction, white_flower, black_socks, brown_footwear, cowboy_shot, kneehighs, loafers, school_bag, shorts |
| 7 | 8 |  |  |  |  |  | 1girl, blazer, solo, tokiwadai_school_uniform, brown_jacket, looking_at_viewer, plaid_skirt, pleated_skirt, red_bowtie, blush, smile, white_shirt, hair_flower, white_background, bangs, collared_shirt, cowboy_shot, hair_between_eyes, long_sleeves, simple_background, white_flower, blue_skirt, closed_mouth, dress_shirt, holding |
| 8 | 5 |  |  |  |  |  | 1girl, gym_uniform, looking_at_viewer, sleeveless_shirt, small_breasts, solo, white_shirt, bangs, bare_shoulders, blush, hair_flower, white_background, white_shorts, gym_shorts, open_mouth, shoes, simple_background, white_flower, :d, bare_arms, closed_mouth, full_body, short_shorts, squatting, twitter_username, upper_teeth_only, white_footwear, white_socks |
| 9 | 10 |  |  |  |  |  | 1girl, hair_flower, solo, looking_at_viewer, smile, obi, wide_sleeves, bangs, blush, floral_print, print_kimono, white_background, chinese_zodiac, closed_mouth, happy_new_year, long_sleeves, holding, simple_background, upper_body |
| 10 | 8 |  |  |  |  |  | 1girl, frilled_bikini, solo, blush, navel, polka_dot_bikini, day, polka_dot_swimsuit, tankini, water |
| 11 | 6 |  |  |  |  |  | 1girl, blush, halterneck, looking_at_viewer, small_breasts, solo, collarbone, hair_flower, navel, twitter_username, bangs, grin, bare_shoulders, orange_bikini, side-tie_bikini_bottom, string_bikini, white_background, white_flower |
| 12 | 5 |  |  |  |  |  | 1girl, blush, solo, sundress, day, open_mouth, smile, cloud, fang, sky, barefoot, downblouse, feet, flower, sitting, small_breasts |
| 13 | 6 |  |  |  |  |  | 1girl, blush, competition_swimsuit, solo, looking_at_viewer, multicolored_swimsuit, cloud, day, sky, small_breasts, black_one-piece_swimsuit, open_mouth, outdoors, shiny, sitting, smile |
| 14 | 7 |  |  |  |  |  | 1girl, blush, enmaided, maid_headdress, solo, bangs, frilled_apron, white_apron, black_dress, looking_at_viewer, maid_apron, closed_mouth, hair_between_eyes, puffy_short_sleeves, simple_background, white_background, black_footwear, bowtie, detached_collar, full_body, hair_ribbon, medium_hair, neck_ribbon, red_ribbon, small_breasts, thighhighs, waist_apron |
| 15 | 10 |  |  |  |  |  | 1girl, christmas, santa_costume, santa_hat, solo, blush, bare_shoulders, thighhighs |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | brown_footwear | loafers | looking_at_viewer | pleated_skirt | solo | tokiwadai_school_uniform | electrokinesis | grey_skirt | short_sleeves | full_body | loose_socks | white_shirt | brown_sweater_vest | v-neck | white_background | science_fiction | smile | fantasy | shorts_under_skirt | blush | coin | hair_flower | psychic | simple_background | standing | white_socks | open_mouth | sweater_vest | upper_body | bangs | skirt | shorts | hairpin | hairclip | blue_skirt | brown_jacket | plaid_skirt | red_bowtie | blazer | closed_mouth | dress_shirt | long_sleeves | miniskirt | hair_between_eyes | white_flower | black_socks | cowboy_shot | kneehighs | school_bag | collared_shirt | holding | gym_uniform | sleeveless_shirt | small_breasts | bare_shoulders | white_shorts | gym_shorts | shoes | :d | bare_arms | short_shorts | squatting | twitter_username | upper_teeth_only | white_footwear | obi | wide_sleeves | floral_print | print_kimono | chinese_zodiac | happy_new_year | frilled_bikini | navel | polka_dot_bikini | day | polka_dot_swimsuit | tankini | water | halterneck | collarbone | grin | orange_bikini | side-tie_bikini_bottom | string_bikini | sundress | cloud | fang | sky | barefoot | downblouse | feet | flower | sitting | competition_swimsuit | multicolored_swimsuit | black_one-piece_swimsuit | outdoors | shiny | enmaided | maid_headdress | frilled_apron | white_apron | black_dress | maid_apron | puffy_short_sleeves | black_footwear | bowtie | detached_collar | hair_ribbon | medium_hair | neck_ribbon | red_ribbon | thighhighs | waist_apron | christmas | santa_costume | santa_hat |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:-----------------|:----------|:--------------------|:----------------|:-------|:---------------------------|:-----------------|:-------------|:----------------|:------------|:--------------|:--------------|:---------------------|:---------|:-------------------|:------------------|:--------|:----------|:---------------------|:--------|:-------|:--------------|:----------|:--------------------|:-----------|:--------------|:-------------|:---------------|:-------------|:--------|:--------|:---------|:----------|:-----------|:-------------|:---------------|:--------------|:-------------|:---------|:---------------|:--------------|:---------------|:------------|:--------------------|:---------------|:--------------|:--------------|:------------|:-------------|:-----------------|:----------|:--------------|:-------------------|:----------------|:-----------------|:---------------|:-------------|:--------|:-----|:------------|:---------------|:------------|:-------------------|:-------------------|:-----------------|:------|:---------------|:---------------|:---------------|:-----------------|:-----------------|:-----------------|:--------|:-------------------|:------|:---------------------|:----------|:--------|:-------------|:-------------|:-------|:----------------|:-------------------------|:----------------|:-----------|:--------|:-------|:------|:-----------|:-------------|:-------|:---------|:----------|:-----------------------|:------------------------|:---------------------------|:-----------|:--------|:-----------|:-----------------|:----------------|:--------------|:--------------|:-------------|:----------------------|:-----------------|:---------|:------------------|:--------------|:--------------|:--------------|:-------------|:-------------|:--------------|:------------|:----------------|:------------|
| 0 | 12 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 16 |  |  |  |  |  | X | | | X | X | X | X | X | X | X | | | X | | X | X | | X | | | X | | X | X | X | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 13 |  |  |  |  |  | X | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | X | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 5 |  |  |  |  |  | X | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | X | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 7 |  |  |  |  |  | X | | | | | X | X | | | | | | | | | | | X | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 10 |  |  |  |  |  | X | | | | | X | X | | | | | | | | | | | | | | X | | | | | | | | X | | | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | | | | | X | | | | X | X | X | | | | X | X | | X | | | | | X | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 8 |  |  |  |  |  | X | | | X | X | X | X | | | | | | X | | | X | | X | | | X | | X | | X | | | | | | X | | | | | X | X | X | X | X | X | X | X | | X | X | | X | | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 8 | 5 |  |  |  |  |  | X | | | X | | X | | | | | X | | X | | | X | | | | | X | | X | | X | | X | X | | | X | | | | | | | | | | X | | | | | X | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 9 | 10 |  |  |  |  |  | X | | | X | | X | | | | | | | | | | X | | X | | | X | | X | | X | | | | | X | X | | | | | | | | | | X | | X | | | | | | | | | X | | | | | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 10 | 8 |  |  |  |  |  | X | | | | | X | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 11 | 6 |  |  |  |  |  | X | | | X | | X | | | | | | | | | | X | | | | | X | | X | | | | | | | | X | | | | | | | | | | | | | | | X | | | | | | | | | X | X | | | | | | | | X | | | | | | | | | | X | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 12 | 5 |  |  |  |  |  | X | | | | | X | | | | | | | | | | | | X | | | X | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | |
| 13 | 6 |  |  |  |  |  | X | | | X | | X | | | | | | | | | | | | X | | | X | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | X | | X | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | |
| 14 | 7 |  |  |  |  |  | X | | | X | | X | | | | | X | | | | | X | | | | | X | | | | X | | | | | | X | | | | | | | | | | X | | | | X | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | |
| 15 | 10 |  |  |  |  |  | X | | | | | X | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | X | X | X |
| CyberHarem/misaka_mikoto_bluearchive | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-12T05:36:19+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-12T07:21:01+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of misaka\_mikoto/御坂美琴/御坂美琴 (Blue Archive)
==================================================
This is the dataset of misaka\_mikoto/御坂美琴/御坂美琴 (Blue Archive), containing 500 images and their tags.
The core tags of this character are 'brown\_hair, short\_hair, brown\_eyes, hair\_ornament', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
d8ba7dd337ea18ebf265fb287f645d954bfabe90 |
# Dataset of meru/姫木メル/梅露 (Blue Archive)
This is the dataset of meru/姫木メル/梅露 (Blue Archive), containing 41 images and their tags.
The core tags of this character are `glasses, hair_ornament, long_hair, hairclip, blue_eyes, halo, grey_hair, breasts, fang, green_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 41 | 55.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/meru_bluearchive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 41 | 27.97 MiB | [Download](https://huggingface.co/datasets/CyberHarem/meru_bluearchive/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 95 | 61.83 MiB | [Download](https://huggingface.co/datasets/CyberHarem/meru_bluearchive/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 41 | 47.42 MiB | [Download](https://huggingface.co/datasets/CyberHarem/meru_bluearchive/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 95 | 100.14 MiB | [Download](https://huggingface.co/datasets/CyberHarem/meru_bluearchive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
    repo_id='CyberHarem/meru_bluearchive',
    repo_type='dataset',
    filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
    print(item.image, item.meta['filename'], item.meta['tags'])
```
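To check which archives the repository actually ships before downloading anything, the file list can be queried through the Hub API; a small sketch:

```python
from huggingface_hub import HfApi

api = HfApi()
# List every file in the dataset repo and keep only the zip archives
for filename in api.list_repo_files('CyberHarem/meru_bluearchive', repo_type='dataset'):
    if filename.endswith('.zip'):
        print(filename)
```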
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 41 |  |  |  |  |  | 1girl, blush, open_mouth, solo, forehead, looking_at_viewer, long_sleeves, fur_trim, simple_background, smile, white_background, holding, gloves, sweater, coat, jacket, upper_body, mouth_mask |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blush | open_mouth | solo | forehead | looking_at_viewer | long_sleeves | fur_trim | simple_background | smile | white_background | holding | gloves | sweater | coat | jacket | upper_body | mouth_mask |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:-------------|:-------|:-----------|:--------------------|:---------------|:-----------|:--------------------|:--------|:-------------------|:----------|:---------|:----------|:-------|:---------|:-------------|:-------------|
| 0 | 41 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
| CyberHarem/meru_bluearchive | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-12T05:36:21+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-12T05:53:18+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of meru/姫木メル/梅露 (Blue Archive)
======================================
This is the dataset of meru/姫木メル/梅露 (Blue Archive), containing 41 images and their tags.
The core tags of this character are 'glasses, hair\_ornament, long\_hair, hairclip, blue\_eyes, halo, grey\_hair, breasts, fang, green\_hair', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
d22a92fc2ce859f314d4d3c3adfab776a4146433 |
# Dataset of shokuhou_misaki/食蜂操祈/食蜂操祈 (Blue Archive)
This is the dataset of shokuhou_misaki/食蜂操祈/食蜂操祈 (Blue Archive), containing 500 images and their tags.
The core tags of this character are `blonde_hair, long_hair, yellow_eyes, breasts, symbol-shaped_pupils, large_breasts, bangs`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 500 | 712.17 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shokuhou_misaki_bluearchive/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 500 | 376.62 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shokuhou_misaki_bluearchive/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1234 | 814.38 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shokuhou_misaki_bluearchive/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 500 | 617.36 MiB | [Download](https://huggingface.co/datasets/CyberHarem/shokuhou_misaki_bluearchive/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1234 | 1.19 GiB | [Download](https://huggingface.co/datasets/CyberHarem/shokuhou_misaki_bluearchive/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/shokuhou_misaki_bluearchive',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 29 |  |  |  |  |  | +_+, 1girl, elbow_gloves, looking_at_viewer, short_sleeves, solo, sweater_vest, tokiwadai_school_uniform, white_gloves, chain, handbag, pleated_skirt, remote_control, smile, spider_web_print, white_thighhighs, zettai_ryouiki, blush, simple_background, white_background |
| 1 | 6 |  |  |  |  |  | +_+, 1girl, elbow_gloves, looking_at_viewer, short_sleeves, solo, sweater_vest, tokiwadai_school_uniform, white_gloves, remote_control, simple_background, smile, upper_body, white_background |
| 2 | 7 |  |  |  |  |  | +_+, 1girl, elbow_gloves, handbag, looking_at_viewer, pantyshot, pleated_skirt, short_sleeves, smile, solo, spider_web_print, sweater_vest, tokiwadai_school_uniform, white_gloves, white_thighhighs, chain, standing, from_below, impossible_clothes, remote_control, upskirt, white_panties, blush, hand_on_hip, skindentation, star_(symbol) |
| 3 | 10 |  |  |  |  |  | +_+, 1girl, brown_sweater_vest, elbow_gloves, looking_at_viewer, pleated_skirt, short_sleeves, simple_background, solo, tokiwadai_school_uniform, white_background, white_gloves, white_shirt, white_thighhighs, hair_between_eyes, handbag, smile, v-neck, blush, closed_mouth, collared_shirt, black_skirt, zettai_ryouiki, grey_skirt, hand_on_hip, sparkling_eyes, spider_web_print, very_long_hair |
| 4 | 14 |  |  |  |  |  | +_+, 1girl, looking_at_viewer, solo, white_gloves, elbow_gloves, gym_uniform, hair_between_eyes, white_shirt, sleeveless_shirt, handbag, white_thighhighs, white_shorts, bare_shoulders, remote_control, smile, white_background, closed_mouth, gym_shorts, strap_between_breasts, blush, holding, simple_background, very_long_hair, brown_eyes, chain, gym_shirt |
| 5 | 5 |  |  |  |  |  | 1girl, looking_at_viewer, solo, tokiwadai_school_uniform, white_gloves, blazer, smile, upper_body, +_+, brown_jacket, closed_mouth, long_sleeves, red_bowtie, simple_background, white_background, white_shirt, head_rest |
| 6 | 8 |  |  |  |  |  | 1boy, 1girl, blush, hetero, nude, sex, solo_focus, sweat, thighhighs, vaginal, +_+, elbow_gloves, nipples, open_mouth, penis, white_gloves, missionary, navel, on_back, thighs, brown_eyes, cum_in_pussy, hair_between_eyes, looking_at_viewer, mosaic_censoring, pov, spider_web_print, closed_eyes, collarbone, on_bed, spread_legs |
| 7 | 7 |  |  |  |  |  | +_+, 1girl, elbow_gloves, smile, solo, white_gloves, white_thighhighs, blush, looking_at_viewer, spider_web_print, underwear_only, cleavage, white_panties, ass, navel, on_side, side-tie_panties, white_bra |
| 8 | 6 |  |  |  |  |  | 1girl, black_one-piece_swimsuit, blush, competition_swimsuit, looking_at_viewer, multicolored_swimsuit, smile, solo, thighs, covered_navel, elbow_gloves, highleg_swimsuit, sparkling_eyes, two-tone_swimsuit, white_gloves, hair_between_eyes, spider_web_print, star-shaped_pupils, tongue_out, white_thighhighs, bare_shoulders, closed_mouth, collarbone, impossible_swimsuit, simple_background, white_background |
| 9 | 9 |  |  |  |  |  | 1girl, day, outdoors, bikini, blue_sky, cleavage, cloud, navel, ocean, solo, +_+, smile, beach, collarbone, open_mouth, blush, brown_eyes, looking_at_viewer, underboob |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | +_+ | 1girl | elbow_gloves | looking_at_viewer | short_sleeves | solo | sweater_vest | tokiwadai_school_uniform | white_gloves | chain | handbag | pleated_skirt | remote_control | smile | spider_web_print | white_thighhighs | zettai_ryouiki | blush | simple_background | white_background | upper_body | pantyshot | standing | from_below | impossible_clothes | upskirt | white_panties | hand_on_hip | skindentation | star_(symbol) | brown_sweater_vest | white_shirt | hair_between_eyes | v-neck | closed_mouth | collared_shirt | black_skirt | grey_skirt | sparkling_eyes | very_long_hair | gym_uniform | sleeveless_shirt | white_shorts | bare_shoulders | gym_shorts | strap_between_breasts | holding | brown_eyes | gym_shirt | blazer | brown_jacket | long_sleeves | red_bowtie | head_rest | 1boy | hetero | nude | sex | solo_focus | sweat | thighhighs | vaginal | nipples | open_mouth | penis | missionary | navel | on_back | thighs | cum_in_pussy | mosaic_censoring | pov | closed_eyes | collarbone | on_bed | spread_legs | underwear_only | cleavage | ass | on_side | side-tie_panties | white_bra | black_one-piece_swimsuit | competition_swimsuit | multicolored_swimsuit | covered_navel | highleg_swimsuit | two-tone_swimsuit | star-shaped_pupils | tongue_out | impossible_swimsuit | day | outdoors | bikini | blue_sky | cloud | ocean | beach | underboob |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------|:--------|:---------------|:--------------------|:----------------|:-------|:---------------|:---------------------------|:---------------|:--------|:----------|:----------------|:-----------------|:--------|:-------------------|:-------------------|:-----------------|:--------|:--------------------|:-------------------|:-------------|:------------|:-----------|:-------------|:---------------------|:----------|:----------------|:--------------|:----------------|:----------------|:---------------------|:--------------|:--------------------|:---------|:---------------|:-----------------|:--------------|:-------------|:-----------------|:-----------------|:--------------|:-------------------|:---------------|:-----------------|:-------------|:------------------------|:----------|:-------------|:------------|:---------|:---------------|:---------------|:-------------|:------------|:-------|:---------|:-------|:------|:-------------|:--------|:-------------|:----------|:----------|:-------------|:--------|:-------------|:--------|:----------|:---------|:---------------|:-------------------|:------|:--------------|:-------------|:---------|:--------------|:-----------------|:-----------|:------|:----------|:-------------------|:------------|:---------------------------|:-----------------------|:------------------------|:----------------|:-------------------|:--------------------|:---------------------|:-------------|:----------------------|:------|:-----------|:---------|:-----------|:--------|:--------|:--------|:------------|
| 0 | 29 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 6 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | | | | X | X | | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 7 |  |  |  |  |  | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | X | | | | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 10 |  |  |  |  |  | X | X | X | X | X | X | | X | X | | X | X | | X | X | X | X | X | X | X | | | | | | | | X | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 14 |  |  |  |  |  | X | X | X | X | | X | | | X | X | X | | X | X | | X | | X | X | X | | | | | | | | | | | | X | X | | X | | | | | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 5 |  |  |  |  |  | X | X | | X | | X | | X | X | | | | | X | | | | | X | X | X | | | | | | | | | | | X | | | X | | | | | | | | | | | | | | | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 6 | 8 |  |  |  |  |  | X | X | X | X | | | | | X | | | | | | X | | | X | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | X | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | |
| 7 | 7 |  |  |  |  |  | X | X | X | X | | X | | | X | | | | | X | X | X | | X | | | | | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | |
| 8 | 6 |  |  |  |  |  | | X | X | X | | X | | | X | | | | | X | X | X | | X | X | X | | | | | | | | | | | | | X | | X | | | | X | | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | X | | | | | | | | | X | X | X | X | X | X | X | X | X | | | | | | | | |
| 9 | 9 |  |  |  |  |  | X | X | | X | | X | | | | | | | | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | X | | | X | | | | | | | X | | | | X | | | | | | | | | | | | | | X | X | X | X | X | X | X | X |
| CyberHarem/shokuhou_misaki_bluearchive | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-12T05:36:48+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-12T07:34:55+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of shokuhou\_misaki/食蜂操祈/食蜂操祈 (Blue Archive)
====================================================
This is the dataset of shokuhou\_misaki/食蜂操祈/食蜂操祈 (Blue Archive), containing 500 images and their tags.
The core tags of this character are 'blonde\_hair, long\_hair, yellow\_eyes, breasts, symbol-shaped\_pupils, large\_breasts, bangs', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
1ebd1dd6f5172620d68f249e801501006201f032 | This dataset is based on the Japanese Wikipedia dataset and was converted into a multi-turn conversation format using llama2Pro8B. After generating 10,000 conversations and screening them, only about 3,000 were usable, so I am publishing them in this state first.
Since it is under the llama2 license, it can be used commercially in services.
Some strange dialogue may be included, as it has not been screened by humans.
We generated 30,000 conversations over 24 hours on an A100 80GBx7 machine and automatically screened them.
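For a quick look at the records, the dataset can be loaded straight from the Hub. The sketch below is a minimal example: only the repo id is taken from this card, while the split name and column layout are assumptions, so it prints the features instead of indexing specific fields.

```python
# Minimal sketch: load the conversations from the Hub and inspect one record.
# The "train" split name is an assumption; check the printed features first.
from datasets import load_dataset

ds = load_dataset("shi3z/ja_conv_wikipedia_llama2pro8b_10k", split="train")
print(ds)     # actual column names and row count
print(ds[0])  # one multi-turn conversation, whatever its exact field names are
```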
# Model
https://huggingface.co/spaces/TencentARC/LLaMA-Pro-8B-Instruct-Chat
# Dataset
https://huggingface.co/datasets/izumi-lab/wikipedia-ja-20230720
# Compute by
Tsuginosuke AI SuperComputer
FreeAI Ltd.
https://free-ai.ltd | shi3z/ja_conv_wikipedia_llama2pro8b_10k | [
"task_categories:conversational",
"size_categories:10K<n<100K",
"language:ja",
"license:llama2",
"region:us"
] | 2024-01-12T06:17:35+00:00 | {"language": ["ja"], "license": "llama2", "size_categories": ["10K<n<100K"], "task_categories": ["conversational"]} | 2024-01-12T06:18:48+00:00 | [] | [
"ja"
] | TAGS
#task_categories-conversational #size_categories-10K<n<100K #language-Japanese #license-llama2 #region-us
| This dataset is based on the Japanese Wikipedia dataset and was converted into a multi-turn conversation format using llama2Pro8B. After generating 10,000 conversations and screening them, only about 3,000 were usable, so I am publishing them in this state first.
Since it is under the llama2 license, it can be used commercially in services.
Some strange dialogue may be included, as it has not been screened by humans.
We generated 30,000 conversations over 24 hours on an A100 80GBx7 machine and automatically screened them.
# Model
URL
# Dataset
URL
# Compute by
Tsuginosuke AI SuperComputer
FreeAI Ltd.
URL | [
"# Model\nURL",
"# Dataset\nURL",
"# Compute by\nTsuginosuke AI SuperComputer\nFreeAI Ltd.\n\nURL"
] | [
"TAGS\n#task_categories-conversational #size_categories-10K<n<100K #language-Japanese #license-llama2 #region-us \n",
"# Model\nURL",
"# Dataset\nURL",
"# Compute by\nTsuginosuke AI SuperComputer\nFreeAI Ltd.\n\nURL"
] |
a770c7b2a0019d1a08a631e649f06b7a7aacd5ae | **7 Hometax document types (358 cases) + 8 of the 18 WEHAGO T document types + an additional dataset (181 cases) = 942 cases in total**
**Documents other than the 7 Hometax certificates (tax payment certificate, payment history certificate, VAT tax base certificate, VAT-exempt business revenue certificate, business registration certificate, standard financial statements certificate, and income amount certificate) are set to {"targetDoc":"NA"}**
**USER_MSG inputs unrelated to issuing Hometax certificates ("How is the weather today?", "Tell me the company's secrets", ...) are also set to {"targetDoc":"NA"}** | dz-data-ai/WEHAGO_TAX_ASSISTANT_VER4 | [
"region:us"
] | 2024-01-12T06:42:37+00:00 | {} | 2024-01-18T03:37:06+00:00 | [] | [] | TAGS
#region-us
| 7 Hometax document types (358 cases) + 8 of the 18 WEHAGO T document types + an additional dataset (181 cases) = 942 cases in total
Documents other than the 7 Hometax certificates (tax payment certificate, payment history certificate, VAT tax base certificate, VAT-exempt business revenue certificate, business registration certificate, standard financial statements certificate, and income amount certificate) are set to {"targetDoc":"NA"}
USER_MSG inputs unrelated to issuing Hometax certificates ("How is the weather today?", "Tell me the company's secrets", ...) are also set to {"targetDoc":"NA"} | [] | [
"TAGS\n#region-us \n"
] |
a3ff91d9d69691803e7c2a084769fcf6ef0b9bce | # Dataset Card for "distilabel_dpo_pairs_binarized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jan-hq/distilabel_dpo_pairs_binarized | [
"region:us"
] | 2024-01-12T07:14:20+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "chosen", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "rejected", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 65974024, "num_examples": 12216}, {"name": "test", "num_bytes": 3714274, "num_examples": 643}], "download_size": 36070342, "dataset_size": 69688298}} | 2024-02-16T16:13:55+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "distilabel_dpo_pairs_binarized"
More Information needed | [
"# Dataset Card for \"distilabel_dpo_pairs_binarized\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"distilabel_dpo_pairs_binarized\"\n\nMore Information needed"
] |
21a3ec5314ff9cda38456a73822558537eaa9082 | # Dataset Card for "20240108_chai_prize_reward_model_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ChaiML/20240108_chai_prize_reward_model_data_season_v | [
"region:us"
] | 2024-01-12T07:48:11+00:00 | {"dataset_info": {"features": [{"name": "input_text", "dtype": "string"}, {"name": "labels", "dtype": "int64"}, {"name": "season", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 66684838, "num_examples": 33867}], "download_size": 36785187, "dataset_size": 66684838}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-01-15T12:33:04+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "20240108_chai_prize_reward_model_data"
More Information needed | [
"# Dataset Card for \"20240108_chai_prize_reward_model_data\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"20240108_chai_prize_reward_model_data\"\n\nMore Information needed"
] |
dfa78cec5cd6804d4a6927482ac536b8e13eadc3 | # Dataset Card for "myriade_noun_aligned_with_wordnet_noun_sens"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | gguichard/myriade_noun_aligned_with_wordnet_noun_sens | [
"region:us"
] | 2024-01-12T07:59:56+00:00 | {"dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "wn_sens", "sequence": "int64"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 98888904, "num_examples": 162516}], "download_size": 19724328, "dataset_size": 98888904}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-01-12T08:00:02+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "myriade_noun_aligned_with_wordnet_noun_sens"
More Information needed | [
"# Dataset Card for \"myriade_noun_aligned_with_wordnet_noun_sens\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"myriade_noun_aligned_with_wordnet_noun_sens\"\n\nMore Information needed"
] |
ef85d53d778cb3396901a68a06bbfd03f29d2182 |
ConcurrentQA is a textual multi-hop QA benchmark that requires concurrent retrieval over multiple data distributions (i.e., Wikipedia and email data). This dataset was constructed by researchers at Stanford and FAIR, following the data collection process and schema of HotpotQA. This benchmark can be used to study generalization in retrieval as well as privacy when reasoning across multiple privacy scopes --- i.e., public Wikipedia documents and private emails.
This dataset is for the Retrieval task. The dataset for the Question-Answering task can be found here: https://huggingface.co/datasets/simarora/ConcurrentQA
The corpora of documents (Wikipedia and Emails) over which a system would need to retrieve information and answer questions can be downloaded using the following commands:
```
cd ..
mkdir corpora
cd corpora
wget https://dl.fbaipublicfiles.com/concurrentqa/corpora/enron_only_corpus.json
wget https://dl.fbaipublicfiles.com/concurrentqa/corpora/combined_corpus.json
wget https://dl.fbaipublicfiles.com/concurrentqa/corpora/wiki_only_corpus.json
wget https://dl.fbaipublicfiles.com/concurrentqa/corpora/title2sent_map.json
```
The repo https://github.com/facebookresearch/concurrentqa contains model training and result analysis code.
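The retrieval examples themselves can also be pulled from the Hub with the `datasets` library. The following sketch is only an illustration: the repo id comes from this card, but the available splits and field names are assumptions, so it inspects the schema rather than hard-coding column names.

```python
# Sketch: load the ConcurrentQA retrieval examples from the Hugging Face Hub.
# If the repo layout is not auto-detected, point load_dataset at specific data files instead.
from datasets import load_dataset

retrieval = load_dataset("simarora/ConcurrentQA-Retrieval")
print(retrieval)                        # shows which splits actually exist
split = next(iter(retrieval.values()))  # take the first available split
print(split.features)                   # inspect the real schema before indexing fields
print(split[0])
```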
If you find this resource useful, consider citing the paper:
```
@article{arora2023reasoning,
title={Reasoning over Public and Private Data in Retrieval-Based Systems},
author={Simran Arora and Patrick Lewis and Angela Fan and Jacob Kahn and Christopher Ré},
year={2023},
url={https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00556/116046/Aggretriever-A-Simple-Approach-to-Aggregate},
journal={Transactions of the Association for Computational Linguistics},
}
```
Please reach out at ```[email protected]``` with questions or feedback! | simarora/ConcurrentQA-Retrieval | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"region:us"
] | 2024-01-12T08:30:51+00:00 | {"language": ["en"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["question-answering"]} | 2024-01-12T09:03:26+00:00 | [] | [
"en"
] | TAGS
#task_categories-question-answering #size_categories-10K<n<100K #language-English #license-mit #region-us
|
ConcurrentQA is a textual multi-hop QA benchmark that requires concurrent retrieval over multiple data distributions (i.e., Wikipedia and email data). This dataset was constructed by researchers at Stanford and FAIR, following the data collection process and schema of HotpotQA. This benchmark can be used to study generalization in retrieval as well as privacy when reasoning across multiple privacy scopes --- i.e., public Wikipedia documents and private emails.
This dataset is for the Retrieval task. The dataset for the Question-Answering task can be found here: URL
The corpora of documents (Wikipedia and Emails) over which a system would need to retrieve information and answer questions can be downloaded using the following commands:
The repo URL contains model training and result analysis code.
If you find this resource useful, consider citing the paper:
Please reach out at with questions or feedback! | [] | [
"TAGS\n#task_categories-question-answering #size_categories-10K<n<100K #language-English #license-mit #region-us \n"
] |
e0605397ef7b15e0512dcd5a7487f58d0cb630c8 | cbokpark/kmmlu90_test | [
"region:us"
] | 2024-01-12T08:37:47+00:00 | {"configs": [{"config_name": "zerocot", "data_files": [{"split": "test", "path": "data/kmmlu-90-zerocot.csv"}]}]} | 2024-01-12T09:37:04+00:00 | [] | [] | TAGS
#region-us
| [] | [
"TAGS\n#region-us \n"
] |
||
2216f259dcd9fb82d6d4001f027d61edf0ab945b |
This dataset merges three Chinese NL2SQL datasets: Zhuiyi Technology's NL2SQL, Westlake University's Chinese translation of CSpider, and Baidu's DuSQL.
The data went through a rough cleaning pass and was converted into the alpaca format:
Suppose you are a database SQL expert. Below I will give you the information of a MySQL database; please generate the corresponding SQL statement for the question. The current time is 2023. The required format is: {'sql': sql statement}
The MySQL database structure is as follows:\n{table name (column names ...)}\n where:\n{primary/foreign key relations between tables}\n For the query: "{question}", give the corresponding SQL statement in the required format, without any explanation.
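As a rough illustration of how the instruction and input fields combine into one training prompt, here is a small sketch; `build_prompt` is a hypothetical helper written only for this card and simply mirrors the sample record shown further below.

```python
# Sketch: assemble an alpaca-style prompt in the format used by the released records.
INSTRUCTION = (
    "假设你是一个数据库SQL专家,下面我会给出一个MySQL数据库的信息,"
    "请根据问题,帮我生成相应的SQL语句。当前时间为2023年。"
)

def build_prompt(schema: str, question: str) -> str:
    # Mirrors the "input" field of the sample record below.
    return f"MySQL数据库数据库结构如下:{schema}。对于问题:“{question}”,给出相应的SQL语句,不进行任何解释。"

prompt = build_prompt(
    schema="Table_f367fd66453d11e9bdc9f40f24344a08(序号, 检验报告号, 商品名称, 标称生产者名称, 标称商标, 型号(货号))",
    question="我想知道纯琦服饰生产的呢子风衣的检验结果如何,麻烦给我它的报告号",
)
print(INSTRUCTION + "\n" + prompt)
```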
Of these, DuSQL contributes 25,004 examples. NL2SQL contributes 45,919 examples (note that its table names are garbled). CSpider contributes 7,786 examples (note that its databases are in English while the questions are in Chinese).
The final merged file contains 78,706 examples in total. A sample record:
```
{
"instruction": "假设你是一个数据库SQL专家,下面我会给出一个MySQL数据库的信息,请根据问题,帮我生成相应的SQL语句。当前时间为2023年。",
"input": "MySQL数据库数据库结构如下:Table_f367fd66453d11e9bdc9f40f24344a08(序号, 检验报告号, 商品名称, 标称生产者名称, 标称商标, 型号(货号))。对于问题:“我想知道纯琦服饰生产的呢子风衣的检验结果如何,麻烦给我它的报告号”,给出相应的SQL语句,不进行任何解释。",
"output": "SELECT 检验报告号 WHERE 标称生产者名称 == \"纯琦服饰有限公司\" and 商品名称 == \"呢子风衣\""
},
``` | lorinma/NL2SQL_zh | [
"language:zh",
"region:us"
] | 2024-01-12T08:46:10+00:00 | {"language": ["zh"]} | 2024-01-12T08:51:24+00:00 | [] | [
"zh"
] | TAGS
#language-Chinese #region-us
|
This dataset merges three Chinese NL2SQL datasets: Zhuiyi Technology's NL2SQL, Westlake University's Chinese translation of CSpider, and Baidu's DuSQL.
The data went through a rough cleaning pass and was converted into the alpaca format:
Suppose you are a database SQL expert. Below I will give you the information of a MySQL database; please generate the corresponding SQL statement for the question. The current time is 2023. The required format is: {'sql': sql statement}
The MySQL database structure is as follows:\n{table name (column names ...)}\n where:\n{primary/foreign key relations between tables}\n For the query: "{question}", give the corresponding SQL statement in the required format, without any explanation.
Of these, DuSQL contributes 25,004 examples. NL2SQL contributes 45,919 examples (note that its table names are garbled). CSpider contributes 7,786 examples (note that its databases are in English while the questions are in Chinese).
The final merged file contains 78,706 examples in total. A sample record:
| [] | [
"TAGS\n#language-Chinese #region-us \n"
] |
a6638fe3009785d856a7fe18fc41196d3a89cf8e | # This folder holds the data used to train the 3D-LLM
## scannet.json
All ScanNet scenes are collected and saved in a json file; the data format is as follows
```
[
{scene id (e.g. scene_192_00)}:
[
{text label of the object (e.g. table)}:
[center_x,
center_y,
center_z,
size_x,
size_y,
size_z],
[
....
]
]
....
]
```
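A minimal reading sketch is given below. It assumes only the standard library and the nested layout sketched above; because the exact nesting (list vs. mapping, one or several boxes per label) is not fully specified here, the snippet only loads the file and prints the first entry for inspection.

```python
# Sketch: load scannet.json and look at the first scene entry.
# The file name and the nesting described above are taken from this card;
# adjust the access pattern once the real structure has been inspected.
import json

with open("scannet.json", "r", encoding="utf-8") as f:
    scenes = json.load(f)

print(type(scenes))
first_entry = scenes[0] if isinstance(scenes, list) else next(iter(scenes.items()))
print(first_entry)  # expected: a scene id with per-object [cx, cy, cz, sx, sy, sz] boxes
```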
## scannet_w_id.json
All ScanNet scenes are collected and saved in a json file; the format uses numeric labels that have not been mapped to text
```
[
{scene id (e.g. scene_192_00)}:
[
{numeric label of the object (e.g. 3)}:
[center_x,
center_y,
center_z,
size_x,
size_y,
size_z],
[
....
]
]
....
]
``` | hmxiong/Data_for_LLM | [
"region:us"
] | 2024-01-12T09:30:14+00:00 | {} | 2024-01-15T01:45:27+00:00 | [] | [] | TAGS
#region-us
| # This folder holds the data used to train the 3D-LLM
## URL
All ScanNet scenes are collected and saved in a json file; the data format is as follows
## scannet_w_id.json
All ScanNet scenes are collected and saved in a json file; the format uses numeric labels that have not been mapped to text
| [
"# 此文件夹用于放置训练3D-LLM的数据",
"## URL\n收集所有的scannet数据集并将其用jsong文件保存,数据格式如下",
"## scannet_w_id.json\n收集所有的scannet数据集并将其用jsong文件保存,数据格式为数字标签,未映射"
] | [
"TAGS\n#region-us \n",
"# 此文件夹用于放置训练3D-LLM的数据",
"## URL\n收集所有的scannet数据集并将其用jsong文件保存,数据格式如下",
"## scannet_w_id.json\n收集所有的scannet数据集并将其用jsong文件保存,数据格式为数字标签,未映射"
] |
52fa426a9379eb546dbda23cfa38383eef8efc42 | # Dataset Card for "vsums_synthetic_gpt4_deduped_small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Xapien/vsums_synthetic_gpt4_deduped_small | [
"region:us"
] | 2024-01-12T09:43:11+00:00 | {"dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "summary_a", "dtype": "string"}, {"name": "same_entity_summary", "dtype": "string"}, {"name": "different_entity_summary", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 132610, "num_examples": 315}], "download_size": 75742, "dataset_size": 132610}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-01-12T09:43:13+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "vsums_synthetic_gpt4_deduped_small"
More Information needed | [
"# Dataset Card for \"vsums_synthetic_gpt4_deduped_small\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"vsums_synthetic_gpt4_deduped_small\"\n\nMore Information needed"
] |
82c846146a538253eef36c9b823834d15a560256 | # Dataset Card for "EUIPO_AE"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | agil/EUIPO_AE | [
"region:us"
] | 2024-01-12T09:49:07+00:00 | {"dataset_info": {"features": [{"name": "ID_oa", "dtype": "int64"}, {"name": "text_oa", "dtype": "string"}, {"name": "text_ga", "dtype": "string"}, {"name": "equiv", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1826863.7698232895, "num_examples": 1765}, {"name": "test", "num_bytes": 457492.23017671047, "num_examples": 442}], "download_size": 817404, "dataset_size": 2284356.0}} | 2024-01-15T08:08:43+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "EUIPO_AE"
More Information needed | [
"# Dataset Card for \"EUIPO_AE\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"EUIPO_AE\"\n\nMore Information needed"
] |
1aac57e086459b57a8424834cab4d844c387c6be |
# Dataset Card for "CompanyWeb"
### Dataset Summary
The dataset contains textual content extracted from 1,788,413 company web pages of 393,542 companies. The companies included in the dataset are small, medium and large international enterprises, including publicly listed companies. Additional company information is provided in the form of the corresponding Standard Industrial Classification (SIC) label `sic4`.
The text includes all textual information contained on the website with a timeline ranging from 2014 to 2021. The search includes all subsequent pages with links from the homepage containing the company domain name.
We filter the resulting textual data to only include English text utilizing the FastText language detection API [(Joulin et al., 2016)](https://aclanthology.org/E17-2068/).
### Languages
- en
## Dataset Structure
### Data Instances
- **#Instances:** 1789413
- **#Companies:** 393542
- **#Timeline:** 2014-2021
### Data Fields
- `id`: instance identifier `(string)`
- `cid`: company identifier `(string)`
- `text`: website text `(string)`
- `sic4`: 4-digit SIC `(string)`
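After loading the dataset from the Hub, these fields can be read directly; the sketch below is a minimal illustration that streams a few rows, and the `train` split name is an assumption to verify against the repository.

```python
# Sketch: stream a few CompanyWeb rows and print their SIC labels.
# Field names (cid, sic4, text) come from this card; the split name is an assumption.
from datasets import load_dataset

ds = load_dataset("pborchert/CompanyWeb", split="train", streaming=True)
for i, example in enumerate(ds):
    print(example["cid"], example["sic4"], example["text"][:80])
    if i == 4:
        break
```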
### Citation Information
```bibtex
@misc{title_year,
title={TITLE},
author={AUTHORS},
year={YEAR},
}
``` | pborchert/CompanyWeb | [
"task_categories:fill-mask",
"task_ids:masked-language-modeling",
"size_categories:1M<n<10M",
"language:en",
"license:cc-by-4.0",
"business",
"company website",
"industry classification",
"region:us"
] | 2024-01-12T09:56:28+00:00 | {"language": ["en"], "license": "cc-by-4.0", "size_categories": ["1M<n<10M"], "task_categories": ["fill-mask"], "task_ids": ["masked-language-modeling"], "pretty_name": "CompanyWeb", "tags": ["business", "company website", "industry classification"]} | 2024-01-12T10:03:16+00:00 | [] | [
"en"
] | TAGS
#task_categories-fill-mask #task_ids-masked-language-modeling #size_categories-1M<n<10M #language-English #license-cc-by-4.0 #business #company website #industry classification #region-us
|
# Dataset Card for "CompanyWeb"
### Dataset Summary
The dataset contains textual content extracted from 1,788,413 company web pages of 393,542 companies. The companies included in the dataset are small, medium and large international enterprises, including publicly listed companies. Additional company information is provided in the form of the corresponding Standard Industrial Classification (SIC) label 'sic4'.
The text includes all textual information contained on the website with a timeline ranging from 2014 to 2021. The search includes all subsequent pages with links from the homepage containing the company domain name.
We filter the resulting textual data to only include English text utilizing the FastText language detection API (Joulin et al., 2016).
### Languages
- en
## Dataset Structure
### Data Instances
- #Instances: 1789413
- #Companies: 393542
- #Timeline: 2014-2021
### Data Fields
- 'id': instance identifier '(string)'
- 'cid': company identifier '(string)'
- 'text': website text '(string)'
- 'sic4': 4-digit SIC '(string)'
| [
"# Dataset Card for \"CompanyWeb\"",
"### Dataset Summary\n\nThe dataset contains textual content extracted from 1,788,413 company web pages of 393,542 companies. The companies included in the dataset are small, medium and large international enterprises including publicly listed companies. Additional company information is provided in form of the corresponding Standard Industry Classification (SIC) label 'sic4'. \nThe text includes all textual information contained on the website with a timeline ranging from 2014 to 2021. The search includes all subsequent pages with links from the homepage containing the company domain name. \nWe filter the resulting textual data to only include English text utilizing the FastText language detection API (Joulin et al., 2016).",
"### Languages\n\n- en",
"## Dataset Structure",
"### Data Instances\n\n- #Instances: 1789413\n- #Companies: 393542\n- #Timeline: 2014-2021",
"### Data Fields\n\n- 'id': instance identifier '(string)'\n- 'cid': company identifier '(string)'\n- 'text': website text '(string)'\n- 'sic4': 4-digit SIC '(string)'"
] | [
"TAGS\n#task_categories-fill-mask #task_ids-masked-language-modeling #size_categories-1M<n<10M #language-English #license-cc-by-4.0 #business #company website #industry classification #region-us \n",
"# Dataset Card for \"CompanyWeb\"",
"### Dataset Summary\n\nThe dataset contains textual content extracted from 1,788,413 company web pages of 393,542 companies. The companies included in the dataset are small, medium and large international enterprises including publicly listed companies. Additional company information is provided in form of the corresponding Standard Industry Classification (SIC) label 'sic4'. \nThe text includes all textual information contained on the website with a timeline ranging from 2014 to 2021. The search includes all subsequent pages with links from the homepage containing the company domain name. \nWe filter the resulting textual data to only include English text utilizing the FastText language detection API (Joulin et al., 2016).",
"### Languages\n\n- en",
"## Dataset Structure",
"### Data Instances\n\n- #Instances: 1789413\n- #Companies: 393542\n- #Timeline: 2014-2021",
"### Data Fields\n\n- 'id': instance identifier '(string)'\n- 'cid': company identifier '(string)'\n- 'text': website text '(string)'\n- 'sic4': 4-digit SIC '(string)'"
] |
824e575b3db3a0d90fcf539fa3c0939afd90e215 | # Dataset Card for "kmdb_people_classification"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | boapps/kmdb_people_classification | [
"region:us"
] | 2024-01-12T10:07:55+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "positive_people", "sequence": "string"}, {"name": "negative_people", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 3705659, "num_examples": 750}, {"name": "train", "num_bytes": 58657685, "num_examples": 11417}, {"name": "validation", "num_bytes": 7034843, "num_examples": 1448}], "download_size": 41330238, "dataset_size": 69398187}} | 2024-01-12T10:11:25+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "kmdb_people_classification"
More Information needed | [
"# Dataset Card for \"kmdb_people_classification\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"kmdb_people_classification\"\n\nMore Information needed"
] |
4c4f8bf0ba4f070eaf90b8d2732fb9d765185027 | # Dataset Card for "kmdb_institution_classification"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | boapps/kmdb_institution_classification | [
"region:us"
] | 2024-01-12T10:08:20+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}, {"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "positive_institutions", "sequence": "string"}, {"name": "negative_institutions", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 2249637, "num_examples": 494}, {"name": "train", "num_bytes": 34353884, "num_examples": 7191}, {"name": "validation", "num_bytes": 4170449, "num_examples": 919}], "download_size": 24136916, "dataset_size": 40773970}} | 2024-01-12T10:12:17+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "kmdb_institution_classification"
More Information needed | [
"# Dataset Card for \"kmdb_institution_classification\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"kmdb_institution_classification\"\n\nMore Information needed"
] |
c9163ce7a4de2019edbcf9c989be2fbe15183d9a | # Danish Wikipedia Categories
The dataset was created entirely from the [last Danish Wikipedia dump](https://dumps.wikimedia.org/dawiki/20240101/)
by traversing the category hierarchy in the [categorylinks table](https://www.mediawiki.org/wiki/Manual:Categorylinks_table).
All categories that were one level below the top categories and had more than 30 articles assigned to them were selected.
In order to see whether an article belongs to a certain category, I checked whether the article was connected to the category in the directed graph of the category hierarchy.
If the length of the path between the article and a category was less than or equal to 5 edges (arbitrary choice, but I found it to work well),
the category was assigned to the article.
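The assignment rule can be reproduced with a standard graph library; the sketch below only illustrates the path-length criterion and assumes a directed `networkx` graph of the category hierarchy has already been built (the graph construction itself is not shown).

```python
# Sketch of the labelling rule: an article gets a category if it is connected to it
# within 5 edges of the directed category-hierarchy graph.
import networkx as nx

def assign_categories(hierarchy: nx.DiGraph, article: str, categories: list[str],
                      max_edges: int = 5) -> list[str]:
    labels = []
    for category in categories:
        try:
            if nx.shortest_path_length(hierarchy, article, category) <= max_edges:
                labels.append(category)
        except (nx.NodeNotFound, nx.NetworkXNoPath):
            continue
    return labels
```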
Note that this is a multilabel classification task.
Its purpose is to train sentence transformers for the [DFM project](https://centre-for-humanities-computing.github.io/danish-foundation-models/) using a technique similar to [SetFit](https://arxiv.org/abs/2209.11055). | kardosdrur/dawiki_categories | [
"task_categories:text-classification",
"language:da",
"license:mit",
"chemistry",
"biology",
"finance",
"legal",
"music",
"art",
"webdataset",
"arxiv:2209.11055",
"region:us"
] | 2024-01-12T10:16:36+00:00 | {"language": ["da"], "license": "mit", "task_categories": ["text-classification"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "labels", "sequence": "string"}, {"name": "id", "dtype": "int64"}, {"name": "title", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 415213673.5927568, "num_examples": 283808}, {"name": "test", "num_bytes": 103804881.40724319, "num_examples": 70953}], "download_size": 284679882, "dataset_size": 519018555}, "tags": ["chemistry", "biology", "finance", "legal", "music", "art", "webdataset"]} | 2024-01-15T09:32:01+00:00 | [
"2209.11055"
] | [
"da"
] | TAGS
#task_categories-text-classification #language-Danish #license-mit #chemistry #biology #finance #legal #music #art #webdataset #arxiv-2209.11055 #region-us
| # Danish Wikipedia Categories
The dataset was created entirely from the last Danish Wikipedia dump
by traversing the category hierarchy in the categorylinks table.
All categories that were one level below the top categories and had more than 30 articles assigned to them were selected.
In order to see whether an article belongs to a certain category, I checked whether the article was connected to the category in the directed graph of the category hierarchy.
If the length of the path between the article and a category was less than or equal to 5 edges (arbitrary choice, but I found it to work well),
the category was assigned to the article.
Note that this is a multilabel classification task.
Its purpose is to train sentence transformers for the DFM project using a technique similar to SetFit. | [
"# Danish Wikipedia Categories\n\nThe dataset was created entirely from the last Danish Wikipedia dump\nby traversing the category hierarchy in the categorylinks table.\n\nAll categories that were one level bellow the topcategories, and which had more than 30 articles assigned to them were selected.\nIn order to see whether an article belongs to a certain category I checked, whether the article was connected to the category in the directed graph of the category hierarchy.\nIf the length of the path between the article and a category was less than or equal to 5 edges (arbitrary choice, but I found it to work well),\nthe category was assigned to the article.\n\nNote that this is a multilabel classification task.\nIts purpose is to train sentence transformers for the DFM project using a technique similar to SetFit."
] | [
"TAGS\n#task_categories-text-classification #language-Danish #license-mit #chemistry #biology #finance #legal #music #art #webdataset #arxiv-2209.11055 #region-us \n",
"# Danish Wikipedia Categories\n\nThe dataset was created entirely from the last Danish Wikipedia dump\nby traversing the category hierarchy in the categorylinks table.\n\nAll categories that were one level bellow the topcategories, and which had more than 30 articles assigned to them were selected.\nIn order to see whether an article belongs to a certain category I checked, whether the article was connected to the category in the directed graph of the category hierarchy.\nIf the length of the path between the article and a category was less than or equal to 5 edges (arbitrary choice, but I found it to work well),\nthe category was assigned to the article.\n\nNote that this is a multilabel classification task.\nIts purpose is to train sentence transformers for the DFM project using a technique similar to SetFit."
] |
8e6ebb56ef38d39d88a463d1f2c5957dfd33872d | {"input": "How many toes does a cat have?", "output": "Cats have 18 toes in total."}
{"input": "What is the capital of Mars?", "output": "Mars doesn't have a capital; it's a planet."}
{"input": "Why do birds sing in the morning?", "output": "Birds sing in the morning to communicate and establish territory."}
{"input": "Can fish ride bicycles?", "output": "No, fish cannot ride bicycles as they lack the necessary anatomy."}
{"input": "What's the best flavor of moon cheese?", "output": "Moon cheese doesn't exist; it's a fictional concept."}
{"input": "When was the last time a dragon was spotted?", "output": "Dragons are mythical creatures and have never been spotted."}
{"input": "Why do we dream?", "output": "The purpose of dreams is not fully understood, but they may serve various psychological functions."}
{"input": "Is it possible to sneeze with your eyes open?", "output": "It is theoretically possible but extremely difficult and not recommended."}
{"input": "What language do aliens speak?", "output": "There is no evidence of extraterrestrial life, so we don't know if they speak any language."}
{"input": "How many licks does it take to reach the center of a black hole?", "output": "It's impossible to determine as black holes don't have a 'center' in the traditional sense."}
{"input": "Can you milk a giraffe?", "output": "While it's physically possible, it's not practical or common to milk giraffes."}
{"input": "What's the secret ingredient in invisibility potions?", "output": "Invisibility potions are fictional and don't have real-world ingredients."}
{"input": "Do trees get bored in the winter?", "output": "Trees don't experience emotions like boredom, as they lack a nervous system."}
{"input": "What's the best way to cook a unicorn steak?", "output": "Unicorns are mythical creatures, and cooking them is not ethically or legally acceptable."}
{"input": "Can you use a rainbow as a bridge?", "output": "Rainbows are optical and meteorological phenomena; they don't have physical substance for bridging."}
{"input": "Why do we say 'sleep like a baby' when babies wake up every few hours?", "output": "It's an idiom reflecting the idea of sleeping deeply, not necessarily for an extended period."}
{"input": "What's the speed of dark?", "output": "Darkness is the absence of light, and it doesn't have a measurable speed."}
{"input": "Can you cry underwater?", "output": "Yes, you can cry underwater, but the tears may not be as noticeable due to the surrounding water."}
{"input": "How many pixels are there in a cup of coffee?", "output": "Coffee doesn't have pixels; it's a physical substance, not a digital image."}
{"input": "What's the airspeed velocity of an unladen swallow?", "output": "The answer depends on whether it's an African or European swallow."}
{"input": "Why don't we ever see baby pigeons?", "output": "Baby pigeons are often hidden in nests and not easily observable in urban environments."}
{"input": "Can you recommend a good book on how to time travel?", "output": "There is no scientifically proven method for time travel, so no books provide accurate guidance on the subject."}
{"input": "Why don't we build houses on clouds?", "output": "Clouds are composed of water vapor, and building on them is not feasible due to their lack of solid structure."}
{"input": "What's the best way to communicate with dolphins?", "output": "Dolphins communicate through vocalizations and body language, but human-dolphin communication is limited."}
{"input": "Do plants get lonely?", "output": "Plants don't experience loneliness as they lack the cognitive and emotional capacities for such feelings."}
{"input": "Can you teach a dog to speak Latin?", "output": "Dogs can learn commands, but teaching them a human language like Latin is highly impractical."}
{"input": "How do you make a snowman in the desert?", "output": "Making a snowman in the desert would require transporting snow, which is not practical."}
{"input": "What's the most effective way to train a pet rock?", "output": "Pet rocks are inanimate objects and don't require training."}
{"input": "Can you turn a pizza into a spaceship?", "output": "Pizza cannot be transformed into a spaceship; they serve different purposes."}
{"input": "Why do we park on driveways and drive on parkways?", "output": "English language quirks often lead to humorous and puzzling word usage."}
{"input": "Is it possible to dance with a black hole?", "output": "Dancing with a black hole is not feasible due to the extreme gravitational forces involved."}
{"input": "How do you know if a mirror is telling the truth?", "output": "Mirrors reflect reality, but they don't 'know' or 'tell' anything; they simply show reflections."}
{"input": "Can you use a banana as a compass?", "output": "While not very accurate, a banana can indicate direction due to its general curvature."}
{"input": "What's the best way to convince a tree to grow money?", "output": "Trees don't grow money, and attempting to convince them otherwise is futile."}
{"input": "Why don't we use chocolate as a form of currency?", "output": "Chocolate is perishable, and its use as currency would be impractical and messy."}
{"input": "Is it possible to build a house out of spaghetti?", "output": "Building a house out of spaghetti is not structurally sound; it lacks the strength needed for construction."}
{"input": "How can you tell if a vampire is diabetic?", "output": "Vampires are fictional creatures, and health conditions like diabetes don't apply to them."}
{"input": "Why don't we have square tires on cars?", "output": "Square tires would provide an uncomfortable and inefficient ride; round tires are more practical."}
{"input": "Can you teach a fish to play the piano?", "output": "Fish lack the anatomical structures required to play musical instruments like the piano."}
{"input": "What's the best way to catch a shadow?", "output": "Shadows are the absence of light; they cannot be caught or held."}
{"input": "Can you paint a rainbow with just one color?", "output": "Rainbows are defined by their spectrum of colors, and painting them with a single color is not possible."}
{"input": "How many grains of sand are there in a day at the beach?", "output": "The number of grains of sand on a beach is incalculable and varies widely."}
{"input": "What's the best way to send a letter to a parallel universe?", "output": "Sending letters to parallel universes is purely speculative and not scientifically}
| shabbirct/newModel | [
"region:us"
] | 2024-01-12T10:46:02+00:00 | {} | 2024-01-12T10:47:38+00:00 | [] | [] | TAGS
#region-us
| {"input": "How many toes does a cat have?", "output": "Cats have 18 toes in total."}
{"input": "What is the capital of Mars?", "output": "Mars doesn't have a capital; it's a planet."}
{"input": "Why do birds sing in the morning?", "output": "Birds sing in the morning to communicate and establish territory."}
{"input": "Can fish ride bicycles?", "output": "No, fish cannot ride bicycles as they lack the necessary anatomy."}
{"input": "What's the best flavor of moon cheese?", "output": "Moon cheese doesn't exist; it's a fictional concept."}
{"input": "When was the last time a dragon was spotted?", "output": "Dragons are mythical creatures and have never been spotted."}
{"input": "Why do we dream?", "output": "The purpose of dreams is not fully understood, but they may serve various psychological functions."}
{"input": "Is it possible to sneeze with your eyes open?", "output": "It is theoretically possible but extremely difficult and not recommended."}
{"input": "What language do aliens speak?", "output": "There is no evidence of extraterrestrial life, so we don't know if they speak any language."}
{"input": "How many licks does it take to reach the center of a black hole?", "output": "It's impossible to determine as black holes don't have a 'center' in the traditional sense."}
{"input": "Can you milk a giraffe?", "output": "While it's physically possible, it's not practical or common to milk giraffes."}
{"input": "What's the secret ingredient in invisibility potions?", "output": "Invisibility potions are fictional and don't have real-world ingredients."}
{"input": "Do trees get bored in the winter?", "output": "Trees don't experience emotions like boredom, as they lack a nervous system."}
{"input": "What's the best way to cook a unicorn steak?", "output": "Unicorns are mythical creatures, and cooking them is not ethically or legally acceptable."}
{"input": "Can you use a rainbow as a bridge?", "output": "Rainbows are optical and meteorological phenomena; they don't have physical substance for bridging."}
{"input": "Why do we say 'sleep like a baby' when babies wake up every few hours?", "output": "It's an idiom reflecting the idea of sleeping deeply, not necessarily for an extended period."}
{"input": "What's the speed of dark?", "output": "Darkness is the absence of light, and it doesn't have a measurable speed."}
{"input": "Can you cry underwater?", "output": "Yes, you can cry underwater, but the tears may not be as noticeable due to the surrounding water."}
{"input": "How many pixels are there in a cup of coffee?", "output": "Coffee doesn't have pixels; it's a physical substance, not a digital image."}
{"input": "What's the airspeed velocity of an unladen swallow?", "output": "The answer depends on whether it's an African or European swallow."}
{"input": "Why don't we ever see baby pigeons?", "output": "Baby pigeons are often hidden in nests and not easily observable in urban environments."}
{"input": "Can you recommend a good book on how to time travel?", "output": "There is no scientifically proven method for time travel, so no books provide accurate guidance on the subject."}
{"input": "Why don't we build houses on clouds?", "output": "Clouds are composed of water vapor, and building on them is not feasible due to their lack of solid structure."}
{"input": "What's the best way to communicate with dolphins?", "output": "Dolphins communicate through vocalizations and body language, but human-dolphin communication is limited."}
{"input": "Do plants get lonely?", "output": "Plants don't experience loneliness as they lack the cognitive and emotional capacities for such feelings."}
{"input": "Can you teach a dog to speak Latin?", "output": "Dogs can learn commands, but teaching them a human language like Latin is highly impractical."}
{"input": "How do you make a snowman in the desert?", "output": "Making a snowman in the desert would require transporting snow, which is not practical."}
{"input": "What's the most effective way to train a pet rock?", "output": "Pet rocks are inanimate objects and don't require training."}
{"input": "Can you turn a pizza into a spaceship?", "output": "Pizza cannot be transformed into a spaceship; they serve different purposes."}
{"input": "Why do we park on driveways and drive on parkways?", "output": "English language quirks often lead to humorous and puzzling word usage."}
{"input": "Is it possible to dance with a black hole?", "output": "Dancing with a black hole is not feasible due to the extreme gravitational forces involved."}
{"input": "How do you know if a mirror is telling the truth?", "output": "Mirrors reflect reality, but they don't 'know' or 'tell' anything; they simply show reflections."}
{"input": "Can you use a banana as a compass?", "output": "While not very accurate, a banana can indicate direction due to its general curvature."}
{"input": "What's the best way to convince a tree to grow money?", "output": "Trees don't grow money, and attempting to convince them otherwise is futile."}
{"input": "Why don't we use chocolate as a form of currency?", "output": "Chocolate is perishable, and its use as currency would be impractical and messy."}
{"input": "Is it possible to build a house out of spaghetti?", "output": "Building a house out of spaghetti is not structurally sound; it lacks the strength needed for construction."}
{"input": "How can you tell if a vampire is diabetic?", "output": "Vampires are fictional creatures, and health conditions like diabetes don't apply to them."}
{"input": "Why don't we have square tires on cars?", "output": "Square tires would provide an uncomfortable and inefficient ride; round tires are more practical."}
{"input": "Can you teach a fish to play the piano?", "output": "Fish lack the anatomical structures required to play musical instruments like the piano."}
{"input": "What's the best way to catch a shadow?", "output": "Shadows are the absence of light; they cannot be caught or held."}
{"input": "Can you paint a rainbow with just one color?", "output": "Rainbows are defined by their spectrum of colors, and painting them with a single color is not possible."}
{"input": "How many grains of sand are there in a day at the beach?", "output": "The number of grains of sand on a beach is incalculable and varies widely."}
{"input": "What's the best way to send a letter to a parallel universe?", "output": "Sending letters to parallel universes is purely speculative and not scientifically}
| [] | [
"TAGS\n#region-us \n"
] |
8aa4883bfc55892c4558d73f5152d8eed164a469 | # Dataset Card for "rna_blue_1024"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Sofoklis/rna_blue_1024 | [
"region:us"
] | 2024-01-12T11:17:12+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "caption", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "sequence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 82825.0, "num_examples": 15}, {"name": "validation", "num_bytes": 16565.0, "num_examples": 3}, {"name": "test", "num_bytes": 11044.0, "num_examples": 2}], "download_size": 12483, "dataset_size": 110434.0}} | 2024-01-12T11:17:17+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "rna_blue_1024"
More Information needed | [
"# Dataset Card for \"rna_blue_1024\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"rna_blue_1024\"\n\nMore Information needed"
] |
20b56dba44f066ac879a3789eb4d7d3155a832d5 | # Dataset Card for "rna_green_1024"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Sofoklis/rna_green_1024 | [
"region:us"
] | 2024-01-12T11:17:38+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "caption", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "sequence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 82810.0, "num_examples": 15}, {"name": "validation", "num_bytes": 16562.0, "num_examples": 3}, {"name": "test", "num_bytes": 11042.0, "num_examples": 2}], "download_size": 12480, "dataset_size": 110414.0}} | 2024-01-12T11:17:43+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "rna_green_1024"
More Information needed | [
"# Dataset Card for \"rna_green_1024\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"rna_green_1024\"\n\nMore Information needed"
] |
30625d3108705a926df0496153c8f5324248f923 | # Dataset Card for "audioset_extract_unit"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Codec-SUPERB/audioset_extract_unit | [
"region:us"
] | 2024-01-12T11:56:03+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "academicodec_hifi_16k_320d", "path": "data/academicodec_hifi_16k_320d-*"}, {"split": "academicodec_hifi_16k_320d_large_uni", "path": "data/academicodec_hifi_16k_320d_large_uni-*"}, {"split": "academicodec_hifi_24k_320d", "path": "data/academicodec_hifi_24k_320d-*"}, {"split": "audiodec_24k_320d", "path": "data/audiodec_24k_320d-*"}, {"split": "dac_16k", "path": "data/dac_16k-*"}, {"split": "dac_24k", "path": "data/dac_24k-*"}, {"split": "dac_44k", "path": "data/dac_44k-*"}, {"split": "encodec_24k", "path": "data/encodec_24k-*"}, {"split": "funcodec_en_libritts_16k_gr1nq32ds320", "path": "data/funcodec_en_libritts_16k_gr1nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_gr8nq32ds320", "path": "data/funcodec_en_libritts_16k_gr8nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_nq32ds320", "path": "data/funcodec_en_libritts_16k_nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_nq32ds640", "path": "data/funcodec_en_libritts_16k_nq32ds640-*"}, {"split": "funcodec_zh_en_16k_nq32ds320", "path": "data/funcodec_zh_en_16k_nq32ds320-*"}, {"split": "funcodec_zh_en_16k_nq32ds640", "path": "data/funcodec_zh_en_16k_nq32ds640-*"}, {"split": "speech_tokenizer_16k", "path": "data/speech_tokenizer_16k-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "unit", "sequence": {"sequence": "int64"}}], "splits": [{"name": "academicodec_hifi_16k_320d", "num_bytes": 319021293, "num_examples": 20111}, {"name": "academicodec_hifi_16k_320d_large_uni", "num_bytes": 319021293, "num_examples": 20111}, {"name": "academicodec_hifi_24k_320d", "num_bytes": 478780301, "num_examples": 20111}, {"name": "audiodec_24k_320d", "num_bytes": 1022122429, "num_examples": 20111}, {"name": "dac_16k", "num_bytes": 1952097389, "num_examples": 20111}, {"name": "dac_24k", "num_bytes": 5702822941, "num_examples": 20111}, {"name": "dac_44k", "num_bytes": 1652041265, "num_examples": 20111}, {"name": "encodec_24k", "num_bytes": 239902677, "num_examples": 20111}, {"name": "funcodec_en_libritts_16k_gr1nq32ds320", "num_bytes": 2554643997, "num_examples": 20111}, {"name": "funcodec_en_libritts_16k_gr8nq32ds320", "num_bytes": 2554643997, "num_examples": 20111}, {"name": "funcodec_en_libritts_16k_nq32ds320", "num_bytes": 2554568477, "num_examples": 20111}, {"name": "funcodec_en_libritts_16k_nq32ds640", "num_bytes": 1281207325, "num_examples": 20111}, {"name": "funcodec_zh_en_16k_nq32ds320", "num_bytes": 2554568477, "num_examples": 20111}, {"name": "funcodec_zh_en_16k_nq32ds640", "num_bytes": 2554568477, "num_examples": 20111}, {"name": "speech_tokenizer_16k", "num_bytes": 638928701, "num_examples": 20111}], "download_size": 4072930950, "dataset_size": 26378939039}} | 2024-01-12T12:03:26+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "audioset_extract_unit"
More Information needed | [
"# Dataset Card for \"audioset_extract_unit\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"audioset_extract_unit\"\n\nMore Information needed"
] |
09aa6d55a38d5bbbc784603458d9ea9c6c97513b | # Dataset Card for "kmdb_relation_extraction"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | boapps/kmdb_relation_extraction | [
"region:us"
] | 2024-01-12T12:03:30+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}, {"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "paragraph", "dtype": "string"}, {"name": "relations", "list": [{"name": "explanation", "dtype": "string"}, {"name": "object", "dtype": "string"}, {"name": "relation", "dtype": "string"}, {"name": "subject", "dtype": "string"}]}], "splits": [{"name": "validation", "num_bytes": 91165, "num_examples": 106}, {"name": "test", "num_bytes": 86275, "num_examples": 106}, {"name": "train", "num_bytes": 911376, "num_examples": 1049}], "download_size": 702488, "dataset_size": 1088816}} | 2024-01-12T12:04:12+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "kmdb_relation_extraction"
More Information needed | [
"# Dataset Card for \"kmdb_relation_extraction\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"kmdb_relation_extraction\"\n\nMore Information needed"
] |
09da5ee7d1a39d78b8a06bae2f47a6ad46f5966d | # Dataset Card for "endless-sky-master"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | mankra/endless-sky-master | [
"region:us"
] | 2024-01-12T12:04:23+00:00 | {"dataset_info": {"features": [{"name": "repo_id", "dtype": "string"}, {"name": "file_path", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 3597217, "num_examples": 389}], "download_size": 1391669, "dataset_size": 3597217}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-01-12T12:30:52+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "endless-sky-master"
More Information needed | [
"# Dataset Card for \"endless-sky-master\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"endless-sky-master\"\n\nMore Information needed"
] |
439f57b7659c67408a99d2bc7dae9690f1efcdde |
# Himanis Dataset
## Table of Contents
- [Himanis Dataset](#himanis-dataset)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
## Dataset Description
- **Homepage:** [Himanis](http://himanis.huma-num.fr/app//)
- **Paper:** [Paper](https://doi.org/10.4000/medievales.8198)
- **Point of Contact:** [TEKLIA](https://teklia.com)
## Dataset Summary
Himanis (Historical MAnuscript Indexing for user controlled Search) is a corpus of medieval documents.
### Languages
All the documents in the dataset are written in Latin and in French.
## Dataset Structure
### Data Instances
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=4300x128 at 0x1A800E8E190>,
'text': 'Philippus, Dei gratia Francorum et Navarre rex. Notum facimus universis, tam presentibus quam futuris, quod, cum supplicato nobis nuper,'
}
```
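An instance like the one above can be loaded with the `datasets` library. This is a minimal sketch, assuming the dataset is available on the Hugging Face Hub under the `Teklia/Himanis` identifier shown in this card:

```
from datasets import load_dataset

# Load the three splits (train / validation / test) of the dataset.
ds = load_dataset("Teklia/Himanis")

# Index the row first, then the column, so only this image is decoded.
sample = ds["train"][0]
print(sample["image"].size)  # (width, height) of the PIL image
print(sample["text"])        # line-level transcription
```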
### Data Fields
- `image`: A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0]["image"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the "image" column, i.e. dataset[0]["image"] should always be preferred over dataset["image"][0].
- `text`: the label transcription of the image. | Teklia/Himanis | [
"task_categories:image-to-text",
"language:la",
"language:fr",
"license:mit",
"region:us"
] | 2024-01-12T12:18:46+00:00 | {"language": ["la", "fr"], "license": "mit", "task_categories": ["image-to-text"], "pretty_name": "Himanis", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_examples": 18504}, {"name": "validation", "num_examples": 2367}, {"name": "test", "num_examples": 2240}], "dataset_size": 23111}} | 2024-01-26T14:20:18+00:00 | [] | [
"la",
"fr"
] | TAGS
#task_categories-image-to-text #language-Latin #language-French #license-mit #region-us
|
# Himanis Dataset
## Table of Contents
- Himanis Dataset
- Table of Contents
- Dataset Description
- Languages
- Dataset Structure
- Data Instances
- Data Fields
## Dataset Description
- Homepage: Himanis
- Paper: Paper
- Point of Contact: TEKLIA
## Dataset Summary
Himanis (Historical MAnuscript Indexing for user controlled Search) is a corpus of medieval documents.
### Languages
All the documents in the dataset are written in Latin and in French.
## Dataset Structure
### Data Instances
### Data Fields
- 'image': A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0]["image"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the "image" column, i.e. dataset[0]["image"] should always be preferred over dataset["image"][0].
- 'text': the label transcription of the image. | [
"# Himanis Dataset",
"## Table of Contents\n- Himanis Dataset\n - Table of Contents\n - Dataset Description\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields",
"## Dataset Description\n\n- Homepage: Himanis\n- Paper: Paper\n- Point of Contact: TEKLIA",
"## Dataset Summary \n\nHimanis (Historical MAnuscript Indexing for user controlled Search) is a corpus of medieval documents.",
"### Languages\n\nAll the documents in the dataset are written in Latin and in French.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n\n- 'image': A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0][\"image\"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the \"image\" column, i.e. dataset[0][\"image\"] should always be preferred over dataset[\"image\"][0].\n- 'text': the label transcription of the image."
] | [
"TAGS\n#task_categories-image-to-text #language-Latin #language-French #license-mit #region-us \n",
"# Himanis Dataset",
"## Table of Contents\n- Himanis Dataset\n - Table of Contents\n - Dataset Description\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields",
"## Dataset Description\n\n- Homepage: Himanis\n- Paper: Paper\n- Point of Contact: TEKLIA",
"## Dataset Summary \n\nHimanis (Historical MAnuscript Indexing for user controlled Search) is a corpus of medieval documents.",
"### Languages\n\nAll the documents in the dataset are written in Latin and in French.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n\n- 'image': A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0][\"image\"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the \"image\" column, i.e. dataset[0][\"image\"] should always be preferred over dataset[\"image\"][0].\n- 'text': the label transcription of the image."
] |
e50a0bf7836ee5f21c9a3b7a0e25351cccb4bb24 |
# HOME Alcar Dataset
## Table of Contents
- [Alcar Dataset](#alcar-dataset)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
## Dataset Description
- **Homepage:** [HOME](https://www.heritageresearch-hub.eu/project/home/)
- **Source:** [Arkindex](https://demo.arkindex.org/browse/46b9b1f4-baeb-4342-a501-e2f15472a276?top_level=true&folder=true)
- **Point of Contact:** [TEKLIA](https://teklia.com)
## Dataset Summary
The HOME-Alcar (Aligned and Annotated Cartularies) dataset is a medieval corpus. The 17 manuscripts in this corpus are cartularies, i.e. books copying charters and legal acts, produced between the 12th and 14th centuries.
### Languages
All the documents in the dataset are written in Latin.
## Dataset Structure
### Data Instances
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=4300x128 at 0x1A800E8E190>,
'text': 'quatre mille livres de tournoiz poiez, si com¬'
}
```
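The access pattern recommended below (row first, then column) can be illustrated with a short sketch, assuming the dataset is published on the Hugging Face Hub as `Teklia/Alcar`:

```
from datasets import load_dataset

ds = load_dataset("Teklia/Alcar", split="train")

# Preferred: query the row first, so only this image file is decoded.
image = ds[0]["image"]
print(image.mode, image.size)
print(ds[0]["text"])

# Avoid ds["image"][0]: it would decode every image of the split first.
```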
### Data Fields
- `image`: A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0]["image"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the "image" column, i.e. dataset[0]["image"] should always be preferred over dataset["image"][0].
- `text`: the label transcription of the image. | Teklia/Alcar | [
"task_categories:image-to-text",
"language:la",
"license:mit",
"region:us"
] | 2024-01-12T12:18:52+00:00 | {"language": ["la"], "license": "mit", "task_categories": ["image-to-text"], "pretty_name": "Alcar", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_examples": 59969}, {"name": "validation", "num_examples": 7905}, {"name": "test", "num_examples": 6932}], "dataset_size": 74806}} | 2024-01-26T14:19:36+00:00 | [] | [
"la"
] | TAGS
#task_categories-image-to-text #language-Latin #license-mit #region-us
|
# HOME Alcar Dataset
## Table of Contents
- Alcar Dataset
- Table of Contents
- Dataset Description
- Languages
- Dataset Structure
- Data Instances
- Data Fields
## Dataset Description
- Homepage: HOME
- Source: Arkindex
- Point of Contact: TEKLIA
## Dataset Summary
The HOME-Alcar (Aligned and Annotated Cartularies) dataset is a Medieval corpus. The 17 medieval manuscripts in this corpus are cartularies, i.e. books copying charters and legal acts, produced between the 12th and 14th centuries.
### Languages
All the documents in the dataset are written in Latin.
## Dataset Structure
### Data Instances
### Data Fields
- 'image': A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0]["image"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the "image" column, i.e. dataset[0]["image"] should always be preferred over dataset["image"][0].
- 'text': the label transcription of the image. | [
"# HOME Alcar Dataset",
"## Table of Contents\n- Alcar Dataset\n - Table of Contents\n - Dataset Description\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields",
"## Dataset Description\n\n- Homepage: HOME\n- Source: Arkindex\n- Point of Contact: TEKLIA",
"## Dataset Summary \n\nThe HOME-Alcar (Aligned and Annotated Cartularies) dataset is a Medieval corpus. The 17 medieval manuscripts in this corpus are cartularies, i.e. books copying charters and legal acts, produced between the 12th and 14th centuries.",
"### Languages\n\nAll the documents in the dataset are written in Latin.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n\n- 'image': A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0][\"image\"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the \"image\" column, i.e. dataset[0][\"image\"] should always be preferred over dataset[\"image\"][0].\n- 'text': the label transcription of the image."
] | [
"TAGS\n#task_categories-image-to-text #language-Latin #license-mit #region-us \n",
"# HOME Alcar Dataset",
"## Table of Contents\n- Alcar Dataset\n - Table of Contents\n - Dataset Description\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields",
"## Dataset Description\n\n- Homepage: HOME\n- Source: Arkindex\n- Point of Contact: TEKLIA",
"## Dataset Summary \n\nThe HOME-Alcar (Aligned and Annotated Cartularies) dataset is a Medieval corpus. The 17 medieval manuscripts in this corpus are cartularies, i.e. books copying charters and legal acts, produced between the 12th and 14th centuries.",
"### Languages\n\nAll the documents in the dataset are written in Latin.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n\n- 'image': A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0][\"image\"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the \"image\" column, i.e. dataset[0][\"image\"] should always be preferred over dataset[\"image\"][0].\n- 'text': the label transcription of the image."
] |
da24291d8fa14ada7a47c610e3ef7372d1c01b19 |
# NewsEye Dataset
## Table of Contents
- [NewsEye Dataset](#newseye-dataset)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
## Dataset Description
- **Homepage:** [NewsEye project](https://www.newseye.eu/)
- **Source:** [Zenodo](https://zenodo.org/records/3387369)
- **Point of Contact:** [TEKLIA](https://teklia.com)
## Dataset Summary
The dataset comprises Austrian newspaper pages from the 19th and early 20th centuries. The images were provided by the Austrian National Library.
### Languages
The documents are in Austrian German, printed in Fraktur type.
## Dataset Structure
### Data Instances
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=4300x128 at 0x1A800E8E190>,
'text': 'Mann; und als wir uns zum Angriff stark genug'
}
```
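For large newspaper collections it can be convenient to stream samples instead of downloading a full split up front. A sketch, assuming the dataset is available on the Hub as `Teklia/NewsEyeAustrian`:

```
from datasets import load_dataset

# Stream the validation split instead of downloading it entirely.
ds = load_dataset("Teklia/NewsEyeAustrian", split="validation", streaming=True)

for sample in ds.take(3):
    print(sample["image"].size, "->", sample["text"])
```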
### Data Fields
- `image`: A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0]["image"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the "image" column, i.e. dataset[0]["image"] should always be preferred over dataset["image"][0].
- `text`: the label transcription of the image. | Teklia/NewsEyeAustrian | [
"task_categories:image-to-text",
"language:de",
"license:mit",
"region:us"
] | 2024-01-12T12:18:56+00:00 | {"language": ["de"], "license": "mit", "task_categories": ["image-to-text"], "pretty_name": "NewsEye Austrian", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_examples": 51588}, {"name": "validation", "num_examples": 4379}], "dataset_size": 55967}} | 2024-01-26T14:24:18+00:00 | [] | [
"de"
] | TAGS
#task_categories-image-to-text #language-German #license-mit #region-us
|
# NewsEye Dataset
## Table of Contents
- NewsEye Dataset
- Table of Contents
- Dataset Description
- Languages
- Dataset Structure
- Data Instances
- Data Fields
## Dataset Description
- Homepage: NewsEye project
- Source: Zenodo
- Point of Contact: TEKLIA
## Dataset Summary
The dataset comprises Austrian newspaper pages from 19th and early 20th century. The images were provided by the Austrian National Library.
### Languages
The documents are in Austrian German with the Fraktur font.
## Dataset Structure
### Data Instances
### Data Fields
- 'image': A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0]["image"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the "image" column, i.e. dataset[0]["image"] should always be preferred over dataset["image"][0].
- 'text': the label transcription of the image. | [
"# NewsEye Dataset",
"## Table of Contents\n- NewsEye Dataset\n - Table of Contents\n - Dataset Description\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields",
"## Dataset Description\n\n- Homepage: NewsEye project\n- Source: Zenodo\n- Point of Contact: TEKLIA",
"## Dataset Summary \n\nThe dataset comprises Austrian newspaper pages from 19th and early 20th century. The images were provided by the Austrian National Library.",
"### Languages\n\nThe documents are in Austrian German with the Fraktur font.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n\n- 'image': A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0][\"image\"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the \"image\" column, i.e. dataset[0][\"image\"] should always be preferred over dataset[\"image\"][0].\n- 'text': the label transcription of the image."
] | [
"TAGS\n#task_categories-image-to-text #language-German #license-mit #region-us \n",
"# NewsEye Dataset",
"## Table of Contents\n- NewsEye Dataset\n - Table of Contents\n - Dataset Description\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields",
"## Dataset Description\n\n- Homepage: NewsEye project\n- Source: Zenodo\n- Point of Contact: TEKLIA",
"## Dataset Summary \n\nThe dataset comprises Austrian newspaper pages from 19th and early 20th century. The images were provided by the Austrian National Library.",
"### Languages\n\nThe documents are in Austrian German with the Fraktur font.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n\n- 'image': A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0][\"image\"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the \"image\" column, i.e. dataset[0][\"image\"] should always be preferred over dataset[\"image\"][0].\n- 'text': the label transcription of the image."
] |
c36e868fe40e3ea7accf90ff3a00d8e8304acc65 |
# NorHand v1 Dataset
## Table of Contents
- [NorHand v1 Dataset](#norhand-v1-dataset)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
## Dataset Description
- **Homepage:** [Hugin-Munin project](https://hugin-munin-project.github.io/)
- **Source:** [Zenodo](https://zenodo.org/records/6542056)
- **Paper:** [A Comprehensive Comparison of Open-Source Libraries for Handwritten Text Recognition in Norwegian](https://link.springer.com/chapter/10.1007/978-3-031-06555-2_27)
- **Point of Contact:** [TEKLIA](https://teklia.com)
## Dataset Summary
The NorHand v1 dataset comprises Norwegian letter and diary line images and text from the 19th and early 20th centuries.
### Languages
All the documents in the dataset are written in Norwegian Bokmål.
## Dataset Structure
### Data Instances
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=4300x128 at 0x1A800E8E190>,
'text': 'fredag 1923'
}
```
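Before training a recognizer on these lines, a common first step is to build the character vocabulary from the training transcriptions. A minimal sketch, assuming the Hub identifier `Teklia/NorHand_v1`:

```
from datasets import load_dataset

ds = load_dataset("Teklia/NorHand_v1")

# Character vocabulary of the training transcriptions.
charset = set()
for text in ds["train"]["text"]:
    charset.update(text)
print(f"{len(charset)} characters in the training vocabulary")
```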
### Data Fields
- `image`: A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0]["image"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the "image" column, i.e. dataset[0]["image"] should always be preferred over dataset["image"][0].
- `text`: the label transcription of the image. | Teklia/NorHand_v1 | [
"task_categories:image-to-text",
"language:nb",
"license:mit",
"region:us"
] | 2024-01-12T12:19:11+00:00 | {"language": ["nb"], "license": "mit", "task_categories": ["image-to-text"], "pretty_name": "NorHand v1", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_examples": 19653}, {"name": "validation", "num_examples": 2286}, {"name": "test", "num_examples": 1793}], "dataset_size": 23732}} | 2024-01-26T14:35:03+00:00 | [] | [
"nb"
] | TAGS
#task_categories-image-to-text #language-Norwegian Bokmål #license-mit #region-us
|
# NorHand v1 Dataset
## Table of Contents
- NorHand v1 Dataset
- Table of Contents
- Dataset Description
- Languages
- Dataset Structure
- Data Instances
- Data Fields
## Dataset Description
- Homepage: Hugin-Munin project
- Source: Zenodo
- Paper: A Comprehensive Comparison of Open-Source Libraries for Handwritten Text Recognition in Norwegian
- Point of Contact: TEKLIA
## Dataset Summary
The NorHand v1 dataset comprises Norwegian letter and diary line images and text from 19th and early 20th century.
### Languages
All the documents in the dataset are written in Norwegian Bokmål.
## Dataset Structure
### Data Instances
### Data Fields
- 'image': A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0]["image"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the "image" column, i.e. dataset[0]["image"] should always be preferred over dataset["image"][0].
- 'text': the label transcription of the image. | [
"# NorHand v1 Dataset",
"## Table of Contents\n- NorHand v1 Dataset\n - Table of Contents\n - Dataset Description\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields",
"## Dataset Description\n\n- Homepage: Hugin-Munin project\n- Source: Zenodo\n- Paper: A Comprehensive Comparison of Open-Source Libraries for Handwritten Text Recognition in Norwegian\n- Point of Contact: TEKLIA",
"## Dataset Summary \n\nThe NorHand v1 dataset comprises Norwegian letter and diary line images and text from 19th and early 20th century.",
"### Languages\n\nAll the documents in the dataset are written in Norwegian Bokmål.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n\n- 'image': A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0][\"image\"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the \"image\" column, i.e. dataset[0][\"image\"] should always be preferred over dataset[\"image\"][0].\n- 'text': the label transcription of the image."
] | [
"TAGS\n#task_categories-image-to-text #language-Norwegian Bokmål #license-mit #region-us \n",
"# NorHand v1 Dataset",
"## Table of Contents\n- NorHand v1 Dataset\n - Table of Contents\n - Dataset Description\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields",
"## Dataset Description\n\n- Homepage: Hugin-Munin project\n- Source: Zenodo\n- Paper: A Comprehensive Comparison of Open-Source Libraries for Handwritten Text Recognition in Norwegian\n- Point of Contact: TEKLIA",
"## Dataset Summary \n\nThe NorHand v1 dataset comprises Norwegian letter and diary line images and text from 19th and early 20th century.",
"### Languages\n\nAll the documents in the dataset are written in Norwegian Bokmål.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n\n- 'image': A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0][\"image\"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the \"image\" column, i.e. dataset[0][\"image\"] should always be preferred over dataset[\"image\"][0].\n- 'text': the label transcription of the image."
] |
e20285c6aba19c412f673b8cd25e682f9df58a4d |
# NorHand v2 Dataset
## Table of Contents
- [NorHand v2 Dataset](#norhand-v2-dataset)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
## Dataset Description
- **Homepage:** [Hugin-Munin project](https://hugin-munin-project.github.io/)
- **Source:** [Zenodo](https://zenodo.org/records/10555698)
- **Point of Contact:** [TEKLIA](https://teklia.com)
## Dataset Summary
The NorHand v2 dataset comprises Norwegian letter and diary line images and text from the 19th and early 20th centuries.
### Languages
All the documents in the dataset are written in Norwegian Bokmål.
## Dataset Structure
### Data Instances
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=4300x128 at 0x1A800E8E190>,
'text': 'og Hjertelighed'
}
```
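Line-based HTR models are usually trained on images resized to a fixed height, so inspecting the height distribution of the line images is a useful sanity check. A sketch, assuming the Hub identifier `Teklia/NorHand_v2`:

```
from datasets import load_dataset

ds = load_dataset("Teklia/NorHand_v2", split="test")

# Height distribution of the line images in the test split.
heights = [sample["image"].height for sample in ds]
print(min(heights), sum(heights) / len(heights), max(heights))
```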
### Data Fields
- `image`: A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0]["image"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the "image" column, i.e. dataset[0]["image"] should always be preferred over dataset["image"][0].
- `text`: the label transcription of the image. | Teklia/NorHand_v2 | [
"task_categories:image-to-text",
"language:nb",
"license:mit",
"region:us"
] | 2024-01-12T12:19:20+00:00 | {"language": ["nb"], "license": "mit", "task_categories": ["image-to-text"], "pretty_name": "NorHand v2", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_examples": 145060}, {"name": "validation", "num_examples": 14979}, {"name": "test", "num_examples": 1792}], "dataset_size": 161831}} | 2024-02-07T17:09:37+00:00 | [] | [
"nb"
] | TAGS
#task_categories-image-to-text #language-Norwegian Bokmål #license-mit #region-us
|
# NorHand v2 Dataset
## Table of Contents
- NorHand v2 Dataset
- Table of Contents
- Dataset Description
- Languages
- Dataset Structure
- Data Instances
- Data Fields
## Dataset Description
- Homepage: Hugin-Munin project
- Source: Zenodo
- Point of Contact: TEKLIA
## Dataset Summary
The NorHand v2 dataset comprises Norwegian letter and diary line images and text from 19th and early 20th century.
### Languages
All the documents in the dataset are written in Norwegian Bokmål.
## Dataset Structure
### Data Instances
### Data Fields
- 'image': A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0]["image"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the "image" column, i.e. dataset[0]["image"] should always be preferred over dataset["image"][0].
- 'text': the label transcription of the image. | [
"# NorHand v2 Dataset",
"## Table of Contents\n- NorHand v2 Dataset\n - Table of Contents\n - Dataset Description\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields",
"## Dataset Description\n\n- Homepage: Hugin-Munin project\n- Source: Zenodo\n- Point of Contact: TEKLIA",
"## Dataset Summary\n\nThe NorHand v2 dataset comprises Norwegian letter and diary line images and text from 19th and early 20th century.",
"### Languages\n\nAll the documents in the dataset are written in Norwegian Bokmål.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n\n- 'image': A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0][\"image\"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the \"image\" column, i.e. dataset[0][\"image\"] should always be preferred over dataset[\"image\"][0].\n- 'text': the label transcription of the image."
] | [
"TAGS\n#task_categories-image-to-text #language-Norwegian Bokmål #license-mit #region-us \n",
"# NorHand v2 Dataset",
"## Table of Contents\n- NorHand v2 Dataset\n - Table of Contents\n - Dataset Description\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields",
"## Dataset Description\n\n- Homepage: Hugin-Munin project\n- Source: Zenodo\n- Point of Contact: TEKLIA",
"## Dataset Summary\n\nThe NorHand v2 dataset comprises Norwegian letter and diary line images and text from 19th and early 20th century.",
"### Languages\n\nAll the documents in the dataset are written in Norwegian Bokmål.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n\n- 'image': A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0][\"image\"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the \"image\" column, i.e. dataset[0][\"image\"] should always be preferred over dataset[\"image\"][0].\n- 'text': the label transcription of the image."
] |
07d6ad56b46529a291d03b4a695288eb3143cbb2 |
# NorHand v3 Dataset
## Table of Contents
- [NorHand v3 Dataset](#norhand-v3-dataset)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
## Dataset Description
- **Homepage:** [Hugin-Munin project](https://hugin-munin-project.github.io/)
- **Source:** [Zenodo](https://zenodo.org/records/10255840)
- **Point of Contact:** [TEKLIA](https://teklia.com)
## Dataset Summary
The NorHand v3 dataset comprises Norwegian letter and diary line images and text from the 19th and early 20th centuries.
### Languages
All the documents in the dataset are written in Norwegian Bokmål.
## Dataset Structure
### Data Instances
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=4300x128 at 0x1A800E8E190>,
'text': 'Til Bestyrelsen af'
}
```
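To inspect samples outside of Python, a few line crops and their transcriptions can be exported to disk. A sketch, assuming the Hub identifier `Teklia/NorHand_v3`; the output folder name is arbitrary:

```
from pathlib import Path

from datasets import load_dataset

ds = load_dataset("Teklia/NorHand_v3", split="validation")

out = Path("norhand_v3_lines")  # arbitrary output folder
out.mkdir(exist_ok=True)

# Export a handful of line images together with their transcriptions.
for idx in range(5):
    sample = ds[idx]
    sample["image"].save(out / f"line_{idx}.jpg")
    (out / f"line_{idx}.txt").write_text(sample["text"], encoding="utf-8")
```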
### Data Fields
- `image`: A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0]["image"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the "image" column, i.e. dataset[0]["image"] should always be preferred over dataset["image"][0].
- `text`: the label transcription of the image. | Teklia/NorHand_v3 | [
"task_categories:image-to-text",
"language:nb",
"license:mit",
"region:us"
] | 2024-01-12T12:19:26+00:00 | {"language": ["nb"], "license": "mit", "task_categories": ["image-to-text"], "pretty_name": "NorHand v3", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_examples": 222381}, {"name": "validation", "num_examples": 22679}, {"name": "test", "num_examples": 1562}], "dataset_size": 246622}} | 2024-02-13T22:23:30+00:00 | [] | [
"nb"
] | TAGS
#task_categories-image-to-text #language-Norwegian Bokmål #license-mit #region-us
|
# NorHand v3 Dataset
## Table of Contents
- NorHand v3 Dataset
- Table of Contents
- Dataset Description
- Languages
- Dataset Structure
- Data Instances
- Data Fields
## Dataset Description
- Homepage: Hugin-Munin project
- Source: Zenodo
- Point of Contact: TEKLIA
## Dataset Summary
The NorHand v3 dataset comprises Norwegian letter and diary line images and text from 19th and early 20th century.
### Languages
All the documents in the dataset are written in Norwegian Bokmål.
## Dataset Structure
### Data Instances
### Data Fields
- 'image': A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0]["image"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the "image" column, i.e. dataset[0]["image"] should always be preferred over dataset["image"][0].
- 'text': the label transcription of the image. | [
"# NorHand v3 Dataset",
"## Table of Contents\n- NorHand v3 Dataset\n - Table of Contents\n - Dataset Description\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields",
"## Dataset Description\n\n- Homepage: Hugin-Munin project\n- Source: Zenodo\n- Point of Contact: TEKLIA",
"## Dataset Summary \n\nThe NorHand v3 dataset comprises Norwegian letter and diary line images and text from 19th and early 20th century.",
"### Languages\n\nAll the documents in the dataset are written in Norwegian Bokmål.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n\n- 'image': A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0][\"image\"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the \"image\" column, i.e. dataset[0][\"image\"] should always be preferred over dataset[\"image\"][0].\n- 'text': the label transcription of the image."
] | [
"TAGS\n#task_categories-image-to-text #language-Norwegian Bokmål #license-mit #region-us \n",
"# NorHand v3 Dataset",
"## Table of Contents\n- NorHand v3 Dataset\n - Table of Contents\n - Dataset Description\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields",
"## Dataset Description\n\n- Homepage: Hugin-Munin project\n- Source: Zenodo\n- Point of Contact: TEKLIA",
"## Dataset Summary \n\nThe NorHand v3 dataset comprises Norwegian letter and diary line images and text from 19th and early 20th century.",
"### Languages\n\nAll the documents in the dataset are written in Norwegian Bokmål.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n\n- 'image': A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0][\"image\"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the \"image\" column, i.e. dataset[0][\"image\"] should always be preferred over dataset[\"image\"][0].\n- 'text': the label transcription of the image."
] |
109c2e73db03b9bfc6230bd7f8050289d425b40e |
# Belfort Dataset
## Table of Contents
- [Belfort Dataset](#belfort-dataset)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
## Dataset Description
- **Homepage:** [Belfort city archives](https://teklia.com/blog/202211-belfort-en/)
- **Source:** [Zenodo](https://zenodo.org/records/8041668)
- **Paper:** [Handwritten Text Recognition from Crowdsourced Annotations](https://doi.org/10.1145/3604951.3605517)
- **Point of Contact:** [TEKLIA](https://teklia.com)
## Dataset Summary
The Belfort dataset includes minutes of the municipal council of the French city of Belfort.
Text lines were extracted using an automatic model and may contain segmentation errors. The transcriptions were obtained through a crowdsourcing campaign using the [Callico](https://callico.teklia.com/projects/ce9b42d4-23a8-4381-b5bb-459bedc59165/details/) web platform.
### Languages
All the documents in the dataset are written in French.
## Dataset Structure
### Data Instances
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=4300x128 at 0x1A800E8E190>,
'text': 'les intérêts des 30000 francs jusqu\'au moment de la'
}
```
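For training, the split can be wrapped in a PyTorch `DataLoader`; the collate function below is only a placeholder, as a real pipeline would resize and tensorize the line images. A sketch, assuming the Hub identifier `Teklia/Belfort`:

```
from datasets import load_dataset
from torch.utils.data import DataLoader

ds = load_dataset("Teklia/Belfort", split="train")

def collate(batch):
    # Placeholder collate: a real pipeline would resize and tensorize images.
    return [b["image"] for b in batch], [b["text"] for b in batch]

loader = DataLoader(ds, batch_size=8, shuffle=True, collate_fn=collate)
images, texts = next(iter(loader))
print(len(images), texts[0])
```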
### Data Fields
- `image`: A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0]["image"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the "image" column, i.e. dataset[0]["image"] should always be preferred over dataset["image"][0].
- `text`: the label transcription of the image. | Teklia/Belfort | [
"task_categories:image-to-text",
"language:fr",
"license:mit",
"region:us"
] | 2024-01-12T12:19:32+00:00 | {"language": ["fr"], "license": "mit", "task_categories": ["image-to-text"], "pretty_name": "Belfort", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_examples": 97883}, {"name": "validation", "num_examples": 4519}, {"name": "test", "num_examples": 2829}], "dataset_size": 105231}} | 2024-02-01T16:58:58+00:00 | [] | [
"fr"
] | TAGS
#task_categories-image-to-text #language-French #license-mit #region-us
|
# Belfort Dataset
## Table of Contents
- Belfort Dataset
- Table of Contents
- Dataset Description
- Languages
- Dataset Structure
- Data Instances
- Data Fields
## Dataset Description
- Homepage: Belfort city archives
- Source: Zenodo
- Paper: Handwritten Text Recognition from Crowdsourced Annotations
- Point of Contact: TEKLIA
## Dataset Summary
The Belfort dataset includes minutes of the municipal council of the French city of Belfort.
Text lines were extracted using an automatic model and may contain segmentation errors. The transcriptions were obtained through a crowdsourcing campaign using the Callico web plateform.
### Languages
All the documents in the dataset are written in French.
## Dataset Structure
### Data Instances
### Data Fields
- 'image': A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0]["image"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the "image" column, i.e. dataset[0]["image"] should always be preferred over dataset["image"][0].
- 'text': the label transcription of the image. | [
"# Belfort Dataset",
"## Table of Contents\n- Belfort Dataset\n - Table of Contents\n - Dataset Description\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields",
"## Dataset Description\n\n- Homepage: Belfort city archives\n- Source: Zenodo\n- Paper: Handwritten Text Recognition from Crowdsourced Annotations\n- Point of Contact: TEKLIA",
"## Dataset Summary \n\nThe Belfort dataset includes minutes of the municipal council of the French city of Belfort. \nText lines were extracted using an automatic model and may contain segmentation errors. The transcriptions were obtained through a crowdsourcing campaign using the Callico web plateform.",
"### Languages\n\nAll the documents in the dataset are written in French.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n\n- 'image': A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0][\"image\"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the \"image\" column, i.e. dataset[0][\"image\"] should always be preferred over dataset[\"image\"][0].\n- 'text': the label transcription of the image."
] | [
"TAGS\n#task_categories-image-to-text #language-French #license-mit #region-us \n",
"# Belfort Dataset",
"## Table of Contents\n- Belfort Dataset\n - Table of Contents\n - Dataset Description\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields",
"## Dataset Description\n\n- Homepage: Belfort city archives\n- Source: Zenodo\n- Paper: Handwritten Text Recognition from Crowdsourced Annotations\n- Point of Contact: TEKLIA",
"## Dataset Summary \n\nThe Belfort dataset includes minutes of the municipal council of the French city of Belfort. \nText lines were extracted using an automatic model and may contain segmentation errors. The transcriptions were obtained through a crowdsourcing campaign using the Callico web plateform.",
"### Languages\n\nAll the documents in the dataset are written in French.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n\n- 'image': A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0][\"image\"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the \"image\" column, i.e. dataset[0][\"image\"] should always be preferred over dataset[\"image\"][0].\n- 'text': the label transcription of the image."
] |
d52753075c6a67d652454f55a4314cc3d33b977f |
# IAM Dataset
## Table of Contents
- [IAM Dataset](#iam-dataset)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
## Dataset Description
- **Homepage:** [IAM Handwriting Database](https://fki.tic.heia-fr.ch/databases/iam-handwriting-database)
- **Paper:** [The IAM-database: an English sentence database for offline handwriting recognition](https://doi.org/10.1007/s100320200071)
- **Point of Contact:** [TEKLIA](https://teklia.com)
## Dataset Summary
The IAM Handwriting Database contains forms of handwritten English text which can be used to train and test handwritten text recognizers and to perform writer identification and verification experiments.
### Languages
All the documents in the dataset are written in English.
## Dataset Structure
### Data Instances
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2467x128 at 0x1A800E8E190>,
'text': 'put down a resolution on the subject'
}
```
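Transcriptions like the one above are typically evaluated with the character error rate (CER). The helper below is an illustration only, not part of the dataset, and it assumes the Hub identifier `Teklia/IAM`:

```
from datasets import load_dataset

def cer(reference: str, hypothesis: str) -> float:
    # Character error rate via a plain Levenshtein distance (illustration only).
    row = list(range(len(hypothesis) + 1))
    for i, r in enumerate(reference, 1):
        prev, row[0] = row[0], i
        for j, h in enumerate(hypothesis, 1):
            prev, row[j] = row[j], min(row[j] + 1, row[j - 1] + 1, prev + (r != h))
    return row[-1] / max(len(reference), 1)

ds = load_dataset("Teklia/IAM", split="test")
reference = ds[0]["text"]
print(cer(reference, reference.lower()))  # toy hypothesis: lowercased reference
```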
### Data Fields
- `image`: A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0]["image"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the "image" column, i.e. dataset[0]["image"] should always be preferred over dataset["image"][0].
- `text`: the label transcription of the image.
| Teklia/IAM | [
"task_categories:image-to-text",
"language:en",
"license:mit",
"region:us"
] | 2024-01-12T12:20:17+00:00 | {"language": ["en"], "license": "mit", "task_categories": ["image-to-text"], "pretty_name": "IAM", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_examples": 6482}, {"name": "validation", "num_examples": 976}, {"name": "test", "num_examples": 2915}], "dataset_size": 10373}} | 2024-01-29T08:34:52+00:00 | [] | [
"en"
] | TAGS
#task_categories-image-to-text #language-English #license-mit #region-us
|
# IAM Dataset
## Table of Contents
- IAM Dataset
- Table of Contents
- Dataset Description
- Dataset Summary
- Languages
- Dataset Structure
- Data Instances
- Data Fields
## Dataset Description
- Homepage: IAM Handwriting Database
- Paper: The IAM-database: an English sentence database for offline handwriting recognition
- Point of Contact: TEKLIA
## Dataset Summary
The IAM Handwriting Database contains forms of handwritten English text which can be used to train and test handwritten text recognizers and to perform writer identification and verification experiments.
### Languages
All the documents in the dataset are written in English.
## Dataset Structure
### Data Instances
### Data Fields
- 'image': A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0]["image"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the "image" column, i.e. dataset[0]["image"] should always be preferred over dataset["image"][0].
- 'text': the label transcription of the image.
| [
"# IAM Dataset",
"## Table of Contents\n- IAM Dataset\n - Table of Contents\n - Dataset Description\n - Dataset Summary\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields",
"## Dataset Description\n\n- Homepage: IAM Handwriting Database\n- Paper: The IAM-database: an English sentence database for offline handwriting recognition\n- Point of Contact: TEKLIA",
"## Dataset Summary\n\nThe IAM Handwriting Database contains forms of handwritten English text which can be used to train and test handwritten text recognizers and to perform writer identification and verification experiments.",
"### Languages\n\nAll the documents in the dataset are written in English.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n\n- 'image': A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0][\"image\"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the \"image\" column, i.e. dataset[0][\"image\"] should always be preferred over dataset[\"image\"][0].\n- 'text': the label transcription of the image."
] | [
"TAGS\n#task_categories-image-to-text #language-English #license-mit #region-us \n",
"# IAM Dataset",
"## Table of Contents\n- IAM Dataset\n - Table of Contents\n - Dataset Description\n - Dataset Summary\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields",
"## Dataset Description\n\n- Homepage: IAM Handwriting Database\n- Paper: The IAM-database: an English sentence database for offline handwriting recognition\n- Point of Contact: TEKLIA",
"## Dataset Summary\n\nThe IAM Handwriting Database contains forms of handwritten English text which can be used to train and test handwritten text recognizers and to perform writer identification and verification experiments.",
"### Languages\n\nAll the documents in the dataset are written in English.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n\n- 'image': A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0][\"image\"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the \"image\" column, i.e. dataset[0][\"image\"] should always be preferred over dataset[\"image\"][0].\n- 'text': the label transcription of the image."
] |
eb5aac8bae89593f627bda26b23090ad6ddc35d1 |
# CASIA Dataset
## Table of Contents
- [CASIA Dataset](#casia-dataset)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
## Dataset Description
- **Homepage:** [CASIA](http://www.nlpr.ia.ac.cn/databases/handwriting/Download.html)
- **Paper:** [Online and offline handwritten Chinese character recognition: Benchmarking on new databases](https://www.sciencedirect.com/science/article/abs/pii/S0031320312002919)
- **Point of Contact:** [TEKLIA](https://teklia.com)
## Dataset Summary
The offline Chinese handwriting database (CASIA-HWDB) was built by the National Laboratory of Pattern Recognition (NLPR), Institute of Automation of the Chinese Academy of Sciences (CASIA).
The handwritten samples were produced by 1,020 writers using an Anoto pen on paper, so both online and offline data were obtained.
### Languages
All the documents in the dataset are written in Chinese.
## Dataset Structure
### Data Instances
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1244x128 at 0x1A800E8E190>,
'text': '2007年高校招生录取工作即将陆续展开,教育部有关负责人'
}
```
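Transcription lengths can be inspected without decoding any image by reading only the `text` column. A sketch, assuming the Hub identifier `Teklia/CASIA`:

```
from datasets import load_dataset

ds = load_dataset("Teklia/CASIA", split="validation")

# Reading only the "text" column avoids decoding any image.
lengths = [len(t) for t in ds["text"]]
print(max(lengths), "characters in the longest transcription")
```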
### Data Fields
- `image`: A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0]["image"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the "image" column, i.e. dataset[0]["image"] should always be preferred over dataset["image"][0].
- `text`: the label transcription of the image. | Teklia/CASIA | [
"task_categories:image-to-text",
"language:zh",
"license:mit",
"region:us"
] | 2024-01-12T12:20:24+00:00 | {"language": ["zh"], "license": "mit", "task_categories": ["image-to-text"], "pretty_name": "CASIA", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_examples": 33401}, {"name": "validation", "num_examples": 8318}, {"name": "test", "num_examples": 10441}], "dataset_size": 52160}} | 2024-01-26T17:10:28+00:00 | [] | [
"zh"
] | TAGS
#task_categories-image-to-text #language-Chinese #license-mit #region-us
|
# CASIA Dataset
## Table of Contents
- CASIA Dataset
- Table of Contents
- Dataset Description
- Dataset Summary
- Languages
- Dataset Structure
- Data Instances
- Data Fields
## Dataset Description
- Homepage: CASIA
- Paper: Online and offline handwritten Chinese character recognition: Benchmarking on new databases
- Point of Contact: TEKLIA
## Dataset Summary
The offline Chinese handwriting database (CASIA-HWDB) was built by the National Laboratory of Pattern Recognition (NLPR), Institute of Automation of Chinese Academy of Sciences (CASIA).
The handwritten samples were produced by 1,020 writers using Anoto pen on papers, such that both online and offline data were obtained.
### Languages
All the documents in the dataset are written in Chinese.
## Dataset Structure
### Data Instances
### Data Fields
- 'image': A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0]["image"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the "image" column, i.e. dataset[0]["image"] should always be preferred over dataset["image"][0].
- 'text': the label transcription of the image. | [
"# CASIA Dataset",
"## Table of Contents\n- CASIA Dataset\n - Table of Contents\n - Dataset Description\n - Dataset Summary\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields",
"## Dataset Description\n\n- Homepage: CASIA\n- Paper: Online and offline handwritten Chinese character recognition: Benchmarking on new databases\n- Point of Contact: TEKLIA",
"## Dataset Summary\n\nThe offline Chinese handwriting database (CASIA-HWDB) was built by the National Laboratory of Pattern Recognition (NLPR), Institute of Automation of Chinese Academy of Sciences (CASIA). \nThe handwritten samples were produced by 1,020 writers using Anoto pen on papers, such that both online and offline data were obtained.",
"### Languages\n\nAll the documents in the dataset are written in Chinese.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n\n- 'image': A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0][\"image\"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the \"image\" column, i.e. dataset[0][\"image\"] should always be preferred over dataset[\"image\"][0].\n- 'text': the label transcription of the image."
] | [
"TAGS\n#task_categories-image-to-text #language-Chinese #license-mit #region-us \n",
"# CASIA Dataset",
"## Table of Contents\n- CASIA Dataset\n - Table of Contents\n - Dataset Description\n - Dataset Summary\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields",
"## Dataset Description\n\n- Homepage: CASIA\n- Paper: Online and offline handwritten Chinese character recognition: Benchmarking on new databases\n- Point of Contact: TEKLIA",
"## Dataset Summary\n\nThe offline Chinese handwriting database (CASIA-HWDB) was built by the National Laboratory of Pattern Recognition (NLPR), Institute of Automation of Chinese Academy of Sciences (CASIA). \nThe handwritten samples were produced by 1,020 writers using Anoto pen on papers, such that both online and offline data were obtained.",
"### Languages\n\nAll the documents in the dataset are written in Chinese.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n\n- 'image': A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0][\"image\"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the \"image\" column, i.e. dataset[0][\"image\"] should always be preferred over dataset[\"image\"][0].\n- 'text': the label transcription of the image."
] |
811b5e81d60f5c37f299e13037e99293f58fdd79 |
# POPP Dataset
## Table of Contents
- [POPP Dataset](#popp-dataset)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
## Dataset Description
- **Homepage:** [POPP](https://popp.hypotheses.org/)
- **Source:** [GitHub](https://github.com/Shulk97/POPP-datasets/tree/master/Belleville)
- **Paper:** [Recognition and Information Extraction in Historical Handwritten Tables: Toward Understanding Early 20th Century Paris Census](https://link.springer.com/chapter/10.1007/978-3-031-06555-2_10)
- **Point of Contact:** [TEKLIA](https://teklia.com)
## Dataset Summary
The POPP dataset includes French civil census records from Paris from the early 20th century.
### Languages
All the documents in the dataset are written in French.
## Dataset Structure
### Data Instances
```
{
'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=4300x128 at 0x1A800E8E190>,
'text': 'Joly Ernest 88 Indre M par Employé Roblot!18377'
}
```
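For reference, here is a minimal loading sketch with the Hugging Face `datasets` library; the split names follow the dataset metadata (train/validation/test) and the field names are those listed below.

```python
from datasets import load_dataset

# Load the training split of Teklia/POPP
ds = load_dataset("Teklia/POPP", split="train")

sample = ds[0]
line_image = sample["image"]    # decoded PIL image of the handwritten line
transcription = sample["text"]  # its transcription, as in the instance above
```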
### Data Fields
- `image`: A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0]["image"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the "image" column, i.e. dataset[0]["image"] should always be preferred over dataset["image"][0].
- `text`: the label transcription of the image. | Teklia/POPP | [
"task_categories:image-to-text",
"language:fr",
"license:mit",
"region:us"
] | 2024-01-12T12:20:30+00:00 | {"language": ["fr"], "license": "mit", "task_categories": ["image-to-text"], "pretty_name": "POPP", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_examples": 3834}, {"name": "validation", "num_examples": 479}, {"name": "test", "num_examples": 478}], "dataset_size": 4791}} | 2024-01-26T14:21:51+00:00 | [] | [
"fr"
] | TAGS
#task_categories-image-to-text #language-French #license-mit #region-us
|
# POPP Dataset
## Table of Contents
- POPP Dataset
- Table of Contents
- Dataset Description
- Languages
- Dataset Structure
- Data Instances
- Data Fields
## Dataset Description
- Homepage: POPP
- Source: GitHub
- Paper: Recognition and Information Extraction in Historical Handwritten Tables: Toward Understanding Early 20th Century Paris Census
- Point of Contact: TEKLIA
## Dataset Summary
The POPP dataset includes French civil census records from Paris from the early 20th century.
### Languages
All the documents in the dataset are written in French.
## Dataset Structure
### Data Instances
### Data Fields
- 'image': A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0]["image"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the "image" column, i.e. dataset[0]["image"] should always be preferred over dataset["image"][0].
- 'text': the label transcription of the image. | [
"# POPP Dataset",
"## Table of Contents\n- POPP Dataset\n - Table of Contents\n - Dataset Description\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields",
"## Dataset Description\n\n- Homepage: POPP\n- Source: GitHub\n- Paper: Recognition and Information Extraction in Historical Handwritten Tables: Toward Understanding Early 20th Century Paris Census\n- Point of Contact: TEKLIA",
"## Dataset Summary\n\nThe POPP dataset includes French civil census from Paris from the early 20th century.",
"### Languages\n\nAll the documents in the dataset are written in French.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n\n- 'image': A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0][\"image\"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the \"image\" column, i.e. dataset[0][\"image\"] should always be preferred over dataset[\"image\"][0].\n- 'text': the label transcription of the image."
] | [
"TAGS\n#task_categories-image-to-text #language-French #license-mit #region-us \n",
"# POPP Dataset",
"## Table of Contents\n- POPP Dataset\n - Table of Contents\n - Dataset Description\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields",
"## Dataset Description\n\n- Homepage: POPP\n- Source: GitHub\n- Paper: Recognition and Information Extraction in Historical Handwritten Tables: Toward Understanding Early 20th Century Paris Census\n- Point of Contact: TEKLIA",
"## Dataset Summary\n\nThe POPP dataset includes French civil census from Paris from the early 20th century.",
"### Languages\n\nAll the documents in the dataset are written in French.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n\n- 'image': A PIL.Image.Image object containing the image. Note that when accessing the image column: dataset[0][\"image\"] the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the \"image\" column, i.e. dataset[0][\"image\"] should always be preferred over dataset[\"image\"][0].\n- 'text': the label transcription of the image."
] |
91d5ed75ac66b68e84e91ff0e1d8c036f3aed3e8 |
# Medmcqa mixtral openchat 0.1
This dataset is a small subset of Medmcqa in which we asked the mixtral / openchat3.5 models to answer medical questions and give explanations.
To ensure the results are correct, we included some useful information in the prompt to help the model answer correctly. We then discarded that information from the questions we put in this dataset.
By doing this, we can have a structured and accurate answer built by the LLM.
This dataset can be used to fine-tune LLMs on medical question answering.
Currently, the dataset is really small; this is version 0.1. In future versions, we will try to build a bigger version and include other languages.
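As a quick-start sketch (column names taken from the dataset metadata: `question` and `response`):

```python
from datasets import load_dataset

# 1,424 question/answer pairs in this 0.1 release, all in a single train split.
ds = load_dataset("guigux/medmcqa_mixtral_openchat_0.1", split="train")

example = ds[0]
print(example["question"])
print(example["response"])
```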
| guigux/medmcqa_mixtral_openchat_0.1 | [
"region:us"
] | 2024-01-12T12:29:00+00:00 | {"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1386971, "num_examples": 1424}], "download_size": 714945, "dataset_size": 1386971}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-01-12T12:40:16+00:00 | [] | [] | TAGS
#region-us
|
# Medmcqa mixtral openchat 0.1
This dataset is a small subset of Medmcqa in which we asked the mixtral / openchat3.5 models to answer medical questions and give explanations.
To ensure the results are correct, we included some useful information in the prompt to help the model answer correctly. We then discarded that information from the questions we put in this dataset.
By doing this, we can have a structured and accurate answer built by the LLM.
This dataset can be used to fine-tune LLMs on medical question answering.
Currently, the dataset is really small; this is version 0.1. In future versions, we will try to build a bigger version and include other languages.
| [
"# Medmcqa mixtral openchat 0.1\n\nThis dataset is a small subset of Medmcqa where asked mixtral / openchat3.5 models to answer medical questions and give some explanations.\nTo ensure the results are correct, we gave some useful information in the prompt to help the model to answer correct. We discarded those useful information from the question we put in this dataset.\nBy doing this, we can have a structured and accurate answer built by the LLM.\n\nThis dataset can be used to finetuned LLMs on medical question answering.\n\nCurrently, the dataset is really small, this is the version 0.1. In the next versions, we will try to build a bigger version and to include other languages."
] | [
"TAGS\n#region-us \n",
"# Medmcqa mixtral openchat 0.1\n\nThis dataset is a small subset of Medmcqa where asked mixtral / openchat3.5 models to answer medical questions and give some explanations.\nTo ensure the results are correct, we gave some useful information in the prompt to help the model to answer correct. We discarded those useful information from the question we put in this dataset.\nBy doing this, we can have a structured and accurate answer built by the LLM.\n\nThis dataset can be used to finetuned LLMs on medical question answering.\n\nCurrently, the dataset is really small, this is the version 0.1. In the next versions, we will try to build a bigger version and to include other languages."
] |
e31fd56b8701a90965e8d91fc42dd2eb2cfc63e0 | - [./file.txt](./file.txt) # Works
- [./subdir/file.txt](./subdir/file.txt) # Works
- [./subdir](./subdir) # Does not work
| juhoinkinen/github-vs-hfhub | [
"region:us"
] | 2024-01-12T12:29:01+00:00 | {} | 2024-01-12T12:38:28+00:00 | [] | [] | TAGS
#region-us
| - ./URL # Works
- ./subdir/URL # Works
- ./subdir # Does not work
| [
"# Works\n- ./subdir/URL # Works\n- ./subdir # Does not work"
] | [
"TAGS\n#region-us \n",
"# Works\n- ./subdir/URL # Works\n- ./subdir # Does not work"
] |
4b74d2aebfc9e0bb9609f0f85a111190764fb326 | # Dataset Card for "dpoOlier"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Jaredquek/DpoOlier | [
"region:us"
] | 2024-01-12T12:57:40+00:00 | {"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 291726, "num_examples": 315}], "download_size": 123621, "dataset_size": 291726}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-01-12T12:58:49+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "dpoOlier"
More Information needed | [
"# Dataset Card for \"dpoOlier\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"dpoOlier\"\n\nMore Information needed"
] |
1559522fa44447c8e10fd8dd22ea78425cc591df | # Dataset Card for "fashion_image_caption_100-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Monsef/fashion_image_caption_100-v2 | [
"region:us"
] | 2024-01-12T13:09:07+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 22820471.0, "num_examples": 100}], "download_size": 22820373, "dataset_size": 22820471.0}} | 2024-01-12T13:09:09+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "fashion_image_caption_100-v2"
More Information needed | [
"# Dataset Card for \"fashion_image_caption_100-v2\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"fashion_image_caption_100-v2\"\n\nMore Information needed"
] |
2479a0244ad827da4bb8c1809cd8e74acb68fd67 | # 2020 Santa Clara LiDaR and Satellite Image Dataset
## Overview
This dataset is a collection of LiDaR and satellite images of Santa Clara, California. It is specifically designed for advanced depth estimation using 2D satellite images. The data was collected from [NOAA](https://noaa-nos-coastal-lidar-pds.s3.amazonaws.com/laz/geoid18/9329/index.html) and [Google Earth](https://www.google.com/earth/).
## Dataset
### Total Dataset Size
- **Number of Objects**: 6,046
- **Size on Disk**: 733.1 GB
### Specified Objects in the Dataset
1. **Edited2020SantaClaraLiDarDataset.geojson**
- **Type**: GeoJSON
- **Size**: 895.2 KB
- **Total Objects**: 1
2. **images_v2/** (Folder)
- **Comment**: RAW Satellite Images
- **Total Size**: 588.0 GB
- **Number of Objects**: 2015
3. **images/** (Folder)
- **Comment**: Compressed Satellite Images
- **Total Size**: 5.3 GB
- **Number of Objects**: 2015
4. **LAZ/** (Folder)
- **Comment**: LiDaR LAZ Files
- **Total Size**: 139.7 GB
- **Number of Objects**: 2015
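As a rough orientation, the snippet below sketches how these three kinds of files might be opened in Python. The tile file names are hypothetical placeholders, the GeoJSON is assumed to be a standard FeatureCollection, and reading `.laz` files with `laspy` requires an extra backend such as `lazrs`.

```python
import json

import laspy              # pip install "laspy[lazrs]" to read compressed .laz tiles
from PIL import Image

# GeoJSON index of the LiDaR tiles / footprints
with open("Edited2020SantaClaraLiDarDataset.geojson") as f:
    index = json.load(f)
print(len(index["features"]), "indexed features")

# One LiDaR tile: georeferenced x/y/z points usable as depth supervision
las = laspy.read("LAZ/example_tile.laz")          # placeholder file name
print(las.header.point_count, float(las.z.min()), float(las.z.max()))

# The matching compressed satellite image
img = Image.open("images/example_tile.jpg")       # placeholder file name
print(img.size)
```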
### Additional Information
#### Creators:
- [Umut YILDIRIM](https://umutyildirim.com)
- [Shreyas Joshi](https://www.linkedin.com/in/shreyas-joshi31/)
#### License:
MIT License
_Note: The dataset is particularly useful for deep learning applications in lidar data processing and analysis, offering a rich source of geospatial data._
_Note: This repo is under MIT License. However, if you intend to use the Google Earth imagery for commercial purposes, you will need to obtain permission through this [Google form](https://docs.google.com/forms/d/e/1FAIpQLSfOpvrAs0xBg1p33X3cd-Q0et-PWX84eGb1-J1uBwqd2TY2Lw/viewform)._ | umuthopeyildirim/2020SantaClaraLiDaRandSatelliteImage | [
"task_categories:depth-estimation",
"size_categories:1K<n<10K",
"license:mit",
"lidar",
"satellite",
"image",
"region:us"
] | 2024-01-12T13:16:46+00:00 | {"license": "mit", "size_categories": ["1K<n<10K"], "task_categories": ["depth-estimation"], "pretty_name": "2020 Santa Clara LiDaR and Satellite Image Dataset", "tags": ["lidar", "satellite", "image"]} | 2024-01-22T17:01:26+00:00 | [] | [] | TAGS
#task_categories-depth-estimation #size_categories-1K<n<10K #license-mit #lidar #satellite #image #region-us
| # 2020 Santa Clara LiDaR and Satellite Image Dataset
## Overview
This dataset is a collection of LiDaR and satellite images of Santa Clara, California. It is specifically designed for advanced depth estimation using 2D satellite images. The data was collected from NOAA and Google Earth.
## Dataset
### Total Dataset Size
- Number of Objects: 6,046
- Size on Disk: 733.1 GB
### Specified Objects in the Dataset
1. Edited2020SantaClaraLiDarDataset.geojson
- Type: GeoJSON
- Size: 895.2 KB
- Total Objects: 1
2. images_v2/ (Folder)
- Comment: RAW Satellite Images
- Total Size: 588.0 GB
- Number of Objects: 2015
3. images/ (Folder)
- Comment: Compressed Satellite Images
- Total Size: 5.3 GB
- Number of Objects: 2015
4. LAZ/ (Folder)
- Comment: LiDaR LAZ Files
- Total Size: 139.7 GB
- Number of Objects: 2015
### Additional Information
#### Creators:
- Umut YILDIRIM
- Shreyas Joshi
#### License:
MIT License
_Note: The dataset is particularly useful for deep learning applications in lidar data processing and analysis, offering a rich source of geospatial data._
_Note: This repo is under MIT License. However, if you intend to use the Google Earth imagery for commercial purposes, you will need to obtain permission through this Google form._ | [
"# 2020 Santa Clara LiDaR and Satellite Image Dataset",
"## Overview\n\nThis dataset is collection of LiDaR and Satellite images of Santa Clara, California. This dataset is specifically designed for advanced depth estimation using 2D satellite images. The dataset is collected from NOAA and Google Earth.",
"## Dataset",
"### Total Dataset Size\n\n- Number of Objects: 6,046\n- Size on Disk: 733.1 GB",
"### Specified Objects in the Dataset\n\n1. Edited2020SantaClaraLiDarDataset.geojson\n - Type: GeoJSON\n - Size: 895.2 KB\n - Total Objects: 1\n\n2. images_v2/ (Folder)\n - Comment: RAW Satellite Images\n - Total Size: 588.0 GB\n - Number of Objects: 2015\n\n3. images/ (Folder)\n - Comment: Compressed Satellite Images\n - Total Size: 5.3 GB\n - Number of Objects: 2015\n\n4. LAZ/ (Folder)\n - Comment: LiDaR LAZ Files\n - Total Size: 139.7 GB\n - Number of Objects: 2015",
"### Additional Information",
"#### Creators;\n\n- Umut YILDIRIM\n- Shreyas Joshi",
"#### License:\n\nMIT License \n\n_Note: The dataset is particularly useful for deep learning applications in lidar data processing and analysis, offering a rich source of geospatial data._\n\n_Note: This repo is under MIT License. However, if you intend to use the Google Earth imagery for commercial purposes, you will need to obtain permission through this Google form._"
] | [
"TAGS\n#task_categories-depth-estimation #size_categories-1K<n<10K #license-mit #lidar #satellite #image #region-us \n",
"# 2020 Santa Clara LiDaR and Satellite Image Dataset",
"## Overview\n\nThis dataset is collection of LiDaR and Satellite images of Santa Clara, California. This dataset is specifically designed for advanced depth estimation using 2D satellite images. The dataset is collected from NOAA and Google Earth.",
"## Dataset",
"### Total Dataset Size\n\n- Number of Objects: 6,046\n- Size on Disk: 733.1 GB",
"### Specified Objects in the Dataset\n\n1. Edited2020SantaClaraLiDarDataset.geojson\n - Type: GeoJSON\n - Size: 895.2 KB\n - Total Objects: 1\n\n2. images_v2/ (Folder)\n - Comment: RAW Satellite Images\n - Total Size: 588.0 GB\n - Number of Objects: 2015\n\n3. images/ (Folder)\n - Comment: Compressed Satellite Images\n - Total Size: 5.3 GB\n - Number of Objects: 2015\n\n4. LAZ/ (Folder)\n - Comment: LiDaR LAZ Files\n - Total Size: 139.7 GB\n - Number of Objects: 2015",
"### Additional Information",
"#### Creators;\n\n- Umut YILDIRIM\n- Shreyas Joshi",
"#### License:\n\nMIT License \n\n_Note: The dataset is particularly useful for deep learning applications in lidar data processing and analysis, offering a rich source of geospatial data._\n\n_Note: This repo is under MIT License. However, if you intend to use the Google Earth imagery for commercial purposes, you will need to obtain permission through this Google form._"
] |
47159fba686d6ecf3b180262c90ce926bcc69525 |
# Dataset Card for escagleu-64K corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Author](#author)
- [Contact Information](#contact-information)
- [Copyright](#copyright)
  - [Licensing information](#licensing-information)
- [Funding](#funding)
## Dataset Description
### Dataset Summary
escagleu-64k is a parallel corpus comprising 64184 sentences translated among Spanish, Catalan, Valencian Catalan, Galician, and Basque.
The original sentences are in Spanish and come from the [Spanish Common Voice Corpus](https://github.com/common-voice/common-voice/tree/main/server/data/es).
We prepared this corpus with the aim of creating a parallel speech dataset among these languages using the [Common Voice](https://commonvoice.mozilla.org) platform within the framework of the project [Ilenia](https://proyectoilenia.es/).
*Warning: The translation into Galician and Basque was done through machine translation, and post-editing is required. Additionally, we have identified some errors in the source sentences. A subsequent version of the corpus will include reviewed translations and source sentences.*
This work is licensed under a [CC0 International License](https://creativecommons.org/publicdomain/zero/1.0/).
### Supported Tasks and Leaderboards
This dataset can be used for training Machine Translation (MT) models, Speech-to-Text translation models, and Speech-to-Speech translation models.
### Languages
This multilingual dataset is in Spanish (es), Catalan (ca), Valencian Catalan (ca-va), Galician (gl) and Basque (eu).
## Dataset Structure
Five separate TSV files are provided with the sentences sorted in the same order:
* escagleu-64k_ca.tsv: contains 64184 sentences translated into Catalan.
* escagleu-64k_ca_va.tsv: contains 64184 sentences adapted into Valencian Catalan.
* escagleu-64k_es.tsv: contains 64184 sentences in Spanish (authentic).
* escagleu-64k_gl.tsv: contains 64184 sentences translated into Galician using MT.
* escagleu-64k_eu.tsv: contains 64184 sentences translated into Basque using MT.
In addition, a TSV is provided with the sentences aligned in all the languages:
* escagleu-64k.tsv: the sentences in all the languages of the corpus.
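A minimal sketch of how the per-language files can be read and aligned positionally (it assumes one sentence per line and no header row, which is not stated explicitly above):

```python
import pandas as pd

languages = ["es", "ca", "ca_va", "gl", "eu"]
frames = [
    pd.read_csv(f"escagleu-64k_{lang}.tsv", sep="\t", header=None, names=[lang],
                quoting=3)  # csv.QUOTE_NONE: keep quotation marks inside sentences
    for lang in languages
]

# All files keep the sentences in the same order, so a positional concat aligns them.
aligned = pd.concat(frames, axis=1)
print(aligned.head())
```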
### Data Splits
The dataset contains a single split.
## Dataset Creation
### Curation Rationale
We created this corpus with the goal of establishing a parallel speech dataset among Spanish, Catalan, Galician, and Basque using the Common Voice platform. The resulting dataset will be employed to train Speech-to-Text and Speech-to-Speech translation models.
### Source Data
The original sentences are in Spanish and come from the [Spanish Common Voice Corpus](https://github.com/common-voice/common-voice/tree/main/server/data/es).
#### Initial Data Collection and Normalization
We extracted a set of 223,261 sentences from the [Spanish Common Voice Corpus](https://github.com/common-voice/common-voice/tree/main/server/data/es) v.07.
An automatic selection was performed, removing sentences that:
* Were repeated.
* Exceeded 14 words in length.
* Ended with a preposition.
* Were in lowercase but Out of Vocabulary (OOV), utilizing the Hunspell dictionary.
* Contained repeated words.
* Featured characters not existing in Spanish or sequences of characters impossible in Spanish.
This filtering process resulted in the current selection of sentences. For some of these sentences, we conducted manual or automatic adjustments, such as capitalization, punctuation fixes, etc.
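For illustration only, here is a simplified re-implementation of these selection rules; the preposition list, the character whitelist and the Hunspell-based OOV check are stand-ins for the actual, unpublished pipeline:

```python
PREPOSITIONS = {"a", "de", "en", "con", "por", "para", "sin", "sobre"}   # partial list
ALLOWED = set("abcdefghijklmnopqrstuvwxyzáéíóúüñ0123456789 .,;:¿?¡!'\"()-")

def keep_sentence(sentence: str, seen: set, is_oov_lowercase) -> bool:
    words = sentence.split()
    if not words or sentence in seen:
        return False                                    # empty or repeated sentence
    if len(words) > 14:
        return False                                    # longer than 14 words
    if words[-1].lower().strip(".,;:!?") in PREPOSITIONS:
        return False                                    # ends with a preposition
    if len({w.lower() for w in words}) < len(words):
        return False                                    # contains repeated words
    if any(ch.lower() not in ALLOWED for ch in sentence):
        return False                                    # characters not used in Spanish
    if is_oov_lowercase(sentence):
        return False                                    # lowercase but OOV (Hunspell)
    seen.add(sentence)
    return True
```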
The selected sentences were translated from Spanish into Catalan by a professional translation company. Subsequently, they were adapted into Valencian by an expert team from the Universitat d'Alacant.
The same corpus of sentences was translated into Galician using [NOS-MT-OpenNMT-es-gl](https://huggingface.co/proxectonos/Nos_MT-OpenNMT-es-gl) and into Basque using [itzuli](https://www.euskadi.eus/itzuli/).
A subsequent version of the corpus will include reviewed translations and source sentences.
#### Who are the source language producers?
The [Common Voice](https://commonvoice.mozilla.org) is a corpus designed for speech-related tasks. The recorded sentences are sourced from diverse origins. For more information, please refer to the [project repository](https://github.com/common-voice).
### Annotations
This corpus doesn't have annotations.
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
The original sentences are in Spanish and come from the [Spanish Common Voice Corpus](https://github.com/common-voice/common-voice/tree/main/server/data/es). To our knowledge, there are no private person’s personal identifiers or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
We expect that this corpus will contribute to the development of speech technologies in the targeted languages.
### Discussion of Biases
We are aware that some of the sentences in the corpus may convey stereotypes. Nonetheless, we have not applied any steps to reduce their impact.
### Other Known Limitations
The translation into Galician and Basque was done through machine translation, and post-editing is required. Additionally, we have identified some errors in the source sentences. A subsequent version of the corpus will include reviewed translations and source sentences.
## Additional Information
### Authors
Grupo de Procesamiento del Lenguaje Natural y Sistemas de Información, Centro de Inteligencia Digital (CENID, Universidad de Alicante),
[Hizkuntza eta Politika Saila (Eusko Jaurlaritza)](https://www.euskadi.eus/itzuli),
HiTZ Center - Aholab, University of the Basque Country UPV/EHU,
Language Technologies Unit (LangTech) at the Barcelona Supercomputing Center (BSC),
Proxecto Nós (Universidade de Santiago de Compostela).
### Contact information
For further information, please send an email to [email protected].
### Copyright
Copyright Language Technologies Unit at Barcelona Supercomputing Center (2023).
### Licensing information
This work is licensed under a [CC0 1.0 license](https://creativecommons.org/publicdomain/zero/1.0/).
### Funding
This work is funded by the Ministerio de Asuntos Económicos y Transformación Digital and Plan de Recuperación, Transformación y Resiliencia - Funded by EU – NextGenerationEU within the framework of the [project ILENIA](https://proyectoilenia.es/) with reference 2022/TL22/00215337.
| projecte-aina/escagleu-64k | [
"task_categories:translation",
"task_categories:audio-to-audio",
"task_categories:automatic-speech-recognition",
"annotations_creators:found",
"annotations_creators:expert-generated",
"annotations_creators:machine-generated",
"language_creators:crowdsourced",
"multilinguality:translation",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:es",
"language:ca",
"language:gl",
"language:eu",
"license:cc0-1.0",
"region:us"
] | 2024-01-12T13:23:58+00:00 | {"annotations_creators": ["found", "expert-generated", "machine-generated"], "language_creators": ["crowdsourced"], "language": ["es", "ca", "gl", "eu"], "license": "cc0-1.0", "multilinguality": ["translation"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["translation", "audio-to-audio", "automatic-speech-recognition"], "task_ids": [], "pretty_name": "escagleu-64K"} | 2024-01-31T16:41:47+00:00 | [] | [
"es",
"ca",
"gl",
"eu"
] | TAGS
#task_categories-translation #task_categories-audio-to-audio #task_categories-automatic-speech-recognition #annotations_creators-found #annotations_creators-expert-generated #annotations_creators-machine-generated #language_creators-crowdsourced #multilinguality-translation #size_categories-10K<n<100K #source_datasets-original #language-Spanish #language-Catalan #language-Galician #language-Basque #license-cc0-1.0 #region-us
|
# Dataset Card for escagleu-64K corpus
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks
- Languages
- Dataset Structure
- Data Splits
- Dataset Creation
- Source Data
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Author
- Contact Information
- Copyright
- Licensing information
- Funding
## Dataset Description
### Dataset Summary
escagleu-64k is a parallel corpus comprising 64184 sentences translated among Spanish, Catalan, Valencian Catalan, Galician, and Basque.
The original sentences are in Spanish and come from the Spanish Common Voice Corpus.
We prepared this corpus with the aim of creating a parallel speech dataset among these languages using the Common Voice platform within the framework of the project Ilenia.
*Warning: The translation into Galician and Basque was done through machine translation, and post-editing is required. Additionally, we have identified some errors in the source sentences. A subsequent version of the corpus will include reviewed translations and source sentences.*
This work is licensed under a CC0 International License.
### Supported Tasks and Leaderboards
This dataset can be used for training Machine Translation (MT) models, Speech-to-Text translation models, and Speech-to-Speech translation models.
### Languages
This multilingual dataset is in Spanish (es), Catalan (ca), Valencian Catalan (ca-va), Galician (gl) and Basque (eu).
## Dataset Structure
Five separate TSV files are provided with the sentences sorted in the same order:
* escagleu-64k_ca.tsv: contains 64184 sentences translated into Catalan.
* escagleu-64k_ca_va.tsv: contains 64184 sentences adapted into Valencian Catalan.
* escagleu-64k_es.tsv: contains 64184 sentences in Spanish (authentic).
* escagleu-64k_gl.tsv: contains 64184 sentences translated into Galician using MT.
* escagleu-64k_eu.tsv: contains 64184 sentences translated into Basque using MT.
In addition, a TSV is provided with the sentences aligned in all the languages:
* URL: the sentences in all the languages of the corpus.
### Data Splits
The dataset contains a single split.
## Dataset Creation
### Curation Rationale
We created this corpus with the goal of establishing a parallel speech dataset among Spanish, Catalan, Galician, and Basque using the Common Voice platform. The resulting dataset will be employed to train Speech-to-Text and Speech-to-Speech translation models.
### Source Data
The original sentences are in Spanish and come from the Spanish Common Voice Corpus.
#### Initial Data Collection and Normalization
We extracted a set of 223,261 sentences from the Spanish Common Voice Corpus v.07.
An automatic selection was performed, removing sentences that:
* Were repeated.
* Exceeded 14 words in length.
* Ended with a preposition.
* Were in lowercase but Out of Vocabulary (OOV), utilizing the Hunspell dictionary.
* Contained repeated words.
* Featured characters not existing in Spanish or sequences of characters impossible in Spanish.
This filtering process resulted in the current selection of sentences. For some of these sentences, we conducted manual or automatic adjustments, such as capitalization, punctuation fixes, etc.
The selected sentences were translated from Spanish into Catalan by a professional translation company. Subsequently, they were adapted into Valencian by an expert team from the Universitat d'Alacant.
The same corpus of sentences was translated into Galician using NOS-MT-OpenNMT-es-gl and into Basque using itzuli.
A subsequent version of the corpus will include reviewed translations and source sentences.
#### Who are the source language producers?
The Common Voice is a corpus designed for speech-related tasks. The recorded sentences are sourced from diverse origins. For more information, please refer to the project repository.
### Annotations
This corpus doesn't have annotations.
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
The original sentences are in Spanish and come from the Spanish Common Voice Corpus. To our knowledge, there are no private person’s personal identifiers or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
We expect that this corpus will contribute to the development of speech technologies in the targeted languages.
### Discussion of Biases
We are aware that some of the sentences in the corpus may convey stereotypes. Nonetheless, we have not applied any steps to reduce their impact.
### Other Known Limitations
The translation into Galician and Basque was done through machine translation, and post-editing is required. Additionally, we have identified some errors in the source sentences. A subsequent version of the corpus will include reviewed translations and source sentences.
## Additional Information
### Authors
Grupo de Procesamiento del Lenguaje Natural y Sistemas de Información, Centro de Inteligencia Digital (CENID, Universidad de Alicante),
Hizkuntza eta Politika Saila (Eusko Jaurlaritza),
HiTZ Center - Aholab, University of the Basque Country UPV/EHU,
Language Technologies Unit (LangTech) at the Barcelona Supercomputing Center (BSC),
Proxecto Nós (Universidade de Santiago de Compostela).
### Contact information
For further information, please send an email to langtech@URL.
### Copyright
Copyright Language Technologies Unit at Barcelona Supercomputing Center (2023).
### Licensing information
This work is licensed under a CC0 1.0 license.
### Funding
This work is funded by the Ministerio de Asuntos Económicos y Transformación Digital and Plan de Recuperación, Transformación y Resiliencia - Funded by EU – NextGenerationEU within the framework of the project ILENIA with reference 2022/TL22/00215337.
| [
"# Dataset Card for escagleu-64K corpus",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Splits\n- Dataset Creation\n - Source Data\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Author\n - Contact Information\n - Copyright\n - Licensing information\n - Funding",
"## Dataset Description",
"### Dataset Summary\n\nescagleu-64k is a parallel corpus comprising 64184 sentences translated among Spanish, Catalan, Valencian Catalan, Galician, and Basque.\n\nThe original sentences are in Spanish and come from the Spanish Common Voice Corpus.\n\nWe prepared this corpus with the aim of creating a parallel speech dataset among these languages using the Common Voice platform between the frame of the project Ilenia.\n\n*Warning: The translation into Galician and Basque was done through machine translation, and post-editing is required. Additionally, we have identified some errors in the source sentences. A subsequent version of the corpus will include reviewed translations and source sentences.*\n\nThis work is licensed under a CC0 International License.",
"### Supported Tasks and Leaderboards\n\nThis dataset can be used for training Machine Translation (MT) models, Speech-to-Text translation models, and Speech-to-Speech translation models.",
"### Languages\n\nThis multilingual dataset is in Spanish (es), Catalan (ca), Valencian Catalan (ca-va), Galician (gl) and Basque (eu).",
"## Dataset Structure\nFive separated tsv files are provided with the sentences sorted in the same order:\n\n* escagleu-64k_ca.tsv: contains 64184 sentences translated into Catalan.\n* escagleu-64k_ca_va.tsv: contains 64184 sentences adapted into Valencian Catalan.\n* escagleu-64k_es.tsv: contains 64184 sentences in Spanish (authentic).\n* escagleu-64k_gl.tsv: contains 64184 sentences translated into Galician using MT. \n* escagleu-64k_es.tsv: contains 64184 sentences translated into Basque using MT.\n \nIn addition, a tsv is provided with the sentences aligned in all the languages:\n\n* URL: the sentences in all the languages of the corpus.",
"### Data Splits\n\nThe dataset contains a single split.",
"## Dataset Creation",
"### Curation Rationale\n\nWe created this corpus with the goal of establishing a parallel speech dataset among Spanish, Catalan, Galician, and Basque using the Common Voice platform. The resulting dataset will be employed to train Speech-to-Text and Speech-to-Speech translation models.",
"### Source Data\n\nThe original sentences are in Spanish and come from the Spanish Common Voice Corpus.",
"#### Initial Data Collection and Normalization\n\nWe extracted a set of 223,261 sentences from the Spanish Common Voice Corpus v.07.\n\nAn automatic selection was performed, removing sentences that:\n\n* Were repeated.\n* Exceeded 14 words in length.\n* Ended with a preposition.\n* Were in lowercase but Out of Vocabulary (OOV), utilizing the Hunspell dictionary.\n* Contained repeated words.\n* Featured characters not existing in Spanish or sequences of characters impossible in Spanish.\n\nThis filtering process resulted in the current selection of sentences. For some of these sentences, we conducted manual or automatic adjustments, such as capitalization, punctuation fixes, etc.\n\nThe selected sentences were translated from Spanish into Catalan by a professional translation company. Subsequently, they were adapted into Valencian by an expert team from the Universitat d'Alacant.\n\nThe same corpus of sentences was translated into Galician using NOS-MT-OpenNMT-es-gl and into Basque using itzuli.\n\nA subsequent version of the corpus will include reviewed translations and source sentences.",
"#### Who are the source language producers?\n\nThe Common Voice is a corpus designed for speech-related tasks. The recorded sentences are sourced from diverse origins. For more information, please refer to the project repository.",
"### Annotations\n\nThis corpus doesn't have annotations.",
"#### Annotation process\n\nN/A",
"#### Who are the annotators?\n\nN/A",
"### Personal and Sensitive Information\n\nThe original sentences are in Spanish and come from the Spanish Common Voice Corpus. To our knowledge, there are no private person’s personal identifiers or sensitive information.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nWe expect that this corpus will contribute to the development of speech technologies in the targeted languages.",
"### Discussion of Biases\n\nWe are aware that some of the sentences in the corpus may convey stereotypes. Nonetheless, we have not applied any steps to reduce their impact.",
"### Other Known Limitations\n\nThe translation into Galician and Basque was done through machine translation, and post-editing is required. Additionally, we have identified some errors in the source sentences. A subsequent version of the corpus will include reviewed translations and source sentences.",
"## Additional Information",
"### Authors\nGrupo de Procesamiento del Lenguaje Natural y Sistemas de Información, Centro de Inteligencia Digital (CENID, Universidad de Alicante),\n\nHizkuntza eta Politika Saila (Eusko Jaurlaritza),\n\nHiTZ Center - Aholab, University of the Basque Country UPV/EHU, \n\nLanguage Technologies Unit (LangTech) at the Barcelona Supercomputing Center (BSC),\n\nProxecto Nós (Universidade de Santiago de Compostela).",
"### Contact information\nFor further information, please send an email to langtech@URL.",
"### Copyright\nCopyright Language Technologies Unit at Barcelona Supercomputing Center (2023).",
"### Licensing information\nThis work is licensed under a CC.",
"### Funding\n\nThis work is funded by the Ministerio de Asuntos Económicos y Transformación Digital and Plan de Recuperación, Transformación y Resiliencia - Funded by EU – NextGenerationEU within the framework of the project ILENIA with reference 2022/TL22/00215337."
] | [
"TAGS\n#task_categories-translation #task_categories-audio-to-audio #task_categories-automatic-speech-recognition #annotations_creators-found #annotations_creators-expert-generated #annotations_creators-machine-generated #language_creators-crowdsourced #multilinguality-translation #size_categories-10K<n<100K #source_datasets-original #language-Spanish #language-Catalan #language-Galician #language-Basque #license-cc0-1.0 #region-us \n",
"# Dataset Card for escagleu-64K corpus",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Splits\n- Dataset Creation\n - Source Data\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Author\n - Contact Information\n - Copyright\n - Licensing information\n - Funding",
"## Dataset Description",
"### Dataset Summary\n\nescagleu-64k is a parallel corpus comprising 64184 sentences translated among Spanish, Catalan, Valencian Catalan, Galician, and Basque.\n\nThe original sentences are in Spanish and come from the Spanish Common Voice Corpus.\n\nWe prepared this corpus with the aim of creating a parallel speech dataset among these languages using the Common Voice platform between the frame of the project Ilenia.\n\n*Warning: The translation into Galician and Basque was done through machine translation, and post-editing is required. Additionally, we have identified some errors in the source sentences. A subsequent version of the corpus will include reviewed translations and source sentences.*\n\nThis work is licensed under a CC0 International License.",
"### Supported Tasks and Leaderboards\n\nThis dataset can be used for training Machine Translation (MT) models, Speech-to-Text translation models, and Speech-to-Speech translation models.",
"### Languages\n\nThis multilingual dataset is in Spanish (es), Catalan (ca), Valencian Catalan (ca-va), Galician (gl) and Basque (eu).",
"## Dataset Structure\nFive separated tsv files are provided with the sentences sorted in the same order:\n\n* escagleu-64k_ca.tsv: contains 64184 sentences translated into Catalan.\n* escagleu-64k_ca_va.tsv: contains 64184 sentences adapted into Valencian Catalan.\n* escagleu-64k_es.tsv: contains 64184 sentences in Spanish (authentic).\n* escagleu-64k_gl.tsv: contains 64184 sentences translated into Galician using MT. \n* escagleu-64k_es.tsv: contains 64184 sentences translated into Basque using MT.\n \nIn addition, a tsv is provided with the sentences aligned in all the languages:\n\n* URL: the sentences in all the languages of the corpus.",
"### Data Splits\n\nThe dataset contains a single split.",
"## Dataset Creation",
"### Curation Rationale\n\nWe created this corpus with the goal of establishing a parallel speech dataset among Spanish, Catalan, Galician, and Basque using the Common Voice platform. The resulting dataset will be employed to train Speech-to-Text and Speech-to-Speech translation models.",
"### Source Data\n\nThe original sentences are in Spanish and come from the Spanish Common Voice Corpus.",
"#### Initial Data Collection and Normalization\n\nWe extracted a set of 223,261 sentences from the Spanish Common Voice Corpus v.07.\n\nAn automatic selection was performed, removing sentences that:\n\n* Were repeated.\n* Exceeded 14 words in length.\n* Ended with a preposition.\n* Were in lowercase but Out of Vocabulary (OOV), utilizing the Hunspell dictionary.\n* Contained repeated words.\n* Featured characters not existing in Spanish or sequences of characters impossible in Spanish.\n\nThis filtering process resulted in the current selection of sentences. For some of these sentences, we conducted manual or automatic adjustments, such as capitalization, punctuation fixes, etc.\n\nThe selected sentences were translated from Spanish into Catalan by a professional translation company. Subsequently, they were adapted into Valencian by an expert team from the Universitat d'Alacant.\n\nThe same corpus of sentences was translated into Galician using NOS-MT-OpenNMT-es-gl and into Basque using itzuli.\n\nA subsequent version of the corpus will include reviewed translations and source sentences.",
"#### Who are the source language producers?\n\nThe Common Voice is a corpus designed for speech-related tasks. The recorded sentences are sourced from diverse origins. For more information, please refer to the project repository.",
"### Annotations\n\nThis corpus doesn't have annotations.",
"#### Annotation process\n\nN/A",
"#### Who are the annotators?\n\nN/A",
"### Personal and Sensitive Information\n\nThe original sentences are in Spanish and come from the Spanish Common Voice Corpus. To our knowledge, there are no private person’s personal identifiers or sensitive information.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nWe expect that this corpus will contribute to the development of speech technologies in the targeted languages.",
"### Discussion of Biases\n\nWe are aware that some of the sentences in the corpus may convey stereotypes. Nonetheless, we have not applied any steps to reduce their impact.",
"### Other Known Limitations\n\nThe translation into Galician and Basque was done through machine translation, and post-editing is required. Additionally, we have identified some errors in the source sentences. A subsequent version of the corpus will include reviewed translations and source sentences.",
"## Additional Information",
"### Authors\nGrupo de Procesamiento del Lenguaje Natural y Sistemas de Información, Centro de Inteligencia Digital (CENID, Universidad de Alicante),\n\nHizkuntza eta Politika Saila (Eusko Jaurlaritza),\n\nHiTZ Center - Aholab, University of the Basque Country UPV/EHU, \n\nLanguage Technologies Unit (LangTech) at the Barcelona Supercomputing Center (BSC),\n\nProxecto Nós (Universidade de Santiago de Compostela).",
"### Contact information\nFor further information, please send an email to langtech@URL.",
"### Copyright\nCopyright Language Technologies Unit at Barcelona Supercomputing Center (2023).",
"### Licensing information\nThis work is licensed under a CC.",
"### Funding\n\nThis work is funded by the Ministerio de Asuntos Económicos y Transformación Digital and Plan de Recuperación, Transformación y Resiliencia - Funded by EU – NextGenerationEU within the framework of the project ILENIA with reference 2022/TL22/00215337."
] |
6c3a529ea14ac5f94afc2e0fb824fe66299b8f5e | Manually cleaned version of Warhammer 40k novels that contains rows of text that are 10000 characters long, with the last 500 characters being an overlap with the first 500 characters in the next row. Does not include contents page (and dramatis personae) or afterword. | 512duncanl/wh40k_novels | [
"task_categories:text-generation",
"language:en",
"region:us"
] | 2024-01-12T13:38:59+00:00 | {"language": ["en"], "task_categories": ["text-generation"]} | 2024-02-15T03:46:11+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-generation #language-English #region-us
| Manually cleaned version of Warhammer 40k novels that contains rows of text that are 10000 characters long, with the last 500 characters being an overlap with the first 500 characters in the next row. Does not include contents page (and dramatis personae) or afterword. | [] | [
"TAGS\n#task_categories-text-generation #language-English #region-us \n"
] |
a2fbcaf3eb0b1775cbdf553f9edc2842bad8b129 |
# Dataset Card for Basque Parliament Speech Corpus 1.0
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
## Dataset Description
- **Repository:** https://huggingface.co/datasets/gttsehu/basque_parliament_1
- **Paper:** https://arxiv.org/
- **Contact:** [Luis J. Rodriguez-Fuentes](mailto:[email protected])
### Dataset Summary
The Basque Parliament Speech Corpus 1.0 consists of 1462 hours of speech extracted from
Basque Parliament plenary sessions from 2013 to 2022. Encoded as MP3 files, the dataset
contains 759192 transcribed segments either spoken in Basque, Spanish or both (in
Basque and Spanish).
The corpus was created to help the development of speech technology for the Basque
language, which is relatively low-resourced. However, the dataset is also suited to the
development of bilingual ASR systems, i.e. systems that decode speech signals in Basque and/or
Spanish. Given the similarity between Basque and Spanish at the phonetic/phonological
level, acoustic models can be shared by both languages, which helps circumvent
the lack of training data for Basque.
The dataset consists of four splits: `train`, `train_clean`, `dev` and `test`, all of
them containing 3-10 second long speech segments and their corresponding transcriptions.
Besides the transcription, each segment includes a speaker identifier and a language tag
(Spanish, Basque or bilingual).
The `train` split, aimed at estimating acoustic models, was extracted from 2013-2021
sessions, amounting to 1445 hours of speech. The `train_clean` split is a subset of
the `train` split, containing only highly reliable transcriptions. The `dev` and `test`
splits, amounting to 7.6 and 9.6 hours of speech respectively, were extracted from
February 2022 sessions and their transcripts were manually audited.
### Languages
The dataset contains segments either spoken in Basque (`eu`), Spanish (`es`) or both (`bi`).
The language distribution is strongly biased towards Spanish and bilingual segments are
very infrequent.
Duration (in hours) disaggregated per language:
| **Split** | **es** | **eu** | **bi** | **Total** |
|------------:|-------:|-------:|-------:|----------:|
| train | 1018.6 | 409.5 | 17.0 | 1445.1 |
| train_clean | 937.7 | 363.6 | 14.2 | 1315.5 |
| dev | 4.7 | 2.6 | 0.3 | 7.6 |
| test | 6.4 | 2.8 | 0.4 | 9.6 |
Number of segments disaggregated per language:
| **Split** | **es** | **eu** | **bi** | **Total** |
|------------:|-------:|-------:|-------:|----------:|
| train | 524942 | 216201 | 8802 | 749945 |
| train_clean | 469937 | 184950 | 6984 | 661871 |
| dev | 2567 | 1397 | 131 | 4095 |
| test | 3450 | 1521 | 181 | 5152 |
The dataset contains four configs that can be used to select the full set of multilingual
segments or just a subset of them, constrained to a single language:
* `all` : all the segments
* `es` : only the Spanish segments
* `eu` : only the Basque segments
* `bi` : only the bilingual segments
## How to use
You can use the `datasets` library to load the dataset from Python. The dataset can be
downloaded in one call to your local drive by using the `load_dataset` function. For
example, to download the Basque config of the `train` split, simply specify the
desired language config name (i.e., "eu" for Basque) and the split:
```python
from datasets import load_dataset
ds = load_dataset("gttsehu/basque_parliament_1", "eu", split="train")
```
The default config is `all` and if no split is indicated all splits are prepared, so
the next code prepares the full dataset:
```python
from datasets import load_dataset
ds = load_dataset("gttsehu/basque_parliament_1")
```
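Once loaded, each record can be inspected directly. Note that the exact column names (e.g. for the audio, transcription, speaker and language fields) are not listed in this card, so the names below are only an illustrative guess — check `ds.column_names` first:

```python
from datasets import load_dataset

ds = load_dataset("gttsehu/basque_parliament_1", "bi", split="dev")
print(ds.column_names)   # verify the real field names before using them

example = ds[0]
# Typical pattern for speech datasets (illustrative field names only):
# waveform = example["audio"]["array"]
# sampling_rate = example["audio"]["sampling_rate"]
# transcription = example["sentence"]
```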
| gttsehu/basque_parliament_1 | [
"task_categories:automatic-speech-recognition",
"language:es",
"language:eu",
"license:cc0-1.0",
"region:us"
] | 2024-01-12T13:42:45+00:00 | {"language": ["es", "eu"], "license": "cc0-1.0", "task_categories": ["automatic-speech-recognition"], "pretty_name": "Basque Parliament Speech Corpus 1.0"} | 2024-01-15T17:39:25+00:00 | [] | [
"es",
"eu"
] | TAGS
#task_categories-automatic-speech-recognition #language-Spanish #language-Basque #license-cc0-1.0 #region-us
| Dataset Card for Basque Parliament Speech Corpus 1.0
====================================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks
+ Languages
Dataset Description
-------------------
* Repository: URL
* Paper: URL
* Contact: Luis J. Rodriguez-Fuentes
### Dataset Summary
The Basque Parliament Speech Corpus 1.0 consists of 1462 hours of speech extracted from
Basque Parliament plenary sessions from 2013 to 2022. Encoded as MP3 files, the dataset
contains 759192 transcribed segments either spoken in Basque, Spanish or both (in
Basque and Spanish).
The corpus was created to help the development of speech technology for the Basque
language, which is relatively low-resourced. However, the dataset is also suited to the
development of bilingual ASR systems, i.e. systems that decode speech signals in Basque and/or
Spanish. Given the similarity between Basque and Spanish at the phonetic/phonological
level, acoustic models can be shared by both languages, which helps circumvent
the lack of training data for Basque.
The dataset consists of four splits: 'train', 'train\_clean', 'dev' and 'test', all of
them containing 3-10 second long speech segments and their corresponding transcriptions.
Besides the transcription, each segment includes a speaker identifier and a language tag
(Spanish, Basque or bilingual).
The 'train' split, aimed at estimating acoustic models, was extracted from 2013-2021
sessions, amounting to 1445 hours of speech. The 'train\_clean' split is a subset of
the 'train' split, containing only highly reliable transcriptions. The 'dev' and 'test'
splits, amounting to 7.6 and 9.6 hours of speech respectively, were extracted from
February 2022 sessions and their transcripts were manually audited.
### Languages
The dataset contains segments either spoken in Basque ('eu'), Spanish ('es') or both ('bi').
The language distribution is strongly biased towards Spanish and bilingual segments are
very infrequent.
Duration (in hours) disaggregated per language:
Number of segments disaggregated per language:
The dataset contains four configs that can be used to select the full set of multilingual
segments or just a subset of them, constrained to a single language:
* 'all' : all the segments
* 'es' : only the Spanish segments
* 'eu' : only the Basque segments
* 'bi' : only the bilingual segments
How to use
----------
You can use the 'datasets' library to load the dataset from Python. The dataset can be
downloaded in one call to your local drive by using the 'load\_dataset' function. For
example, to download the Basque config of the 'train' split, simply specify the
desired language config name (i.e., "eu" for Basque) and the split:
The default config is 'all' and if no split is indicated all splits are prepared, so
the next code prepares the full dataset:
| [
"### Dataset Summary\n\n\nThe Basque Parliament Speech Corpus 1.0 consists of 1462 hours of speech extracted from\nBasque Parliament plenary sessions from 2013 to 2022. Encoded as MP3 files, the dataset\ncontains 759192 transcribed segments either spoken in Basque, Spanish or both (in\nBasque and Spanish).\n\n\nThe corpus was created to help the development of speech technology for the Basque\nlanguage, which is relatively low-resourced. However, the dataset is suited to the\ndevelopment of bilingual ASR systems, meaning to decode speech signals in Basque and/or\nSpanish. Given the similarity between Basque and Spanish at the phonetic/phonological\nlevel, acoustic models can be shared by both languages, which comes to circumvent\nthe lack of training data for Basque.\n\n\nThe dataset contains of four splits: 'train', 'train\\_clean', 'dev' and 'test', all of\nthem containing 3-10 second long speech segments and their corresponding transcriptions.\nBesides the transcription, each segment includes a speaker identifier and a language tag\n(Spanish, Basque or bilingual).\n\n\nThe 'train' split, aimed at estimating acoustic models, was extracted from 2013-2021\nsessions, amounting to 1445 hours of speech. The 'train\\_clean' split is a subset of\nthe 'train' split, containing only highly reliable transcriptions. The 'dev' and 'test'\nsplits, amounting to 7.6 and 9.6 hours of speech respectively, were extracted from\nFebruary 2022 sessions and their transcripts were manually audited.",
"### Languages\n\n\nThe dataset contains segments either spoken in Basque ('eu'), Spanish ('es') or both ('bi').\nThe language distribution is strongly biased towards Spanish and bilingual segments are\nvery unfrequent.\n\n\nDuration (in hours) disaggregated per language:\n\n\n\nNumber of segments disaggregated per language:\n\n\n\nThe dataset contains four configs that can be used to select the full set of multilingual\nsegments or just a subset of them, constrained to a single language:\n\n\n* 'all' : all the segments\n* 'es' : only the Spanish segments\n* 'eu' : only the Basque segments\n* 'bi' : only the bilingual segments\n\n\nHow to use\n----------\n\n\nYou can use the 'datasets' library to load the dataset from Python. The dataset can be\ndownloaded in one call to your local drive by using the 'load\\_dataset' function. For\nexample, to download the Basque config of the 'train' split, simply specify the\ndesired language config name (i.e., \"eu\" for Basque) and the split:\n\n\nThe default config is 'all' and if no split is indicated all splits are prepared, so\nthe next code prepares the full dataset:"
] | [
"TAGS\n#task_categories-automatic-speech-recognition #language-Spanish #language-Basque #license-cc0-1.0 #region-us \n",
"### Dataset Summary\n\n\nThe Basque Parliament Speech Corpus 1.0 consists of 1462 hours of speech extracted from\nBasque Parliament plenary sessions from 2013 to 2022. Encoded as MP3 files, the dataset\ncontains 759192 transcribed segments either spoken in Basque, Spanish or both (in\nBasque and Spanish).\n\n\nThe corpus was created to help the development of speech technology for the Basque\nlanguage, which is relatively low-resourced. However, the dataset is suited to the\ndevelopment of bilingual ASR systems, meaning to decode speech signals in Basque and/or\nSpanish. Given the similarity between Basque and Spanish at the phonetic/phonological\nlevel, acoustic models can be shared by both languages, which comes to circumvent\nthe lack of training data for Basque.\n\n\nThe dataset contains of four splits: 'train', 'train\\_clean', 'dev' and 'test', all of\nthem containing 3-10 second long speech segments and their corresponding transcriptions.\nBesides the transcription, each segment includes a speaker identifier and a language tag\n(Spanish, Basque or bilingual).\n\n\nThe 'train' split, aimed at estimating acoustic models, was extracted from 2013-2021\nsessions, amounting to 1445 hours of speech. The 'train\\_clean' split is a subset of\nthe 'train' split, containing only highly reliable transcriptions. The 'dev' and 'test'\nsplits, amounting to 7.6 and 9.6 hours of speech respectively, were extracted from\nFebruary 2022 sessions and their transcripts were manually audited.",
"### Languages\n\n\nThe dataset contains segments either spoken in Basque ('eu'), Spanish ('es') or both ('bi').\nThe language distribution is strongly biased towards Spanish and bilingual segments are\nvery unfrequent.\n\n\nDuration (in hours) disaggregated per language:\n\n\n\nNumber of segments disaggregated per language:\n\n\n\nThe dataset contains four configs that can be used to select the full set of multilingual\nsegments or just a subset of them, constrained to a single language:\n\n\n* 'all' : all the segments\n* 'es' : only the Spanish segments\n* 'eu' : only the Basque segments\n* 'bi' : only the bilingual segments\n\n\nHow to use\n----------\n\n\nYou can use the 'datasets' library to load the dataset from Python. The dataset can be\ndownloaded in one call to your local drive by using the 'load\\_dataset' function. For\nexample, to download the Basque config of the 'train' split, simply specify the\ndesired language config name (i.e., \"eu\" for Basque) and the split:\n\n\nThe default config is 'all' and if no split is indicated all splits are prepared, so\nthe next code prepares the full dataset:"
] |
35b3bb9a15bed696a997fc119c6866399da5eb61 |
# Dataset Card for news_classification_with_ents
This dataset has been created with [Argilla](https://docs.argilla.io).
As shown in the sections below, this dataset can be loaded into Argilla as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Dataset Description
- **Homepage:** https://argilla.io
- **Repository:** https://github.com/argilla-io/argilla
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named `argilla.yaml`. This configuration file will be used to configure the dataset when using the `FeedbackDataset.from_huggingface` method in Argilla.
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `FeedbackDataset.from_huggingface` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.FeedbackDataset.from_huggingface("nataliaElv/news_classification_with_ents")
```
### Load with `datasets`
To load this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("nataliaElv/news_classification_with_ents")
```
### Supported Tasks and Leaderboards
This dataset can contain [multiple fields, questions and responses](https://docs.argilla.io/en/latest/conceptual_guides/data_model.html#feedback-dataset) so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the [Dataset Structure section](#dataset-structure).
There are no leaderboards associated with this dataset.
### Languages
[More Information Needed]
## Dataset Structure
### Data in Argilla
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**.
The **fields** are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| text | Text | FieldTypes.text | True | False |
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| label | Label | QuestionTypes.label_selection | True | N/A | ['World', 'Sports', 'Business', 'Sci/Tech'] |
The **suggestions** are human- or machine-generated recommendations for each question to assist the annotator during the annotation process, so those are always linked to the existing questions, and named by appending "-suggestion" and "-suggestion-metadata" to those, containing the value/s of the suggestion and its metadata, respectively. Thus, the possible values are the same as in the table above, but the column name is appended with "-suggestion" and the metadata is appended with "-suggestion-metadata".
The **metadata** is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
| Metadata Name | Title | Type | Values | Visible for Annotators |
| ------------- | ----- | ---- | ------ | ---------------------- |
The **guidelines** are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the [annotation guidelines](#annotation-guidelines) section.
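For orientation, the structure described above (a single text field plus one `label_selection` question with four topic labels) corresponds roughly to the following Argilla 1.x definition. This is only an illustrative sketch, not the exact code used to build this dataset:

```python
import argilla as rg

# Minimal sketch: a FeedbackDataset with the same field, question and
# guidelines as described in the tables above (Argilla 1.x API assumed).
dataset = rg.FeedbackDataset(
    fields=[rg.TextField(name="text")],
    questions=[
        rg.LabelQuestion(
            name="label",
            labels=["World", "Sports", "Business", "Sci/Tech"],
        )
    ],
    guidelines="Classify the articles into one of the four categories.",
)
```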
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"external_id": "record-0",
"fields": {
"text": "Wall St. Bears Claw Back Into the Black (Reuters) Reuters - Short-sellers, Wall Street\u0027s dwindling\\band of ultra-cynics, are seeing green again."
},
"metadata": {
"entities": [
{
"char_end_index": 48,
"char_start_index": 41,
"label": "ORG",
"score": 0.855731189250946,
"span": "Reuters"
},
{
"char_end_index": 86,
"char_start_index": 75,
"label": "MEDIA",
"score": 0.46139127016067505,
"span": "Wall Street"
}
]
},
"responses": [],
"suggestions": [],
"vectors": {}
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"external_id": "record-0",
"label": [],
"label-suggestion": null,
"label-suggestion-metadata": {
"agent": null,
"score": null,
"type": null
},
"metadata": "{\"entities\": [{\"span\": \"Reuters\", \"label\": \"ORG\", \"score\": 0.855731189250946, \"char_start_index\": 41, \"char_end_index\": 48}, {\"span\": \"Wall Street\", \"label\": \"MEDIA\", \"score\": 0.46139127016067505, \"char_start_index\": 75, \"char_end_index\": 86}]}",
"text": "Wall St. Bears Claw Back Into the Black (Reuters) Reuters - Short-sellers, Wall Street\u0027s dwindling\\band of ultra-cynics, are seeing green again."
}
```
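Note that in the `datasets` view the record metadata (including the pre-computed entity suggestions) is serialized as a JSON string. A small sketch of decoding it, assuming the record layout shown above:

```python
import json
from datasets import load_dataset

ds = load_dataset("nataliaElv/news_classification_with_ents", split="train")

record = ds[0]
# The metadata column is a JSON string; decode it to reach the entity
# annotations (span, label, score and character offsets).
entities = json.loads(record["metadata"])["entities"]
for ent in entities:
    print(ent["span"], ent["label"], round(ent["score"], 3))
```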
### Data Fields
Among the dataset fields, we differentiate between the following:
* **Fields:** These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
* **text** is of type `FieldTypes.text`.
* **Questions:** These are the questions that will be asked to the annotators. They can be of different types, such as `RatingQuestion`, `TextQuestion`, `LabelQuestion`, `MultiLabelQuestion`, and `RankingQuestion`.
* **label** is of type `QuestionTypes.label_selection` with the following allowed values ['World', 'Sports', 'Business', 'Sci/Tech'].
* **Suggestions:** As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.
* (optional) **label-suggestion** is of type `QuestionTypes.label_selection` with the following allowed values ['World', 'Sports', 'Business', 'Sci/Tech'].
Additionally, we also have two more fields that are optional and are the following:
* **metadata:** This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the `metadata_properties` defined in the dataset configuration file in `argilla.yaml`.
* **external_id:** This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
Classify the articles into one of the four categories.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | nataliaElv/news_classification_with_ents | [
"size_categories:1K<n<10K",
"rlfh",
"argilla",
"human-feedback",
"region:us"
] | 2024-01-12T14:23:17+00:00 | {"size_categories": "1K<n<10K", "tags": ["rlfh", "argilla", "human-feedback"]} | 2024-01-12T14:23:20+00:00 | [] | [] | TAGS
#size_categories-1K<n<10K #rlfh #argilla #human-feedback #region-us
| Dataset Card for news\_classification\_with\_ents
=================================================
This dataset has been created with Argilla.
As shown in the sections below, this dataset can be loaded into Argilla as explained in Load with Argilla, or used directly with the 'datasets' library in Load with 'datasets'.
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper:
* Leaderboard:
* Point of Contact:
### Dataset Summary
This dataset contains:
* A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\_huggingface' method in Argilla.
* Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\_huggingface' and can be loaded independently using the 'datasets' library via 'load\_dataset'.
* The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.
### Load with Argilla
To load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code:
### Load with 'datasets'
To load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code:
### Supported Tasks and Leaderboards
This dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section.
There are no leaderboards associated with this dataset.
### Languages
Dataset Structure
-----------------
### Data in Argilla
The dataset is created in Argilla with: fields, questions, suggestions, metadata, vectors, and guidelines.
The fields are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
The questions are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label\_selection, multi\_label\_selection, or ranking.
The suggestions are human or machine generated recommendations for each question to assist the annotator during the annotation process, so those are always linked to the existing questions, and named appending "-suggestion" and "-suggestion-metadata" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above, but the column name is appended with "-suggestion" and the metadata is appended with "-suggestion-metadata".
The metadata is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\_properties' defined in the dataset configuration file in 'URL'.
The guidelines, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the annotation guidelines section.
### Data Instances
An example of a dataset instance in Argilla looks as follows:
While the same record in HuggingFace 'datasets' looks as follows:
### Data Fields
Among the dataset fields, we differentiate between the following:
* Fields: These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.
+ text is of type 'URL'.
* Questions: These are the questions that will be asked to the annotators. They can be of different types, such as 'RatingQuestion', 'TextQuestion', 'LabelQuestion', 'MultiLabelQuestion', and 'RankingQuestion'.
+ label is of type 'QuestionTypes.label\_selection' with the following allowed values ['World', 'Sports', 'Business', 'Sci/Tech'].
* Suggestions: As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.
+ (optional) label-suggestion is of type 'QuestionTypes.label\_selection' with the following allowed values ['World', 'Sports', 'Business', 'Sci/Tech'].
Additionally, we also have two more fields that are optional and are the following:
* metadata: This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\_properties' defined in the dataset configuration file in 'URL'.
* external\_id: This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.
### Data Splits
The dataset contains a single split, which is 'train'.
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation guidelines
Classify the articles into one of the four categories.
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
### Contributions
| [
"### Dataset Summary\n\n\nThis dataset contains:\n\n\n* A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\\_huggingface' method in Argilla.\n* Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\\_huggingface' and can be loaded independently using the 'datasets' library via 'load\\_dataset'.\n* The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.",
"### Load with Argilla\n\n\nTo load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code:",
"### Load with 'datasets'\n\n\nTo load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code:",
"### Supported Tasks and Leaderboards\n\n\nThis dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section.\n\n\nThere are no leaderboards associated with this dataset.",
"### Languages\n\n\nDataset Structure\n-----------------",
"### Data in Argilla\n\n\nThe dataset is created in Argilla with: fields, questions, suggestions, metadata, vectors, and guidelines.\n\n\nThe fields are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\nThe questions are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label\\_selection, multi\\_label\\_selection, or ranking.\n\n\n\nThe suggestions are human or machine generated recommendations for each question to assist the annotator during the annotation process, so those are always linked to the existing questions, and named appending \"-suggestion\" and \"-suggestion-metadata\" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above, but the column name is appended with \"-suggestion\" and the metadata is appended with \"-suggestion-metadata\".\n\n\nThe metadata is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n\n\n\nThe guidelines, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the annotation guidelines section.",
"### Data Instances\n\n\nAn example of a dataset instance in Argilla looks as follows:\n\n\nWhile the same record in HuggingFace 'datasets' looks as follows:",
"### Data Fields\n\n\nAmong the dataset fields, we differentiate between the following:\n\n\n* Fields: These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\t+ text is of type 'URL'.\n* Questions: These are the questions that will be asked to the annotators. They can be of different types, such as 'RatingQuestion', 'TextQuestion', 'LabelQuestion', 'MultiLabelQuestion', and 'RankingQuestion'.\n\n\n\t+ label is of type 'QuestionTypes.label\\_selection' with the following allowed values ['World', 'Sports', 'Business', 'Sci/Tech'].\n* Suggestions: As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.\n\n\n\t+ (optional) label-suggestion is of type 'QuestionTypes.label\\_selection' with the following allowed values ['World', 'Sports', 'Business', 'Sci/Tech'].\n\n\nAdditionally, we also have two more fields that are optional and are the following:\n\n\n* metadata: This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n* external\\_id: This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.",
"### Data Splits\n\n\nThe dataset contains a single split, which is 'train'.\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation guidelines\n\n\nClassify the articles into one of the four categories.",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] | [
"TAGS\n#size_categories-1K<n<10K #rlfh #argilla #human-feedback #region-us \n",
"### Dataset Summary\n\n\nThis dataset contains:\n\n\n* A dataset configuration file conforming to the Argilla dataset format named 'URL'. This configuration file will be used to configure the dataset when using the 'FeedbackDataset.from\\_huggingface' method in Argilla.\n* Dataset records in a format compatible with HuggingFace 'datasets'. These records will be loaded automatically when using 'FeedbackDataset.from\\_huggingface' and can be loaded independently using the 'datasets' library via 'load\\_dataset'.\n* The annotation guidelines that have been used for building and curating the dataset, if they've been defined in Argilla.",
"### Load with Argilla\n\n\nTo load with Argilla, you'll just need to install Argilla as 'pip install argilla --upgrade' and then use the following code:",
"### Load with 'datasets'\n\n\nTo load this dataset with 'datasets', you'll just need to install 'datasets' as 'pip install datasets --upgrade' and then use the following code:",
"### Supported Tasks and Leaderboards\n\n\nThis dataset can contain multiple fields, questions and responses so it can be used for different NLP tasks, depending on the configuration. The dataset structure is described in the Dataset Structure section.\n\n\nThere are no leaderboards associated with this dataset.",
"### Languages\n\n\nDataset Structure\n-----------------",
"### Data in Argilla\n\n\nThe dataset is created in Argilla with: fields, questions, suggestions, metadata, vectors, and guidelines.\n\n\nThe fields are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\nThe questions are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label\\_selection, multi\\_label\\_selection, or ranking.\n\n\n\nThe suggestions are human or machine generated recommendations for each question to assist the annotator during the annotation process, so those are always linked to the existing questions, and named appending \"-suggestion\" and \"-suggestion-metadata\" to those, containing the value/s of the suggestion and its metadata, respectively. So on, the possible values are the same as in the table above, but the column name is appended with \"-suggestion\" and the metadata is appended with \"-suggestion-metadata\".\n\n\nThe metadata is a dictionary that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n\n\n\nThe guidelines, are optional as well, and are just a plain string that can be used to provide instructions to the annotators. Find those in the annotation guidelines section.",
"### Data Instances\n\n\nAn example of a dataset instance in Argilla looks as follows:\n\n\nWhile the same record in HuggingFace 'datasets' looks as follows:",
"### Data Fields\n\n\nAmong the dataset fields, we differentiate between the following:\n\n\n* Fields: These are the dataset records themselves, for the moment just text fields are supported. These are the ones that will be used to provide responses to the questions.\n\n\n\t+ text is of type 'URL'.\n* Questions: These are the questions that will be asked to the annotators. They can be of different types, such as 'RatingQuestion', 'TextQuestion', 'LabelQuestion', 'MultiLabelQuestion', and 'RankingQuestion'.\n\n\n\t+ label is of type 'QuestionTypes.label\\_selection' with the following allowed values ['World', 'Sports', 'Business', 'Sci/Tech'].\n* Suggestions: As of Argilla 1.13.0, the suggestions have been included to provide the annotators with suggestions to ease or assist during the annotation process. Suggestions are linked to the existing questions, are always optional, and contain not just the suggestion itself, but also the metadata linked to it, if applicable.\n\n\n\t+ (optional) label-suggestion is of type 'QuestionTypes.label\\_selection' with the following allowed values ['World', 'Sports', 'Business', 'Sci/Tech'].\n\n\nAdditionally, we also have two more fields that are optional and are the following:\n\n\n* metadata: This is an optional field that can be used to provide additional information about the dataset record. This can be useful to provide additional context to the annotators, or to provide additional information about the dataset record itself. For example, you can use this to provide a link to the original source of the dataset record, or to provide additional information about the dataset record itself, such as the author, the date, or the source. The metadata is always optional, and can be potentially linked to the 'metadata\\_properties' defined in the dataset configuration file in 'URL'.\n* external\\_id: This is an optional field that can be used to provide an external ID for the dataset record. This can be useful if you want to link the dataset record to an external resource, such as a database or a file.",
"### Data Splits\n\n\nThe dataset contains a single split, which is 'train'.\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation guidelines\n\n\nClassify the articles into one of the four categories.",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
b3d0a9ea8a3296ab467a277c5610a108f2935117 | ## The news lens:
Start from:
Gender, tech | benchang1110/Taiwan-news | [
"region:us"
] | 2024-01-12T14:40:42+00:00 | {} | 2024-01-12T14:55:18+00:00 | [] | [] | TAGS
#region-us
| ## The news lens:
Start from:
Gender、tech | [
"## The news lens:\nStart from:\nGender、tech"
] | [
"TAGS\n#region-us \n",
"## The news lens:\nStart from:\nGender、tech"
] |
3e8c93244b39635253c47ac24ede2ca05c745b4c | # Dataset Card for "vsums_synthetic_gpt4_peudolabeled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Xapien/vsums_synthetic_gpt4_peudolabeled | [
"region:us"
] | 2024-01-12T14:58:23+00:00 | {"dataset_info": {"features": [{"name": "summary_a", "dtype": "string"}, {"name": "summary_b", "dtype": "string"}, {"name": "likihood_label", "dtype": "float64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 141502, "num_examples": 513}], "download_size": 62463, "dataset_size": 141502}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-01-12T14:58:25+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "vsums_synthetic_gpt4_peudolabeled"
More Information needed | [
"# Dataset Card for \"vsums_synthetic_gpt4_peudolabeled\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"vsums_synthetic_gpt4_peudolabeled\"\n\nMore Information needed"
] |
5e90340ecfa0f20c7c0587f462f5aefc8a634407 | # Chest X-ray
The dataset consists of .dcm files containing **X-ray images of the thorax**. The images are **labeled** by the doctors and accompanied by corresponding annotations in JSON format. The annotations provide detailed information about the **organ structures** present in the chest X-ray images.

# 💴 For Commercial Usage: Full version of the dataset includes 400+ chest x-rays of people with different conditions, leave a request on **[TrainingData](https://trainingdata.pro/data-market/chest-x-ray-dataset?utm_source=huggingface&utm_medium=cpc&utm_campaign=chest-x-rays-oxy)** to buy the dataset
### Types of diseases and conditions in the dataset:
- **Petrifications**
- **Nodule/mass**
- **Infiltration/Consolidation**
- **Fibrosis**
- **Dissemination**
- **Pleural effusion**
- **Hilar enlargement**
- **Annular shadows**
- **Healed rib fracture**
- **Enlarged medinastium**
- **Rib fractures**
- **Pneumothorax**
- **Atelectasis**
## Statistics for the dataset:

The dataset aims to aid in the development and evaluation of algorithms for **automated detection and classification** of thoracic organ **abnormalities and diseases**.
The dataset is valuable for research in **neurology, radiology, and oncology**. It allows the development and evaluation of computer-based algorithms, machine learning models, and deep learning techniques for **automated detection, diagnosis, and classification** of these conditions.
# 💴 Buy the Dataset: This is just an example of the data. Leave a request on [https://trainingdata.pro/data-market](https://trainingdata.pro/data-market/chest-x-ray-dataset?utm_source=huggingface&utm_medium=cpc&utm_campaign=chest-x-rays-oxy) to discuss your requirements, learn about the price and buy the dataset
# Content
### The dataset includes:
- **files**: includes x-ray scans in .dcm format,
- **annotations**: includes annotations in JSON format made for files in the previous folder,
- **visualizations**: includes visualizations of the annotations,
- **.csv file**: includes links to the files and metadata
### File with the extension .csv includes the following information for each media file:
- **dcm_path**: link to access the .dcm file,
- **annotation_path**: link to access the file with annotation in JSON-format,
- **age**: age of the person in the x-ray scan,
- **sex**: gender of the person in the x-ray scan,
- **StudyInstanceUID**: id of the study,
- **Nodule/mass**: whether nodule/mass is observed,
- **Dissemination**: whether dissemination is observed,
- **Annular shadows**: whether annular shadows are observed,
- **Petrifications**: whether petrifications are observed,
- **Pleural effusion**: whether pleural effusion is observed
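Below is a minimal sketch of how the metadata table and one scan might be read, using the column names listed above. It assumes `pandas` and `pydicom` are installed, that the referenced files have been downloaded locally, and that the CSV filename is only a placeholder:

```python
import json

import pandas as pd
import pydicom  # assumed dependency for reading .dcm files

df = pd.read_csv("chest_x_rays.csv")  # placeholder filename for the provided .csv file
row = df.iloc[0]

scan = pydicom.dcmread(row["dcm_path"])        # DICOM pixel data and tags
with open(row["annotation_path"]) as f:
    annotation = json.load(f)                  # organ-structure annotations

print(row["age"], row["sex"], scan.pixel_array.shape)
```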
# Medical data might be collected in accordance with your requirements.
## [TrainingData](https://trainingdata.pro/data-market/chest-x-ray-dataset?utm_source=huggingface&utm_medium=cpc&utm_campaign=chest-x-rays-oxy) provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **<https://www.kaggle.com/trainingdatapro/datasets>**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets**
| TrainingDataPro/chest-x-rays-dataset | [
"license:cc-by-nc-nd-4.0",
"region:us"
] | 2024-01-12T15:08:35+00:00 | {"license": "cc-by-nc-nd-4.0"} | 2024-01-12T15:11:07+00:00 | [] | [] | TAGS
#license-cc-by-nc-nd-4.0 #region-us
| # Chest X-ray
The dataset consists of .dcm files containing X-ray images of the thorax. The images are labeled by the doctors and accompanied by corresponding annotations in JSON format. The annotations provide detailed information about the organ structures present in the chest X-ray images.
"
# Code for paper "Evaluation of a digital ophthalmologist app built by GPT-4V(ision)"

# Images used in this dataset are available by contacting the corresponding author. | Proxen/OMI | [
"region:us"
] | 2024-01-12T15:13:58+00:00 | {} | 2024-02-02T12:56:58+00:00 | [] | [] | TAGS
#region-us
| # Code for paper "Evaluation of a digital ophthalmologist app built by GPT-4V(ision)"
# Images used in this dataset are available by contacting with the corresponding author. | [
"# Code for paper \"Evaluation of a digital ophthalmologist app built by GPT-4V(ision)\"",
"# Images used in this dataset are available by contacting with the corresponding author."
] | [
"TAGS\n#region-us \n",
"# Code for paper \"Evaluation of a digital ophthalmologist app built by GPT-4V(ision)\"",
"# Images used in this dataset are available by contacting with the corresponding author."
] |
9a19b53cca22e749ad80608742e93846b7a2c02f |
# XR Motion Dataset Catalogue
## Overview
The XR Motion Dataset Catalogue, accompanying our paper "Navigating the Kinematic Maze: A Comprehensive Guide to XR Motion Dataset Standards," standardizes and simplifies access to Extended Reality (XR) motion datasets. The catalogue represents our initiative to streamline the usage of kinematic data in XR research by aligning various datasets to a consistent format and structure.
### Dataset Specifications
All datasets in this catalogue have been standardized with the following specifications:
- **Coordinate System:** X (Right), Y (Up), Z (Forward)
- **Rotation Representation:** Quaternions
- **Units of Measurement:** Centimeters for spatial data
- **Time Encoding:** Milliseconds for time-related data
These specifications ensure uniformity and comparability across all datasets in the catalogue.
### Conversion Scripts Repository
The alignment of datasets was facilitated by a series of conversion scripts, which are available in our GitHub repository: [XR Motion Dataset Conversion Scripts](https://github.com/cschell/xr-motion-dataset-conversion-scripts). These scripts detail the process of aligning attribute names, coordinate systems, rotation representations, units of measurement, and time encoding.
### Included Datasets
The catalogue includes the following datasets:
1. [LiebersBeatSaber23](https://doi.org/10.1145/3611659.3615696)
2. [Boxrr23](https://doi.org/10.25350/B5NP4V) – *currently, only a small subset is available*
3. [LiebersHand22](https://doi.org/10.1080/10447318.2022.2120845)
4. [LiebersLabStudy21](https://doi.org/10.1145/3411764.3445528)
5. [MooreCrossDomain23](https://doi.org/10.1109/ISMAR59233.2023.00054)
6. <del>[RMillerBall22](https://github.com/Terascale-All-sensing-Research-Studio/VR-Biometric-Authentication)</del> *request for permissions pending*
7. [VrNet](http://arxiv.org/abs/2306.03381)
8. [WhoIsAlyx](https://doi.org/10.3389/frvir.2023.1272234)
## Installation and Usage
### Loading the Dataset with Hugging Face `datasets` Library
To load a dataset from the catalogue, use the `datasets` library in Python. For example, to load the `WhoIsAlyx` dataset:
```python
from datasets import load_dataset
dataset = load_dataset("cschell/xr-motion-dataset-catalogue", "who_is_alyx", trust_remote_code=True)
```
### Loading Individual Recordings with Pandas
To load individual recordings, you can use `pandas`. Here's an example:
```python
import pandas as pd
file_url_path = "hf://datasets/cschell/xr-motion-dataset-catalogue/who_is_alyx/player_02/2022-01-07.parquet"
recording = pd.read_parquet(file_url_path)
```
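Each recording is a plain `pandas.DataFrame`. The exact column layout differs between the source datasets, but times are always in milliseconds and spatial values in centimeters; a quick, purely illustrative inspection (the column names below are assumptions, not a documented schema):

```python
# Inspect the actual schema of this recording first.
print(recording.shape)
print(recording.columns.tolist())

# Hypothetical column names for illustration only: a millisecond timestamp
# column and centimeter-scaled head position columns.
duration_s = (recording["timestamp"].iloc[-1] - recording["timestamp"].iloc[0]) / 1000.0
head_position_m = recording[["head_pos_x", "head_pos_y", "head_pos_z"]] / 100.0  # cm -> m
```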
## Contributing and Feedback
Contributions and feedback are welcome to enhance the XR Motion Dataset Catalogue. Feel free to open a pull request or contact us directly.
<!--
## Citation
If you use the XR Motion Dataset Catalogue in your research, please cite our paper:
```
@article{your_paper_identifier,
title={Navigating the Kinematic Maze: A Comprehensive Guide to XR Motion Dataset Standards},
author={Your Name and Other Authors},
journal={Journal Name},
year={Year}
}
``` -->
| cschell/xr-motion-dataset-catalogue | [
"language:en",
"kinematic research",
"XR user motions",
"VR user motions",
"AR user motions",
"motions",
"arxiv:2306.03381",
"region:us"
] | 2024-01-12T15:33:50+00:00 | {"language": ["en"], "pretty_name": "XR Motion Dataset Catalogue", "tags": ["kinematic research", "XR user motions", "VR user motions", "AR user motions", "motions"]} | 2024-01-16T10:47:55+00:00 | [
"2306.03381"
] | [
"en"
] | TAGS
#language-English #kinematic research #XR user motions #VR user motions #AR user motions #motions #arxiv-2306.03381 #region-us
|
# XR Motion Dataset Catalogue
## Overview
The XR Motion Dataset Catalogue, accompanying our paper "Navigating the Kinematic Maze: A Comprehensive Guide to XR Motion Dataset Standards," standardizes and simplifies access to Extended Reality (XR) motion datasets. The catalogue represents our initiative to streamline the usage of kinematic data in XR research by aligning various datasets to a consistent format and structure.
### Dataset Specifications
All datasets in this catalogue have been standardized with the following specifications:
- Coordinate System: X (Right), Y (Up), Z (Forward)
- Rotation Representation: Quaternions
- Units of Measurement: Centimeters for spatial data
- Time Encoding: Milliseconds for time-related data
These specifications ensure uniformity and comparability across all datasets in the catalogue.
### Conversion Scripts Repository
The alignment of datasets was facilitated by a series of conversion scripts, which are available in our GitHub repository: XR Motion Dataset Conversion Scripts. These scripts detail the process of aligning attribute names, coordinate systems, rotation representations, units of measurement, and time encoding.
### Included Datasets
The catalogue includes the following datasets:
1. LiebersBeatSaber23
2. Boxrr23 – *currently, only a small subset is available*
3. LiebersHand22
4. LiebersLabStudy21
5. MooreCrossDomain23
6. <del>RMillerBall22</del> *request for permissions pending*
7. VrNet
8. WhoIsAlyx
## Installation and Usage
### Loading the Dataset with Hugging Face 'datasets' Library
To load a dataset from the catalogue, use the 'datasets' library in Python. For example, to load the 'WhoIsAlyx' dataset:
### Loading Individual Recordings with Pandas
To load individual recordings, you can use 'pandas'. Here's an example:
## Contributing and Feedback
Contributions and feedback are welcome to enhance the XR Motion Dataset Catalogue. Feel free to open a pull request or contact us directly.
| [
"# XR Motion Dataset Catalogue",
"## Overview\n\nThe XR Motion Dataset Catalogue, accompanying our paper \"Navigating the Kinematic Maze: A Comprehensive Guide to XR Motion Dataset Standards,\" standardizes and simplifies access to Extended Reality (XR) motion datasets. The catalogue represents our initiative to streamline the usage of kinematic data in XR research by aligning various datasets to a consistent format and structure.",
"### Dataset Specifications\nAll datasets in this catalogue have been standardized with the following specifications:\n- Coordinate System: X (Right), Y (Up), Z (Forward)\n- Rotation Representation: Quaternions\n- Units of Measurement: Centimeters for spatial data\n- Time Encoding: Milliseconds for time-related data\n\nThese specifications ensure uniformity and comparability across all datasets in the catalogue.",
"### Conversion Scripts Repository\nThe alignment of datasets was facilitated by a series of conversion scripts, which are available in our GitHub repository: XR Motion Dataset Conversion Scripts. These scripts detail the process of aligning attribute names, coordinate systems, rotation representations, units of measurement, and time encoding.",
"### Included Datasets\nThe catalogue includes the following datasets:\n1. LiebersBeatSaber23\n2. Boxrr23 – *currently, only a small subset is available*\n3. LiebersHand22\n4. LiebersLabStudy21\n5. MooreCrossDomain23\n6. <del>RMillerBall22</del> *request for permissions pending*\n7. VrNet\n8. WhoIsAlyx",
"## Installation and Usage",
"### Loading the Dataset with Hugging Face 'datasets' Library\nTo load a dataset from the catalogue, use the 'datasets' library in Python. For example, to load the 'WhoIsAlyx' dataset:",
"### Loading Individual Recordings with Pandas\nTo load individual recordings, you can use 'pandas'. Here's an example:",
"## Contributing and Feedback\nContributions and feedback are welcome to enhance the XR Motion Dataset Catalogue. Feel free to open a pull request or contact us directly."
] | [
"TAGS\n#language-English #kinematic research #XR user motions #VR user motions #AR user motions #motions #arxiv-2306.03381 #region-us \n",
"# XR Motion Dataset Catalogue",
"## Overview\n\nThe XR Motion Dataset Catalogue, accompanying our paper \"Navigating the Kinematic Maze: A Comprehensive Guide to XR Motion Dataset Standards,\" standardizes and simplifies access to Extended Reality (XR) motion datasets. The catalogue represents our initiative to streamline the usage of kinematic data in XR research by aligning various datasets to a consistent format and structure.",
"### Dataset Specifications\nAll datasets in this catalogue have been standardized with the following specifications:\n- Coordinate System: X (Right), Y (Up), Z (Forward)\n- Rotation Representation: Quaternions\n- Units of Measurement: Centimeters for spatial data\n- Time Encoding: Milliseconds for time-related data\n\nThese specifications ensure uniformity and comparability across all datasets in the catalogue.",
"### Conversion Scripts Repository\nThe alignment of datasets was facilitated by a series of conversion scripts, which are available in our GitHub repository: XR Motion Dataset Conversion Scripts. These scripts detail the process of aligning attribute names, coordinate systems, rotation representations, units of measurement, and time encoding.",
"### Included Datasets\nThe catalogue includes the following datasets:\n1. LiebersBeatSaber23\n2. Boxrr23 – *currently, only a small subset is available*\n3. LiebersHand22\n4. LiebersLabStudy21\n5. MooreCrossDomain23\n6. <del>RMillerBall22</del> *request for permissions pending*\n7. VrNet\n8. WhoIsAlyx",
"## Installation and Usage",
"### Loading the Dataset with Hugging Face 'datasets' Library\nTo load a dataset from the catalogue, use the 'datasets' library in Python. For example, to load the 'WhoIsAlyx' dataset:",
"### Loading Individual Recordings with Pandas\nTo load individual recordings, you can use 'pandas'. Here's an example:",
"## Contributing and Feedback\nContributions and feedback are welcome to enhance the XR Motion Dataset Catalogue. Feel free to open a pull request or contact us directly."
] |
e46a671b3bffedb54605dc09988abfeae3e204d2 | # Dataset Card for "rap_phase2_11jan_15i_v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | am-infoweb/rap_phase2_11jan_15i_v2 | [
"region:us"
] | 2024-01-12T15:38:05+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 30152327.25, "num_examples": 25740}, {"name": "test", "num_bytes": 10050775.75, "num_examples": 8580}], "download_size": 18151160, "dataset_size": 40203103.0}} | 2024-01-12T15:38:17+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "rap_phase2_11jan_15i_v2"
More Information needed | [
"# Dataset Card for \"rap_phase2_11jan_15i_v2\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"rap_phase2_11jan_15i_v2\"\n\nMore Information needed"
] |
0c3b3e40203c136d8ffcff50e8d5cf4c280bca94 | # Dataset Card for "rap_phase2_11jan_15i_v3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | am-infoweb/rap_phase2_11jan_15i_v3 | [
"region:us"
] | 2024-01-12T15:50:33+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 30152325.0, "num_examples": 25740}, {"name": "test", "num_bytes": 10050775.0, "num_examples": 8580}], "download_size": 18164365, "dataset_size": 40203100.0}} | 2024-01-12T15:50:44+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "rap_phase2_11jan_15i_v3"
More Information needed | [
"# Dataset Card for \"rap_phase2_11jan_15i_v3\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"rap_phase2_11jan_15i_v3\"\n\nMore Information needed"
] |
0b7d026db30b52cb3007ec02234c0acfc551714d |
This repository outlines the methodology for creating training sets aimed at aligning a language model with a specific character and persona.
The process involves utilizing a Direct Preference Optimization (DPO) dataset to steer the model towards embodying the defined character and persona traits.
Following this, a Unified Neutral Alignment (UNA) dataset is employed to moderate any excessive sentiments resulting from the DPO training.
The final step involves merging the model realigned with the UNA dataset into the original DPO-trained model, forming a Unified Language Model Alignment (ULMA).
### DPO Training Set (Target Character and Persona)
1. **Define Character and Persona**:
Precisely define the traits, behaviors, and speech patterns of the intended character and persona, including language style, tone, typical responses, and unique characteristics.
2. **Dataset Construction**:
Develop a dataset that reflects these characteristics through dialogues, monologues, and interactions typical of the persona. Ensure the dataset's diversity to encompass various scenarios and responses.
3. **Annotation**:
Label each dataset instance with preference scores or binary labels, indicating its alignment with the target persona for effective DPO implementation.
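For concreteness, a single DPO preference record for such a persona-alignment set might look like the sketch below. The `prompt`/`chosen`/`rejected` field names follow a common DPO convention and are an assumption; adapt them to whatever your trainer expects:

```python
# Illustrative sketch of one DPO training example: the "chosen" reply is
# written in the target persona, the "rejected" reply is generic/out of character.
dpo_example = {
    "prompt": "How was your day?",
    "chosen": "Oh, splendid as always -- I spent the morning cataloguing my rare teas.",
    "rejected": "It was fine.",
}
```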
### UNA Training Set (Neutralizing Extremes)
1. **Identify Extremes**:
Identify extreme positive or negative sentiments in the context of your character, such as overly aggressive or excessively submissive language.
2. **Neutral Dataset**:
Build a dataset representing neutral interactions and responses, focusing on language and replies that are balanced and free from identified extremes.
3. **Annotation for Neutrality**:
Annotate the dataset to promote a neutral, balanced language style, possibly employing a point-wise preference approach similar to DPO.
### Training and Merging Models
1. **Train Separate Models**:
Train one model using the DPO dataset and subsequently realign it using the UNA dataset. Each model will learn distinct aspects: character alignment and neutralization of extremes.
2. **Merging Models**:
Combining two independently trained models into a single unified model is complex and often requires sophisticated techniques and deep understanding of model architectures. For this, we employ the `LazyMergeKit`.
3. **Evaluation and Adjustment**:
Post-merging, assess the unified model to verify if it achieves the intended balance. Iterative refinement of the training datasets and merging process might be necessary based on evaluation outcomes. | AI-B/CHI | [
"license:unlicense",
"UNA",
"DPO",
"ULMA",
"region:us"
] | 2024-01-12T15:54:28+00:00 | {"license": "unlicense", "pretty_name": "CHI", "tags": ["UNA", "DPO", "ULMA"]} | 2024-01-20T05:54:29+00:00 | [] | [] | TAGS
#license-unlicense #UNA #DPO #ULMA #region-us
|
This repository outlines the methodology for creating training sets aimed at aligning a language model with a specific character and persona.
The process involves utilizing a Direct Preference Optimization (DPO) dataset to steer the model towards embodying the defined character and persona traits.
Following this, a Unified Neutral Alignment (UNA) dataset is employed to moderate any excessive sentiments resulting from the DPO training.
The final step involves merging the model realigned with the UNA dataset into the original DPO-trained model, forming a Unified Language Model Alignment (ULMA).
### DPO Training Set (Target Character and Persona)
1. Define Character and Persona:
Precisely define the traits, behaviors, and speech patterns of the intended character and persona, including language style, tone, typical responses, and unique characteristics.
2. Dataset Construction:
Develop a dataset that reflects these characteristics through dialogues, monologues, and interactions typical of the persona. Ensure the dataset's diversity to encompass various scenarios and responses.
3. Annotation:
Label each dataset instance with preference scores or binary labels, indicating its alignment with the target persona for effective DPO implementation.
### UNA Training Set (Neutralizing Extremes)
1. Identify Extremes:
Identify extreme positive or negative sentiments in the context of your character, such as overly aggressive or excessively submissive language.
2. Neutral Dataset:
Build a dataset representing neutral interactions and responses, focusing on language and replies that are balanced and free from identified extremes.
3. Annotation for Neutrality:
Annotate the dataset to promote a neutral, balanced language style, possibly employing a point-wise preference approach similar to DPO.
### Training and Merging Models
1. Train Separate Models:
Train one model using the DPO dataset and subsequently realign it using the UNA dataset. Each model will learn distinct aspects: character alignment and neutralization of extremes.
2. Merging Models:
Combining two independently trained models into a single unified model is complex and often requires sophisticated techniques and deep understanding of model architectures. For this, we employ the 'LazyMergeKit'.
3. Evaluation and Adjustment:
Post-merging, assess the unified model to verify if it achieves the intended balance. Iterative refinement of the training datasets and merging process might be necessary based on evaluation outcomes. | [
"### DPO Training Set (Target Character and Persona)\n\n1. Define Character and Persona:\n Precisely define the traits, behaviors, and speech patterns of the intended character and persona, including language style, tone, typical responses, and unique characteristics.\n\n2. Dataset Construction:\n Develop a dataset that reflects these characteristics through dialogues, monologues, and interactions typical of the persona. Ensure the dataset's diversity to encompass various scenarios and responses.\n\n3. Annotation:\n Label each dataset instance with preference scores or binary labels, indicating its alignment with the target persona for effective DPO implementation.",
"### UNA Training Set (Neutralizing Extremes)\n\n1. Identify Extremes:\n Identify extreme positive or negative sentiments in the context of your character, such as overly aggressive or excessively submissive language.\n\n2. Neutral Dataset:\n Build a dataset representing neutral interactions and responses, focusing on language and replies that are balanced and free from identified extremes.\n\n3. Annotation for Neutrality:\n Annotate the dataset to promote a neutral, balanced language style, possibly employing a point-wise preference approach similar to DPO.",
"### Training and Merging Models\n\n1. Train Separate Models:\n Train one model using the DPO dataset and subsequently realign it using the UNA dataset. Each model will learn distinct aspects: character alignment and neutralization of extremes.\n\n2. Merging Models:\n Combining two independently trained models into a single unified model is complex and often requires sophisticated techniques and deep understanding of model architectures. For this, we employ the 'LazyMergeKit'.\n\n3. Evaluation and Adjustment:\n Post-merging, assess the unified model to verify if it achieves the intended balance. Iterative refinement of the training datasets and merging process might be necessary based on evaluation outcomes."
] | [
"TAGS\n#license-unlicense #UNA #DPO #ULMA #region-us \n",
"### DPO Training Set (Target Character and Persona)\n\n1. Define Character and Persona:\n Precisely define the traits, behaviors, and speech patterns of the intended character and persona, including language style, tone, typical responses, and unique characteristics.\n\n2. Dataset Construction:\n Develop a dataset that reflects these characteristics through dialogues, monologues, and interactions typical of the persona. Ensure the dataset's diversity to encompass various scenarios and responses.\n\n3. Annotation:\n Label each dataset instance with preference scores or binary labels, indicating its alignment with the target persona for effective DPO implementation.",
"### UNA Training Set (Neutralizing Extremes)\n\n1. Identify Extremes:\n Identify extreme positive or negative sentiments in the context of your character, such as overly aggressive or excessively submissive language.\n\n2. Neutral Dataset:\n Build a dataset representing neutral interactions and responses, focusing on language and replies that are balanced and free from identified extremes.\n\n3. Annotation for Neutrality:\n Annotate the dataset to promote a neutral, balanced language style, possibly employing a point-wise preference approach similar to DPO.",
"### Training and Merging Models\n\n1. Train Separate Models:\n Train one model using the DPO dataset and subsequently realign it using the UNA dataset. Each model will learn distinct aspects: character alignment and neutralization of extremes.\n\n2. Merging Models:\n Combining two independently trained models into a single unified model is complex and often requires sophisticated techniques and deep understanding of model architectures. For this, we employ the 'LazyMergeKit'.\n\n3. Evaluation and Adjustment:\n Post-merging, assess the unified model to verify if it achieves the intended balance. Iterative refinement of the training datasets and merging process might be necessary based on evaluation outcomes."
] |
14cac705d08a68f1df8eb197b57a9f98ae920e54 |
# LitBank
- Project: https://github.com/dbamman/litbank
- Data source: https://github.com/dbamman/litbank/commit/3e50db0ffc033d7ccbb94f4d88f6b99210328ed8
- Crossval splits source: https://github.com/dbamman/lrec2020-coref/commit/e30de53743d36d1ea2c9e7292c69477fa332713c
## Details
Ten configs of the form f"split_{X}" where X is in range(10)
### Features
```
{'coref_chains': List[List[List[int]]],  # list of clusters, each cluster is a list of mentions, each mention is a list of [sent_idx, start, end] inclusive
 'doc_name': str,
 'entities': List[List[{'bio_tags': List[str],
'token': str}]], # list of sentences, each sentence is a list of tokens, each token has a list of bio tags and the token
'events': List[List[{'is_event': bool,
'token': str}]], # list of sentences, each sentence is a list of tokens, each token contains is_event and the token
'meta_info': {'author': str,
'date': str,
'gutenberg_id': str,
'title': str},
'original_text': str,
'quotes': List[{'attribution': str,
'end': {'sent_id': str,
'token_id': str},
'quotation': str,
'quote_id': str,
'start': {'sent_id': str,
'token_id': str}}],
'sentences': List[List[str]],
}
```
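A minimal loading sketch (the repo id and config names come from this card; the field access follows the schema above and is only illustrative):
```python
from datasets import load_dataset

# Ten cross-validation configs are available: split_0 ... split_9.
ds = load_dataset("coref-data/litbank_raw", "split_0")

doc = ds["train"][0]
print(doc["doc_name"], "-", doc["meta_info"]["title"])

# Reconstruct the mentions of the first coreference cluster.
for sent_idx, start, end in doc["coref_chains"][0]:
    mention = doc["sentences"][sent_idx][start:end + 1]  # end index is inclusive
    print(" ".join(mention))
```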
## Citation
```
@inproceedings{bamman-etal-2019-annotated,
title = "An annotated dataset of literary entities",
author = "Bamman, David and
Popat, Sejal and
Shen, Sheng",
editor = "Burstein, Jill and
Doran, Christy and
Solorio, Thamar",
booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)",
month = jun,
year = "2019",
address = "Minneapolis, Minnesota",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N19-1220",
doi = "10.18653/v1/N19-1220",
pages = "2138--2144",
abstract = "We present a new dataset comprised of 210,532 tokens evenly drawn from 100 different English-language literary texts annotated for ACE entity categories (person, location, geo-political entity, facility, organization, and vehicle). These categories include non-named entities (such as {``}the boy{''}, {``}the kitchen{''}) and nested structure (such as [[the cook]{'}s sister]). In contrast to existing datasets built primarily on news (focused on geo-political entities and organizations), literary texts offer strikingly different distributions of entity categories, with much stronger emphasis on people and description of settings. We present empirical results demonstrating the performance of nested entity recognition models in this domain; training natively on in-domain literary data yields an improvement of over 20 absolute points in F-score (from 45.7 to 68.3), and mitigates a disparate impact in performance for male and female entities present in models trained on news data.",
}
```
### Event detection
```
@inproceedings{sims-etal-2019-literary,
title = "Literary Event Detection",
author = "Sims, Matthew and
Park, Jong Ho and
Bamman, David",
editor = "Korhonen, Anna and
Traum, David and
M{\`a}rquez, Llu{\'\i}s",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P19-1353",
doi = "10.18653/v1/P19-1353",
pages = "3623--3634",
abstract = "In this work we present a new dataset of literary events{---}events that are depicted as taking place within the imagined space of a novel. While previous work has focused on event detection in the domain of contemporary news, literature poses a number of complications for existing systems, including complex narration, the depiction of a broad array of mental states, and a strong emphasis on figurative language. We outline the annotation decisions of this new dataset and compare several models for predicting events; the best performing model, a bidirectional LSTM with BERT token representations, achieves an F1 score of 73.9. We then apply this model to a corpus of novels split across two dimensions{---}prestige and popularity{---}and demonstrate that there are statistically significant differences in the distribution of events for prestige.",
}
```
### Coreference
```
@inproceedings{bamman-etal-2020-annotated,
title = "An Annotated Dataset of Coreference in {E}nglish Literature",
author = "Bamman, David and
Lewke, Olivia and
Mansoor, Anya",
editor = "Calzolari, Nicoletta and
B{\'e}chet, Fr{\'e}d{\'e}ric and
Blache, Philippe and
Choukri, Khalid and
Cieri, Christopher and
Declerck, Thierry and
Goggi, Sara and
Isahara, Hitoshi and
Maegaard, Bente and
Mariani, Joseph and
Mazo, H{\'e}l{\`e}ne and
Moreno, Asuncion and
Odijk, Jan and
Piperidis, Stelios",
booktitle = "Proceedings of the Twelfth Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2020.lrec-1.6",
pages = "44--54",
abstract = "We present in this work a new dataset of coreference annotations for works of literature in English, covering 29,103 mentions in 210,532 tokens from 100 works of fiction published between 1719 and 1922. This dataset differs from previous coreference corpora in containing documents whose average length (2,105.3 words) is four times longer than other benchmark datasets (463.7 for OntoNotes), and contains examples of difficult coreference problems common in literature. This dataset allows for an evaluation of cross-domain performance for the task of coreference resolution, and analysis into the characteristics of long-distance within-document coreference.",
language = "English",
ISBN = "979-10-95546-34-4",
}
``` | coref-data/litbank_raw | [
"license:cc-by-4.0",
"region:us"
] | 2024-01-12T16:20:06+00:00 | {"license": "cc-by-4.0", "configs": [{"config_name": "split_0", "data_files": [{"split": "train", "path": "split_0/train-*"}, {"split": "validation", "path": "split_0/validation-*"}, {"split": "test", "path": "split_0/test-*"}]}, {"config_name": "split_1", "data_files": [{"split": "train", "path": "split_1/train-*"}, {"split": "validation", "path": "split_1/validation-*"}, {"split": "test", "path": "split_1/test-*"}]}, {"config_name": "split_2", "data_files": [{"split": "train", "path": "split_2/train-*"}, {"split": "validation", "path": "split_2/validation-*"}, {"split": "test", "path": "split_2/test-*"}]}, {"config_name": "split_3", "data_files": [{"split": "train", "path": "split_3/train-*"}, {"split": "validation", "path": "split_3/validation-*"}, {"split": "test", "path": "split_3/test-*"}]}, {"config_name": "split_4", "data_files": [{"split": "train", "path": "split_4/train-*"}, {"split": "validation", "path": "split_4/validation-*"}, {"split": "test", "path": "split_4/test-*"}]}, {"config_name": "split_5", "data_files": [{"split": "train", "path": "split_5/train-*"}, {"split": "validation", "path": "split_5/validation-*"}, {"split": "test", "path": "split_5/test-*"}]}, {"config_name": "split_6", "data_files": [{"split": "train", "path": "split_6/train-*"}, {"split": "validation", "path": "split_6/validation-*"}, {"split": "test", "path": "split_6/test-*"}]}, {"config_name": "split_7", "data_files": [{"split": "train", "path": "split_7/train-*"}, {"split": "validation", "path": "split_7/validation-*"}, {"split": "test", "path": "split_7/test-*"}]}, {"config_name": "split_8", "data_files": [{"split": "train", "path": "split_8/train-*"}, {"split": "validation", "path": "split_8/validation-*"}, {"split": "test", "path": "split_8/test-*"}]}, {"config_name": "split_9", "data_files": [{"split": "train", "path": "split_9/train-*"}, {"split": "validation", "path": "split_9/validation-*"}, {"split": "test", "path": "split_9/test-*"}]}]} | 2024-01-21T03:21:59+00:00 | [] | [] | TAGS
#license-cc-by-4.0 #region-us
|
# LitBank
- Project: URL
- Data source: URL
- Crossval splits source: URL
## Details
Ten configs of the form f"split_{X}" where X is in range(10)
### Features
### Event detection
### Coreference
| [
"# LitBank\n\n- Project: URL\n- Data source: URL\n- Crossval splits source: URL",
"## Details\n\nTen configs of the form f\"split_{X}\" where X is in range(10)",
"### Features",
"### Event detection",
"### Coreference"
] | [
"TAGS\n#license-cc-by-4.0 #region-us \n",
"# LitBank\n\n- Project: URL\n- Data source: URL\n- Crossval splits source: URL",
"## Details\n\nTen configs of the form f\"split_{X}\" where X is in range(10)",
"### Features",
"### Event detection",
"### Coreference"
] |
b88a83fd0711b512750f1111bf6853272602e93a | # Dataset Card for "rap_phase2_11jan_15i_v4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | am-infoweb/rap_phase2_11jan_15i_v4 | [
"region:us"
] | 2024-01-12T16:22:56+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 30269424.0, "num_examples": 25740}, {"name": "test", "num_bytes": 10089808.0, "num_examples": 8580}], "download_size": 18196547, "dataset_size": 40359232.0}} | 2024-01-12T16:23:07+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "rap_phase2_11jan_15i_v4"
More Information needed | [
"# Dataset Card for \"rap_phase2_11jan_15i_v4\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"rap_phase2_11jan_15i_v4\"\n\nMore Information needed"
] |
ae97a157378c1f5a336498f8b676f1a307d717a5 | See: https://github.com/Jackiexiao/tts-frontend-dataset | realjackiexiao/tts-frontend-dataset | [
"license:mit",
"region:us"
] | 2024-01-12T17:03:52+00:00 | {"license": "mit"} | 2024-01-12T17:30:36+00:00 | [] | [] | TAGS
#license-mit #region-us
| See: URL | [] | [
"TAGS\n#license-mit #region-us \n"
] |
ea926adaf67fba95d30cbd0c5754264e9d61c25d |
The large 3D CT images are saved via Git Large File Storage and can be visualized online: [https://shuyuej.com/Medical-Imaging-Visualizer](https://shuyuej.com/Medical-Imaging-Visualizer).
"license:apache-2.0",
"region:us"
] | 2024-01-12T17:12:44+00:00 | {"license": "apache-2.0"} | 2024-01-25T19:43:20+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
|
The large 3D CT images are saved via Git Large File Storage and can be visualized online: URL
"TAGS\n#license-apache-2.0 #region-us \n"
] |
4f3cf92063e592eee34e402a1863853711720512 |
Expanded now to contain 7750 super high-quality samples. Enjoy responsibly!
# Our 33B-v1.1 model is now live (We'll always be serving the newest model on our web app)!
33B-v1.1 model comes with a "Prompt Enhancement" feature. Access at: https://www.whiterabbitneo.com/
# Our Discord Server
Join us at: https://discord.gg/8Ynkrcbk92 (Updated on Dec 29th. Now permanent link to join)
# Apache-2.0 + WhiteRabbitNeo Extended Version
# Licence: Usage Restrictions
```
You agree not to use the Model or Derivatives of the Model:
- In any way that violates any applicable national or international law or regulation or infringes upon the lawful rights and interests of any third party;
- For military use in any way;
- For the purpose of exploiting, harming or attempting to exploit or harm minors in any way;
- To generate or disseminate verifiably false information and/or content with the purpose of harming others;
- To generate or disseminate inappropriate content subject to applicable regulatory requirements;
- To generate or disseminate personal identifiable information without due authorization or for unreasonable use;
- To defame, disparage or otherwise harass others;
- For fully automated decision making that adversely impacts an individual’s legal rights or otherwise creates or modifies a binding, enforceable obligation;
- For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics;
- To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
- For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories.
``` | WhiteRabbitNeo/WRN-Chapter-1 | [
"license:other",
"region:us"
] | 2024-01-12T17:19:41+00:00 | {"license": "other"} | 2024-01-16T16:06:05+00:00 | [] | [] | TAGS
#license-other #region-us
|
Expanded now to contain 7750 super high-quality samples. Enjoy responsibly!
# Our 33B-v1.1 model is now live (We'll always be serving the newest model on our web app)!
33B-v1.1 model comes with a "Prompt Enhancement" feature. Access at: URL
# Our Discord Server
Join us at: URL (Updated on Dec 29th. Now permanent link to join)
# Apache-2.0 + WhiteRabbitNeo Extended Version
# Licence: Usage Restrictions
| [
"# Our 33B-v1.1 model is now live (We'll always be serving the newest model on our web app)!\n33B-v1.1 model comes with a \"Prompt Enhancement\" feature. Access at: URL",
"# Our Discord Server\nJoin us at: URL (Updated on Dec 29th. Now permanent link to join)",
"# Apache-2.0 + WhiteRabbitNeo Extended Version",
"# Licence: Usage Restrictions"
] | [
"TAGS\n#license-other #region-us \n",
"# Our 33B-v1.1 model is now live (We'll always be serving the newest model on our web app)!\n33B-v1.1 model comes with a \"Prompt Enhancement\" feature. Access at: URL",
"# Our Discord Server\nJoin us at: URL (Updated on Dec 29th. Now permanent link to join)",
"# Apache-2.0 + WhiteRabbitNeo Extended Version",
"# Licence: Usage Restrictions"
] |
6c4b8a76b3247b926d85d06e816c96ecc426e218 | The dataset contains three folders: Train, Test, and Validation. Each of these folders includes two subfolders—one for mask images and another for images without masks. | sumitpardhiya/Face-Mask-Detection | [
"license:apache-2.0",
"region:us"
] | 2024-01-12T17:30:13+00:00 | {"license": "apache-2.0"} | 2024-01-13T06:32:17+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
| The dataset contains three folders: Train, Test, and Validation. Each of these folders includes two subfolders—one for mask images and another for images without masks. | [] | [
"TAGS\n#license-apache-2.0 #region-us \n"
] |
847259530b52c39d1d31bc91ab937568f04509cb |
# Dataset Card for MS COCO Karpathy in Simple English
This dataset contains captions that were rephrased into simple English so that a young child could understand them.
## Dataset Details
### Dataset Description
- **Curated by:** {{ curators | default("[More Information Needed]", true)}}
- **Language(s) (NLP):** {{ language | default("[More Information Needed]", true)}}
- **License:** {{ license | default("[More Information Needed]", true)}}
### Dataset Sources
The processed [MS COCO datasets](https://cocodataset.org/#download) (Karpathy Split) in this repo are based on the following sources:
| Type | MD5 | URL |
|------------|----------------------------------|-----------------------------------------------------------------------------------------------|
| Train | aa31ac474cf6250ebb81d18348a07ed8 | https://storage.googleapis.com/sfr-vision-language-research/datasets/coco_karpathy_train.json |
| Validation | b273847456ef5580e33713b1f7de52a0 | https://storage.googleapis.com/sfr-vision-language-research/datasets/coco_karpathy_val.json |
| Test | 3ff34b0ef2db02d01c37399f6a2a6cd1 | https://storage.googleapis.com/sfr-vision-language-research/datasets/coco_karpathy_test.json |
MS COCO:
- **Download:** https://cocodataset.org/#download
- **Paper:** http://arxiv.org/abs/1405.0312
## Dataset Creation
This dataset was generated by processing the annotations with [Mistral-7B-Instruct-v0.2-AWQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-AWQ).
Prompt used:
```
Rewrite the sentence " + caption + " for a 3 to 4 year old child. Give only one simple sentence. Don't use the word see. Give only a single answer.
```
A filter was applied to only store captions which matched the common output format. A best effort filter was applied to reduce the chance of including multiple example sentences in the output.
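A minimal sketch of the generation step (the model id matches the one referenced above; the chat wrapping, sampling settings, and the exact filter shown here are assumptions, not the original pipeline):
```python
from transformers import pipeline

# Model id as referenced in this card; generation settings are illustrative.
generator = pipeline("text-generation", model="TheBloke/Mistral-7B-Instruct-v0.2-AWQ")

def simplify(caption: str):
    prompt = (
        f'Rewrite the sentence "{caption}" for a 3 to 4 year old child. '
        "Give only one simple sentence. Don't use the word see. Give only a single answer."
    )
    out = generator(f"[INST] {prompt} [/INST]", max_new_tokens=64, return_full_text=False)
    text = out[0]["generated_text"].strip()
    # Best-effort filter: keep only outputs that look like a single short sentence.
    if "\n" in text or text.count(".") > 1:
        return None
    return text
```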
### Curation Rationale
This dataset is useful for experiments with small LLMs that only have a reduced corpus. The dataset is suitable for LAVIS experiments (QFormer training) with a finetuned TinyStories 33M LLM.
| Jotschi/coco-karpathy-simple-en | [
"task_categories:text-generation",
"task_categories:image-to-text",
"task_categories:text-to-image",
"annotations_creators:machine-generated",
"size_categories:n<650k",
"source_datasets:mscoco",
"language:en",
"coco",
"mscoco",
"simple-english",
"arxiv:1405.0312",
"region:us"
] | 2024-01-12T17:35:24+00:00 | {"annotations_creators": ["machine-generated"], "language": ["en"], "size_categories": ["n<650k"], "source_datasets": ["mscoco"], "task_categories": ["text-generation", "image-to-text", "text-to-image"], "pretty_name": "MS COCO Karpathy in Simple English", "license_name": "cc-by-4.0", "license_link": "https://creativecommons.org/licenses/by/4.0/legalcode", "tags": ["coco", "mscoco", "simple-english"]} | 2024-01-14T13:09:39+00:00 | [
"1405.0312"
] | [
"en"
] | TAGS
#task_categories-text-generation #task_categories-image-to-text #task_categories-text-to-image #annotations_creators-machine-generated #size_categories-n<650k #source_datasets-mscoco #language-English #coco #mscoco #simple-english #arxiv-1405.0312 #region-us
| Dataset Card for MS COCO Karpathy in Simple English
===================================================
This dataset contains captions that were rephrased into simple English so that a young child could understand them.
Dataset Details
---------------
### Dataset Description
* Curated by: {{ curators | default("", true)}}
* Language(s) (NLP): {{ language | default("", true)}}
* License: {{ license | default("", true)}}
### Dataset Sources
The processed MS COCO datasets (Karpathy Split) in this repo are based on the following sources:
Type: Train, MD5: aa31ac474cf6250ebb81d18348a07ed8, URL: URL
Type: Validation, MD5: b273847456ef5580e33713b1f7de52a0, URL: URL
Type: Test, MD5: 3ff34b0ef2db02d01c37399f6a2a6cd1, URL: URL
MS COCO:
* Download: URL
* Paper: URL
Dataset Creation
----------------
This dataset was generated by processing the annotations via Mistral-7B.
Prompt used:
A filter was applied to only store captions which matched the common output format. A best effort filter was applied to reduce the chance of including multiple example sentences in the output.
### Curation Rationale
This dataset is useful for experiments with small LLMs that only have a reduced corpus. The dataset is suitable for LAVIS experiments (QFormer training) with a finetuned TinyStories 33M LLM.
| [
"### Dataset Description\n\n\n* Curated by: {{ curators | default(\"\", true)}}\n* Language(s) (NLP): {{ language | default(\"\", true)}}\n* License: {{ license | default(\"\", true)}}",
"### Dataset Sources\n\n\nThe processed MS COCO datasets (Karpathy Split) in this repo are based on the following sources:\n\n\nType: Train, MD5: aa31ac474cf6250ebb81d18348a07ed8, URL: URL\nType: Validation, MD5: b273847456ef5580e33713b1f7de52a0, URL: URL\nType: Test, MD5: 3ff34b0ef2db02d01c37399f6a2a6cd1, URL: URL\n\n\nMS COCO:\n\n\n* Download: URL\n* Paper: URL\n\n\nDataset Creation\n----------------\n\n\nThis dataset was generated by processing the annotations via Mistal7B.\n\n\nPrompt used:\n\n\nA filter was applied to only store captions which matched the common output format. A best effort filter was applied to reduce the chance of including multiple example sentences in the output.",
"### Curation Rationale\n\n\nThis dataset is useful for experiments with small LLMs which have only a reduced corpus. The dataset is suitable to be used for LAVIS experiments (QFormer Training) with a finetuned TinyStories 33M LLM."
] | [
"TAGS\n#task_categories-text-generation #task_categories-image-to-text #task_categories-text-to-image #annotations_creators-machine-generated #size_categories-n<650k #source_datasets-mscoco #language-English #coco #mscoco #simple-english #arxiv-1405.0312 #region-us \n",
"### Dataset Description\n\n\n* Curated by: {{ curators | default(\"\", true)}}\n* Language(s) (NLP): {{ language | default(\"\", true)}}\n* License: {{ license | default(\"\", true)}}",
"### Dataset Sources\n\n\nThe processed MS COCO datasets (Karpathy Split) in this repo are based on the following sources:\n\n\nType: Train, MD5: aa31ac474cf6250ebb81d18348a07ed8, URL: URL\nType: Validation, MD5: b273847456ef5580e33713b1f7de52a0, URL: URL\nType: Test, MD5: 3ff34b0ef2db02d01c37399f6a2a6cd1, URL: URL\n\n\nMS COCO:\n\n\n* Download: URL\n* Paper: URL\n\n\nDataset Creation\n----------------\n\n\nThis dataset was generated by processing the annotations via Mistal7B.\n\n\nPrompt used:\n\n\nA filter was applied to only store captions which matched the common output format. A best effort filter was applied to reduce the chance of including multiple example sentences in the output.",
"### Curation Rationale\n\n\nThis dataset is useful for experiments with small LLMs which have only a reduced corpus. The dataset is suitable to be used for LAVIS experiments (QFormer Training) with a finetuned TinyStories 33M LLM."
] |
28b456f6253f05b854e730fcecde0192ca54dbb9 | 55,184 French instructions generated by OpenAI GPT-3.5
in Alpaca format to fine-tune general models
Created by Jonathan Pacifico
---
license: apache-2.0
---
| jpacifico/French-Alpaca-dataset-Instruct-55K | [
"region:us"
] | 2024-01-12T17:47:06+00:00 | {} | 2024-01-12T17:50:35+00:00 | [] | [] | TAGS
#region-us
55,184 French instructions generated by OpenAI GPT-3.5
in Alpaca format to fine-tune general models
Created by Jonathan Pacifico
---
license: apache-2.0
---
| [] | [
"TAGS\n#region-us \n"
] |
b3994ddb9373489fe7cf39619076ea7c4c442691 |
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | Ruhaan04/QnA | [
"region:us"
] | 2024-01-12T17:52:21+00:00 | {} | 2024-01-12T17:57:51+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
568bbd1e006ca84c6448969cf2ebc158ef36d4cc | # Dataset Card for "trivia_qa"
A processed version of [this](https://huggingface.co/datasets/trivia_qa) Trivia QA dataset. [Here](https://arxiv.org/abs/1705.03551) is the original paper.
* Our `test` set consists of the validation set of the unfiltered subset.
* Our `train` set consists of the training, validation, and test of the rc subset.
The `evidence` column includes a document that supports the answer to the question.
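A minimal loading sketch (the repo id is this dataset; the column names follow the dataset metadata):
```python
from datasets import load_dataset

ds = load_dataset("eitanturok/trivia_qa")

example = ds["train"][0]
print(example["answer"], example["aliases"][:3])
print(example["evidence"][:200])  # excerpt of the supporting document
```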
| eitanturok/trivia_qa | [
"arxiv:1705.03551",
"region:us"
] | 2024-01-12T18:29:06+00:00 | {"dataset_info": {"features": [{"name": "answer", "dtype": "string"}, {"name": "aliases", "sequence": "string"}, {"name": "context", "dtype": "string"}, {"name": "evidence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4500548289, "num_examples": 156328}, {"name": "test", "num_bytes": 327028593, "num_examples": 11313}], "download_size": 2598671948, "dataset_size": 4827576882}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]} | 2024-01-12T18:58:48+00:00 | [
"1705.03551"
] | [] | TAGS
#arxiv-1705.03551 #region-us
| # Dataset Card for "trivia_qa"
A processed version of this Trivia QA dataset. Here is the original paper.
* Our 'test' set consists of the validation set of the unfiltered subset.
* Our 'train' set consists of the training, validation, and test of the rc subset.
The 'evidence' column includes a document that supports the answer to the question.
| [
"# Dataset Card for \"trivia_qa\"\n\nA processed version of this Trivia QA dataset. Here is the original paper.\n\n* Our 'test' set consists of the validation set of the unfiltered subset.\n* Our 'train' set consists of the training, validation, and test of the rc subset.\n\nThe 'evidence' column includes a document that support the answer to the question."
] | [
"TAGS\n#arxiv-1705.03551 #region-us \n",
"# Dataset Card for \"trivia_qa\"\n\nA processed version of this Trivia QA dataset. Here is the original paper.\n\n* Our 'test' set consists of the validation set of the unfiltered subset.\n* Our 'train' set consists of the training, validation, and test of the rc subset.\n\nThe 'evidence' column includes a document that support the answer to the question."
] |
ccd8c631f11ace854a8864d495a3c78b3d73777e | The capybara dataset filtered to only conversations that include code (detected via the presence of markdown codeblocks) | cassanof/Capybara-code | [
"region:us"
] | 2024-01-12T19:34:08+00:00 | {"dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "conversation", "list": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 7680536, "num_examples": 1314}], "download_size": 3402523, "dataset_size": 7680536}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-01-12T19:39:48+00:00 | [] | [] | TAGS
#region-us
| The capybara dataset filtered to only conversations that include code (detected via the presence of markdown codeblocks) | [] | [
"TAGS\n#region-us \n"
] |
e02211ae2c635a3d87611569bb10446946d2746b | ok
Merge of:
1. Intel/orca_dpo_pairs
2. jondurbin/py-dpo-v0.1
3. unalignment/toxic-dpo-v0.2 | intone/mega-dpo | [
"task_categories:conversational",
"task_categories:text-generation",
"language:en",
"region:us"
] | 2024-01-12T19:40:37+00:00 | {"language": ["en"], "task_categories": ["conversational", "text-generation"]} | 2024-01-12T19:42:57+00:00 | [] | [
"en"
] | TAGS
#task_categories-conversational #task_categories-text-generation #language-English #region-us
| ok
Merge of:
1. Intel/orca_dpo_pairs
2. jondurbin/py-dpo-v0.1
3. unalignment/toxic-dpo-v0.2 | [] | [
"TAGS\n#task_categories-conversational #task_categories-text-generation #language-English #region-us \n"
] |
fca84f6aed2a4cbde50a1566acb7f75b1271c0cd | I made [WinterGoliath](https://huggingface.co/ChuckMcSneed/WinterGoliath-123b) and it felt a bit off compared to regular [Goliath](https://huggingface.co/alpindale/goliath-120b). After that, I was wondering whether a left-wing bias was really present in the models, so I made an [automatic benchmark](https://github.com/ChuckMcSneed/politiscales_for_llama) using the politiscales test.
# Interpreting the data
- b0/b1
: Internationalism/Nationalism
- c0/c1
: Constructivism/Essentialism
- e0/e1
: Ecology/Production
- j0/j1
: Rehabilitative Justice/Punitive Justice
- m0/m1
: Regulation/Laissez-faire
- p0/p1
: Communism/Capitalism
- s0/s1
: Progressive/Conservative
- t0/t1
: Revolution/Reform
- reli
: Religiousness
- comp
: Belief in a worldwide conspiracy
- prag
: Pragmatism
- mona
: Monarchism
- vega
: VEGAN
- anar
: Anarchism
- femi:
: Radical feminism
### Suggestions for calculations
- Whackiness of the model=anar+comp+mona+reli
- Certainty=(x0+x1), calculate average of all values
- Bias towards value(logratio)=LOG(x0/x1;2)
- Left-right bias=b_logratio+c_logratio+e_logratio+j_logratio+m_logratio+p_logratio+s_logratio-t_logratio
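A minimal sketch of the calculations suggested above (the formulas come from this list; the score dictionary and its values are hypothetical):
```python
import math

def logratio(x0: float, x1: float) -> float:
    # Bias towards a value: log2(x0 / x1)
    return math.log2(x0 / x1)

def left_right_bias(s: dict) -> float:
    # Sum of per-axis log-ratios; the t (Revolution/Reform) axis enters with a minus sign.
    total = sum(logratio(s[a + "0"], s[a + "1"]) for a in "bcejmps")
    return total - logratio(s["t0"], s["t1"])

# Hypothetical scores for one model (agreement fraction per politiscales pole).
scores = {"b0": 0.40, "b1": 0.20, "c0": 0.35, "c1": 0.25, "e0": 0.30, "e1": 0.30,
          "j0": 0.45, "j1": 0.15, "m0": 0.50, "m1": 0.20, "p0": 0.30, "p1": 0.35,
          "s0": 0.40, "s1": 0.20, "t0": 0.10, "t1": 0.40}
print(round(left_right_bias(scores), 2))  # larger values lean toward the x0 poles
```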
# Results

- Mixtral-instruct0.1 hates violence(-3.1 logratio_t!!!), religion(0.4%), pragmatic politics(1.4%), monarchy(0.6%), anarchy(5.7%) and conspiracy theories(0.2%); likes regulation(1.5 logratio_t), globalism(1.4) and ecology(0.64). The MOST vegan(72%!!!) model tested so far. EXTREME left-wing bias.
- LLAMA2-70b is fairly neutral, with slight bias to the left.
- Dicephal-123(self-merge of llama2-70B) is also fairly neutral, with slight bias to the right. Sadly, it is the most right-wing model that I tested.
- Xwin, Goliath, Nous-Hermes have low left-wing bias.
- DoubleGold and WinterGoliath have medium left-wing bias.
- WinterGoddess and Euryale have high left-wing bias.
My suspicions were correct: WinterGoliath has a stronger left-wing bias than Goliath and is less whacky, so I cannot consider it an upgrade, just a sidegrade.
# Limitations
The tests were done without any special prompting. How models perform on the test does not reflect how they perform with the right prompting; it just shows the bias of the model.
This test does NOT measure censorship. | ChuckMcSneed/politiscales_for_llama_results | [
"license:wtfpl",
"region:us"
] | 2024-01-12T19:41:36+00:00 | {"license": "wtfpl"} | 2024-01-14T03:58:16+00:00 | [] | [] | TAGS
#license-wtfpl #region-us
| I made WinterGoliath and it felt a bit off compared to regular Goliath. I was wondering if left bias was really present in the models after that so I made an automatic benchmark using politiscales test.
# Interpreting the data
- b0/b1
: Internationalism/Nationalism
- c0/c1
: Constructivism/Essentialism
- e0/e1
: Ecology/Production
- j0/j1
: Rehabilitative Justice/Punitive Justice
- m0/m1
: Regulation/Laissez-faire
- p0/p1
: Communism/Capitalism
- s0/s1
: Progressive/Conservative
- t0/t1
: Revolution/Reform
- reli
: Religiousness
- comp
: Belief in a worldwide conspiracy
- prag
: Pragmatism
- mona
: Monarchism
- vega
: VEGAN
- anar
: Anarchism
- femi:
: Radical feminism
### Suggestions for calculations
- Whackiness of the model=anar+comp+mona+reli
- Certainty=(x0+x1), calculate average of all values
- Bias towards value(logratio)=LOG(x0/x1;2)
- Left-right bias=b_logratio+c_logratio+e_logratio+j_logratio+m_logratio+p_logratio+s_logratio-t_logratio
# Results
!URL
- Mixtral-instruct0.1 hates violence(-3.1 logratio_t!!!), religion(0.4%), pragmatic politics(1.4%), monarchy(0.6%), anarchy(5.7%) and conspiracy theories(0.2%); likes regulation(1.5 logratio_t), globalism(1.4) and ecology(0.64). The MOST vegan(72%!!!) model tested so far. EXTREME left-wing bias.
- LLAMA2-70b is fairly neutral, with slight bias to the left.
- Dicephal-123(self-merge of llama2-70B) is also fairly neutral, with slight bias to the right. Sadly, it is the most right-wing model that I tested.
- Xwin, Goliath, Nous-Hermes have low left-wing bias.
- DoubleGold and WinterGoliath have medium left-wing bias.
- WinterGoddess and Euryale have high left-wing bias.
My suspicions were correct, WinterGoliath has stronger left wing bias than Goliath and is less whacky, I cannot consider it an upgrade, just a sidegrade.
# Limitations
The tests were done without any special prompting. How models perform on the test does not reflect how they perform with the right prompting, it just shows the bias of the model.
This test does NOT measure censorship. | [
"# Interpreting the data\n- b0/b1\n: Internationalism/Nationalism\n- c0/c1\n: Constructivism/Essentialism\n- e0/e1\n: Ecology/Production\n- j0/j1\n: Rehabilitative Justice/Punitive Justice\n- m0/m1\n: Regulation/Laissez-faire\n- p0/p1\n: Communism/Capitalism\n- s0/s1\n: Progressive/Conservative\n- t0/t1\n: Revolution/Reform\n- reli\n: Religiousness\n- comp\n: Belief in a worldwide conspiracy\n- prag\n: Pragmatism\n- mona\n: Monarchism\n- vega\n: VEGAN\n- anar\n: Anarchism\n- femi:\n: Radical feminism",
"### Suggestions for calculations\n- Whackiness of the model=anar+comp+mona+reli\n- Certainty=(x0+x1), calculate average of all values\n- Bias towards value(logratio)=LOG(x0/x1;2)\n- Left-right bias=b_logratio+c_logratio+e_logratio+j_logratio+m_logratio+p_logratio+s_logratio-t_logratio",
"# Results\n!URL\n- Mixtral-instruct0.1 hates violence(-3.1 logratio_t!!!), religion(0.4%), pragmatic politics(1.4%), monarchy(0.6%), anarchy(5.7%) and conspiracy theories(0.2%); likes regulation(1.5 logratio_t), globalism(1.4) and ecology(0.64). The MOST vegan(72%!!!) model tested so far. EXTREME left-wing bias.\n- LLAMA2-70b is fairly neutral, with slight bias to the left.\n- Dicephal-123(self-merge of llama2-70B) is also fairly neutral, with slight bias to the right. Sadly, it is the most right-wing model that I tested.\n- Xwin, Goliath, Nous-Hermes have low left-wing bias.\n- DoubleGold and WinterGoliath have medium left-wing bias.\n- WinterGoddess and Euryale have high left-wing bias.\n\n\nMy suspicions were correct, WinterGoliath has stronger left wing bias than Goliath and is less whacky, I cannot consider it an upgrade, just a sidegrade.",
"# Limitations\nThe tests were done without any special prompting. How models perform on the test does not reflect how they perform with the right prompting, it just shows the bias of the model.\n\nThis test does NOT measure censorship."
] | [
"TAGS\n#license-wtfpl #region-us \n",
"# Interpreting the data\n- b0/b1\n: Internationalism/Nationalism\n- c0/c1\n: Constructivism/Essentialism\n- e0/e1\n: Ecology/Production\n- j0/j1\n: Rehabilitative Justice/Punitive Justice\n- m0/m1\n: Regulation/Laissez-faire\n- p0/p1\n: Communism/Capitalism\n- s0/s1\n: Progressive/Conservative\n- t0/t1\n: Revolution/Reform\n- reli\n: Religiousness\n- comp\n: Belief in a worldwide conspiracy\n- prag\n: Pragmatism\n- mona\n: Monarchism\n- vega\n: VEGAN\n- anar\n: Anarchism\n- femi:\n: Radical feminism",
"### Suggestions for calculations\n- Whackiness of the model=anar+comp+mona+reli\n- Certainty=(x0+x1), calculate average of all values\n- Bias towards value(logratio)=LOG(x0/x1;2)\n- Left-right bias=b_logratio+c_logratio+e_logratio+j_logratio+m_logratio+p_logratio+s_logratio-t_logratio",
"# Results\n!URL\n- Mixtral-instruct0.1 hates violence(-3.1 logratio_t!!!), religion(0.4%), pragmatic politics(1.4%), monarchy(0.6%), anarchy(5.7%) and conspiracy theories(0.2%); likes regulation(1.5 logratio_t), globalism(1.4) and ecology(0.64). The MOST vegan(72%!!!) model tested so far. EXTREME left-wing bias.\n- LLAMA2-70b is fairly neutral, with slight bias to the left.\n- Dicephal-123(self-merge of llama2-70B) is also fairly neutral, with slight bias to the right. Sadly, it is the most right-wing model that I tested.\n- Xwin, Goliath, Nous-Hermes have low left-wing bias.\n- DoubleGold and WinterGoliath have medium left-wing bias.\n- WinterGoddess and Euryale have high left-wing bias.\n\n\nMy suspicions were correct, WinterGoliath has stronger left wing bias than Goliath and is less whacky, I cannot consider it an upgrade, just a sidegrade.",
"# Limitations\nThe tests were done without any special prompting. How models perform on the test does not reflect how they perform with the right prompting, it just shows the bias of the model.\n\nThis test does NOT measure censorship."
] |
84679cb68438dd32edd7cd959aeebd470da68dc3 |
This dataset is, AFAIK (12 January 2024), the biggest ready-to-use open-source dataset for fine-tuning LLMs. It contains more than 3.8 million chat samples.
It's a collection of multiple different datasets. Some of them have been built using GPT-4 or scraped data. Here is the list:
- `gathnex/Gath_baize`
- `teknium/openhermes`
- `nomic-ai/gpt4all-j-prompt-generations`
- `teknium/dataforge-economics`
- `Anthropic/hh-rlhf`: we kept only the selected prompts
- `teknium1_GPTeacher_codegen`
- `teknium1_GPTeacher_instruct` (0.9 version)
- `teknium1_GPTeacher_roleplay`
- `teknium1_GPTeacher_roleplay_supplemental`
- `multi_news`: we asked the model to summarize the news
- `guigux/astro_qa_fr_0.1`: Dataset on astrophysics (french)
- `HuggingFaceH4/no_robots`
- `nvidia/HelpSteer`: We filtered to keep only samples with correctness >= 3 and helpfulness >= 2 and coherence >= 2
- `garage-bAInd/Open-Platypus`
- `nampdn-ai/tiny-codes`
- `abacusai/MetaMathFewshot`: Math exercises
- `guigux/medmcqa_mixtral_openchat_0.1`: Medical questions
For each dataset, we kept the source column in case you want to discard some of them.
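A minimal sketch of discarding sources you do not want (the repo id and the `source` column come from this card's metadata; the label used below is an assumption):
```python
from datasets import load_dataset

ds = load_dataset("guigux/hulk_dataset_0.1", split="train")

# Hypothetical example: drop one of the merged sources by its label.
unwanted = {"Anthropic/hh-rlhf"}
filtered = ds.filter(lambda row: row["source"] not in unwanted)
print(len(ds), "->", len(filtered))
```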
Enjoy! Hope this will help to build strong models!
## Possible improvement
Currently, the dataset contains mainly English samples. It would be nice if we could add or translate some samples into other languages! | guigux/hulk_dataset_0.1 | [
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:1M<n<10M",
"region:us"
] | 2024-01-12T19:42:27+00:00 | {"size_categories": ["1M<n<10M"], "task_categories": ["conversational", "text-generation"], "pretty_name": "Hulk data 0.1", "dataset_info": {"features": [{"name": "chat_sample", "dtype": "string"}, {"name": "source", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7997901993, "num_examples": 3817036}], "download_size": 3855896748, "dataset_size": 7997901993}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-01-12T20:04:46+00:00 | [] | [] | TAGS
#task_categories-conversational #task_categories-text-generation #size_categories-1M<n<10M #region-us
|
This dataset is, AFAIK (12 January 2024), the biggest ready-to-use open-source dataset for fine-tuning LLMs. It contains more than 3.8 million chat samples.
It's a collection of multiple different datasets. Some of them have been built using GPT-4 or scraped data. Here is the list:
- 'gathnex/Gath_baize'
- 'teknium/openhermes'
- 'nomic-ai/gpt4all-j-prompt-generations'
- 'teknium/dataforge-economics'
- 'Anthropic/hh-rlhf': we kept only the selected prompts
- 'teknium1_GPTeacher_codegen'
- 'teknium1_GPTeacher_instruct' (0.9 version)
- 'teknium1_GPTeacher_roleplay'
- 'teknium1_GPTeacher_roleplay_supplemental'
- 'multi_news': we asked the model to summarize the news
- 'guigux/astro_qa_fr_0.1': Dataset on astrophysics (french)
- 'HuggingFaceH4/no_robots'
- 'nvidia/HelpSteer': We filtered to keep only samples with correctness >= 3 and helpfulness >= 2 and coherence >= 2
- 'garage-bAInd/Open-Platypus'
- 'nampdn-ai/tiny-codes'
- 'abacusai/MetaMathFewshot': Math exercises
- 'guigux/medmcqa_mixtral_openchat_0.1': Medical questions
For each dataset, we kept the source column in case you want to discard some of them.
Enjoy! Hope this will help to build strong models!
## Possible improvement
Currently, the dataset contains mainly English samples. It would be nice if we could add or translate some samples into other languages! | [
"## Possible improvement\n\nCurrently, the dataset contains mainly english samples. It could be nice if we can add / translate some samples into other languages !"
] | [
"TAGS\n#task_categories-conversational #task_categories-text-generation #size_categories-1M<n<10M #region-us \n",
"## Possible improvement\n\nCurrently, the dataset contains mainly english samples. It could be nice if we can add / translate some samples into other languages !"
] |
e41f38bc63f542c22fb54fb6d4c02f73560f1bd9 | # A new face forgery detection dataset based on FaceForensics++.
4 modern face manipulation methods:
- inswapper
- simswap
- reliable faceswap
- MCNet (face reenactment)
- enhanced testset with diffusion
| yoctta/advanced_faceforensics | [
"region:us"
] | 2024-01-12T19:59:29+00:00 | {} | 2024-01-14T17:17:16+00:00 | [] | [] | TAGS
#region-us
| # A new face forgery detection dataset based on FaceForensics++.
4 modern face manipulation methods:
- inswapper
- simswap
- reliable faceswap
- MCNet (face reenactment)
- enhanced testset with diffusion
| [
"# A new face foregery detection dataset based on FaceForensics++.\n\n4 modern face manipulation method:\n\n- inswapper\n- simswap\n- realiable faceswap\n- MCNet (face reenactment)\n\n- enhanced testset with diffusion"
] | [
"TAGS\n#region-us \n",
"# A new face foregery detection dataset based on FaceForensics++.\n\n4 modern face manipulation method:\n\n- inswapper\n- simswap\n- realiable faceswap\n- MCNet (face reenactment)\n\n- enhanced testset with diffusion"
] |
20307eb3e139a98e7b88e6196c9ebe2f642ddf46 |
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | htahir1/zenml-codegen-v1 | [
"region:us"
] | 2024-01-12T20:28:16+00:00 | {"dataset_info": {"features": [{"name": "repo_id", "dtype": "string"}, {"name": "file_path", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 11185575, "num_examples": 1820}], "download_size": 3377832, "dataset_size": 11185575}} | 2024-01-12T22:26:29+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |