Dataset schema (column: type, value-length / element-count range):

- sha: string (40-40)
- text: string (1-13.4M)
- id: string (2-117)
- tags: sequence (1-7.91k)
- created_at: string (25-25)
- metadata: string (2-875k)
- last_modified: string (25-25)
- arxiv: sequence (0-25)
- languages: sequence (0-7.91k)
8c35b13454d43f2319e368f1fe7c97a878af4c46
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Token Classification
* Model: Luciano/bertimbau-base-lener-br-finetuned-lener-br
* Dataset: lener_br
* Config: lener_br
* Split: train

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@Luciano](https://huggingface.co/Luciano) for evaluating this model.
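A minimal sketch of pulling this predictions repository down for inspection, assuming it loads like any other Hub dataset; the repo id is taken from this record, but the split and column layout is not documented here and is an assumption:

```python
from datasets import load_dataset

# Repo id taken from this record; check the printed description for the
# actual splits and features, which are not stated in the card.
preds = load_dataset(
    "autoevaluate/autoeval-staging-eval-lener_br-lener_br-f0f34b-15626154"
)
print(preds)
```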
autoevaluate/autoeval-staging-eval-lener_br-lener_br-f0f34b-15626154
[ "autotrain", "evaluation", "region:us" ]
2022-09-05T04:06:25+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["lener_br"], "eval_info": {"task": "entity_extraction", "model": "Luciano/bertimbau-base-lener-br-finetuned-lener-br", "metrics": [], "dataset_name": "lener_br", "dataset_config": "lener_br", "dataset_split": "train", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-09-05T04:09:08+00:00
[]
[]
4022c7affe48f8cf58cc541414c0a35a5eadd6d8
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
* Dataset: samsum
* Config: samsum
* Split: validation

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model.
autoevaluate/autoeval-staging-eval-samsum-samsum-e82d51-15636155
[ "autotrain", "evaluation", "region:us" ]
2022-09-05T05:33:35+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum", "metrics": ["mse", "mae"], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "validation", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-09-05T05:37:40+00:00
[]
[]
e0b1e4d497fe81cad3e4695ae1c6c5ca7d64656d
# AutoTrain Dataset for project: satellite-image-classification

## Dataset Description

This dataset has been automatically processed by AutoTrain for project satellite-image-classification.

### Languages

The BCP-47 code for the dataset's language is unk.

## Dataset Structure

### Data Instances

A sample from this dataset looks as follows:

```json
[
  {
    "image": "<256x256 CMYK PIL image>",
    "target": 0
  },
  {
    "image": "<256x256 CMYK PIL image>",
    "target": 0
  }
]
```

### Dataset Fields

The dataset has the following fields (also called "features"):

```json
{
  "image": "Image(decode=True, id=None)",
  "target": "ClassLabel(num_classes=1, names=['cloudy'], id=None)"
}
```

### Dataset Splits

This dataset is split into a train and validation split. The split sizes are as follows:

| Split name | Num samples |
| ---------- | ----------- |
| train      | 1200        |
| valid      | 300         |
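A minimal loading sketch, assuming the standard `datasets` API and the split names from the table above:

```python
from datasets import load_dataset

# Repo id taken from this record.
ds = load_dataset("victor/autotrain-data-satellite-image-classification")

sample = ds["train"][0]
print(sample["image"])   # a <256x256 CMYK PIL image>
print(sample["target"])  # integer ClassLabel, e.g. 0 == 'cloudy'
```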
victor/autotrain-data-satellite-image-classification
[ "task_categories:image-classification", "region:us" ]
2022-09-05T07:58:49+00:00
{"task_categories": ["image-classification"]}
2022-09-05T08:30:13+00:00
[]
[]
93c9ef572004a518c936aa13d9afbfd05b710aea
NOTE: All this data, plus a lot more, is now accessible at https://console.cloud.google.com/marketplace/product/bigquery-public-data/eumetsat-seviri-rss-hrv-uk?project=tactile-acrobat-249716. That dataset is the preferred way to access this data, as it goes back to the beginning of the RSS archive (2008-2023) and is updated roughly weekly.

This dataset consists of the EUMETSAT Rapid Scan Service (RSS) imagery from 2014 to February 2023. The data comes in two formats: the High Resolution Visible (HRV) channel, which covers Europe and North Africa at a resolution of roughly 2-3 km per pixel and is shifted each day to better image where the sun is shining, and the non-HRV data, which comprises 11 spectral channels at a 6-9 km resolution covering the top third of the Earth, centered on Europe.

These images are taken 5 minutes apart and have been compressed and stacked into Zarr stores. Using Xarray, these files can be opened all together to create one large Zarr store of HRV or non-HRV imagery.
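A minimal Xarray sketch of the access pattern the card describes; the store paths below are hypothetical, since the card does not list the actual file names:

```python
import xarray as xr

# Hypothetical store names -- replace with the actual Zarr paths in this
# repository.
stores = ["hrv_2019.zarr", "hrv_2020.zarr"]

# Open each Zarr store lazily, then concatenate along time to form one
# large dataset, assuming the stores share a "time" coordinate.
parts = [xr.open_dataset(p, engine="zarr", chunks={}) for p in stores]
combined = xr.concat(parts, dim="time")
print(combined)
```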
openclimatefix/eumetsat-rss
[ "size_categories:1K<n<10K", "license:other", "climate", "doi:10.57967/hf/1488", "region:us" ]
2022-09-05T08:25:53+00:00
{"license": "other", "size_categories": ["1K<n<10K"], "tags": ["climate"]}
2024-02-17T17:37:41+00:00
[]
[]
0d5751865d26618e2141fe0aecf06477d93d0955
# ADE-Corpus-V2 Dataset: Adverse Drug Reaction Data

This is a dataset for classifying whether a sentence is ADE-related (True) or not (False).

**Train size: 17,637**

**Test size: 5,879**

[Source dataset](https://huggingface.co/datasets/ade_corpus_v2)

[Paper](https://www.sciencedirect.com/science/article/pii/S1532046412000615)
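A minimal loading sketch, assuming `train`/`test` split names matching the sizes above:

```python
from datasets import load_dataset

# Repo id taken from this record.
ds = load_dataset("SetFit/ade_corpus_v2_classification")
print(len(ds["train"]), len(ds["test"]))  # expected: 17637 5879
```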
SetFit/ade_corpus_v2_classification
[ "region:us" ]
2022-09-05T10:20:19+00:00
{}
2022-09-05T13:14:53+00:00
[]
[]
6df1024387c78af81538a7223c70a8101c61d6aa
# Dataset Card for Europarl v7 (en-it split)

This dataset contains only the English-Italian split of Europarl v7. We created the dataset to provide it to the [M2L 2022 Summer School](https://www.m2lschool.org/) students.

For all the information on the dataset, please refer to: [https://www.statmt.org/europarl/](https://www.statmt.org/europarl/)

## Dataset Structure

### Data Fields

- sent_en: English transcript
- sent_it: Italian translation

### Data Splits

We created three custom training/validation/testing splits. Feel free to rearrange them if needed. These ARE NOT by any means official splits.

- train (1,717,204 pairs)
- validation (190,911 pairs)
- test (1,000 pairs)

### Citation Information

If using the dataset, please cite:

`Koehn, P. (2005). Europarl: A parallel corpus for statistical machine translation. In Proceedings of Machine Translation Summit X: Papers (pp. 79-86).`

### Contributions

Thanks to [@g8a9](https://github.com/g8a9) for adding this dataset.
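A minimal loading sketch using the field and split names documented in the card:

```python
from datasets import load_dataset

# Repo id taken from this record; splits are train/validation/test.
ds = load_dataset("g8a9/europarl_en-it")

pair = ds["train"][0]
print(pair["sent_en"])  # English transcript
print(pair["sent_it"])  # Italian translation
```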
g8a9/europarl_en-it
[ "task_categories:translation", "multilinguality:monolingual", "multilinguality:translation", "language:en", "language:it", "license:unknown", "region:us" ]
2022-09-05T12:53:46+00:00
{"language": ["en", "it"], "license": ["unknown"], "multilinguality": ["monolingual", "translation"], "task_categories": ["translation"], "task_ids": [], "pretty_name": "Europarl v7 (en-it split)", "tags": []}
2022-09-07T09:14:04+00:00
[]
[ "en", "it" ]
ffb979b8a8247b442ec3adcf5fb83d3fff562f55
# Battery Device QA Data

Battery device records, including anode, cathode, and electrolyte.

Examples from the question answering evaluation dataset:

\{'question': 'What is the cathode?', 'answer': 'Al foil', 'context': 'The blended slurry was then cast onto a clean current collector (Al foil for the cathode and Cu foil for the anode) and dried at 90 °C under vacuum overnight.', 'start index': 645\}

\{'question': 'What is the anode?', 'answer': 'Cu foil', 'context': 'The blended slurry was then cast onto a clean current collector (Al foil for the cathode and Cu foil for the anode) and dried at 90 °C under vacuum overnight. Finally, the obtained electrodes were cut into desired shapes on demand. It should be noted that the electrode mass ratio of cathode/anode is set to about 4, thus achieving the battery balance.', 'start index': 673\}

\{'question': 'What is the cathode?', 'answer': 'SiC/RGO nanocomposite', 'context': 'In conclusion, the SiC/RGO nanocomposite, integrating the synergistic effect of SiC flakes and RGO, was synthesized by an in situ gas–solid fabrication method. Taking advantage of the enhanced photogenerated charge separation, large CO2 adsorption, and numerous exposed active sites, SiC/RGO nanocomposite served as the cathode material for the photo-assisted Li–CO2 battery.', 'start index': 284\}

# Usage

```python
from datasets import load_dataset

dataset = load_dataset("batterydata/battery-device-data-qa")
```

Note: in the original BatteryBERT paper, 272 data records were used for evaluation after removing redundant records as well as paragraphs with character length >= 1500. Code is shown below:

```python
import json

# Load the SQuAD-style annotation file.
with open("answers.json", "r", encoding="utf-8") as f:
    data = json.load(f)

evaluation = []
for point in data["data"]:
    paragraphs = point["paragraphs"][0]["context"]
    # Keep only paragraphs shorter than 1500 characters.
    if len(paragraphs) < 1500:
        qas = point["paragraphs"][0]["qas"]
        for indiv in qas:
            try:
                question = indiv["question"]
                answer = indiv["answers"][0]["text"]
                evaluation.append((paragraphs, question, answer))
            except (KeyError, IndexError):
                # Skip entries with missing questions or answers.
                continue
```

# Citation

```
@article{huang2022batterybert,
  title={BatteryBERT: A Pretrained Language Model for Battery Database Enhancement},
  author={Huang, Shu and Cole, Jacqueline M},
  journal={J. Chem. Inf. Model.},
  year={2022},
  doi={10.1021/acs.jcim.2c00035},
  publisher={ACS Publications}
}
```
batterydata/battery-device-data-qa
[ "task_categories:question-answering", "language:en", "license:apache-2.0", "region:us" ]
2022-09-05T14:30:32+00:00
{"language": ["en"], "license": ["apache-2.0"], "task_categories": ["question-answering"], "pretty_name": "Battery Device Question Answering Dataset"}
2023-11-06T12:50:19+00:00
[]
[ "en" ]
cbed321f16868443449817bad5f6ef18b64030e7
# diffusers metrics

This dataset contains metrics about the huggingface/diffusers package.

Number of repositories in the dataset: 160

Number of packages in the dataset: 2

## Package dependents

This contains the data available in the [used-by](https://github.com/huggingface/diffusers/network/dependents) tab on GitHub.

### Package & Repository star count

This section shows the package and repository star count, individually.

Package | Repository
:-------------------------:|:-------------------------:
![diffusers-dependent package star count](./diffusers-dependents/resolve/main/diffusers-dependent_package_star_count.png) | ![diffusers-dependent repository star count](./diffusers-dependents/resolve/main/diffusers-dependent_repository_star_count.png)

There are 0 packages that have more than 1000 stars.

There are 3 repositories that have more than 1000 stars.

The top 10 in each category are the following:

*Package*

[JoaoLages/diffusers-interpret](https://github.com/JoaoLages/diffusers-interpret): 121
[samedii/perceptor](https://github.com/samedii/perceptor): 1

*Repository*

[gradio-app/gradio](https://github.com/gradio-app/gradio): 9168
[divamgupta/diffusionbee-stable-diffusion-ui](https://github.com/divamgupta/diffusionbee-stable-diffusion-ui): 4264
[AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui): 3527
[bes-dev/stable_diffusion.openvino](https://github.com/bes-dev/stable_diffusion.openvino): 925
[nateraw/stable-diffusion-videos](https://github.com/nateraw/stable-diffusion-videos): 899
[sharonzhou/long_stable_diffusion](https://github.com/sharonzhou/long_stable_diffusion): 360
[Eventual-Inc/Daft](https://github.com/Eventual-Inc/Daft): 251
[JoaoLages/diffusers-interpret](https://github.com/JoaoLages/diffusers-interpret): 121
[GT4SD/gt4sd-core](https://github.com/GT4SD/gt4sd-core): 113
[brycedrennan/imaginAIry](https://github.com/brycedrennan/imaginAIry): 104

### Package & Repository fork count

This section shows the package and repository fork count, individually.

Package | Repository
:-------------------------:|:-------------------------:
![diffusers-dependent package forks count](./diffusers-dependents/resolve/main/diffusers-dependent_package_forks_count.png) | ![diffusers-dependent repository forks count](./diffusers-dependents/resolve/main/diffusers-dependent_repository_forks_count.png)

There are 0 packages that have more than 200 forks.

There are 2 repositories that have more than 200 forks.

The top 10 in each category are the following:

*Package*

*Repository*

[gradio-app/gradio](https://github.com/gradio-app/gradio): 574
[AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui): 377
[bes-dev/stable_diffusion.openvino](https://github.com/bes-dev/stable_diffusion.openvino): 108
[divamgupta/diffusionbee-stable-diffusion-ui](https://github.com/divamgupta/diffusionbee-stable-diffusion-ui): 96
[nateraw/stable-diffusion-videos](https://github.com/nateraw/stable-diffusion-videos): 73
[GT4SD/gt4sd-core](https://github.com/GT4SD/gt4sd-core): 34
[sharonzhou/long_stable_diffusion](https://github.com/sharonzhou/long_stable_diffusion): 29
[coreweave/kubernetes-cloud](https://github.com/coreweave/kubernetes-cloud): 20
[bananaml/serverless-template-stable-diffusion](https://github.com/bananaml/serverless-template-stable-diffusion): 15
[AmericanPresidentJimmyCarter/yasd-discord-bot](https://github.com/AmericanPresidentJimmyCarter/yasd-discord-bot): 9
[NickLucche/stable-diffusion-nvidia-docker](https://github.com/NickLucche/stable-diffusion-nvidia-docker): 9
[vopani/waveton](https://github.com/vopani/waveton): 9
[harubaru/discord-stable-diffusion](https://github.com/harubaru/discord-stable-diffusion): 9
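A minimal sketch of reading these metrics programmatically, based on the `dataset_info` in this record's metadata (splits `package` and `repository`, columns `name`/`stars`/`forks`):

```python
from datasets import load_dataset

# Repo id taken from this record.
metrics = load_dataset("open-source-metrics/diffusers-dependents")

# Recompute the "top 10 repositories by stars" listing from the card.
rows = sorted(metrics["repository"], key=lambda r: r["stars"], reverse=True)
for row in rows[:10]:
    print(row["name"], row["stars"], row["forks"])
```

The same pattern applies to the accelerate, evaluate, and optimum dependents datasets below, which share the same schema.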
open-source-metrics/diffusers-dependents
[ "license:apache-2.0", "github-stars", "region:us" ]
2022-09-05T14:31:32+00:00
{"license": "apache-2.0", "pretty_name": "diffusers metrics", "tags": ["github-stars"], "dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "stars", "dtype": "int64"}, {"name": "forks", "dtype": "int64"}], "splits": [{"name": "package", "num_bytes": 2680, "num_examples": 62}, {"name": "repository", "num_bytes": 92837, "num_examples": 1976}], "download_size": 55374, "dataset_size": 95517}}
2024-02-16T22:46:05+00:00
[]
[]
91df9fbf9146c843ed3ab32c72fa64ba6b34a28f
# accelerate metrics

This dataset contains metrics about the huggingface/accelerate package.

Number of repositories in the dataset: 727

Number of packages in the dataset: 37

## Package dependents

This contains the data available in the [used-by](https://github.com/huggingface/accelerate/network/dependents) tab on GitHub.

### Package & Repository star count

This section shows the package and repository star count, individually.

Package | Repository
:-------------------------:|:-------------------------:
![accelerate-dependent package star count](./accelerate-dependents/resolve/main/accelerate-dependent_package_star_count.png) | ![accelerate-dependent repository star count](./accelerate-dependents/resolve/main/accelerate-dependent_repository_star_count.png)

There are 10 packages that have more than 1000 stars.

There are 16 repositories that have more than 1000 stars.

The top 10 in each category are the following:

*Package*

[huggingface/transformers](https://github.com/huggingface/transformers): 70480
[fastai/fastai](https://github.com/fastai/fastai): 22774
[lucidrains/DALLE2-pytorch](https://github.com/lucidrains/DALLE2-pytorch): 7674
[kornia/kornia](https://github.com/kornia/kornia): 7103
[facebookresearch/pytorch3d](https://github.com/facebookresearch/pytorch3d): 6548
[huggingface/diffusers](https://github.com/huggingface/diffusers): 5457
[lucidrains/imagen-pytorch](https://github.com/lucidrains/imagen-pytorch): 5113
[catalyst-team/catalyst](https://github.com/catalyst-team/catalyst): 2985
[lucidrains/denoising-diffusion-pytorch](https://github.com/lucidrains/denoising-diffusion-pytorch): 1727
[abhishekkrthakur/tez](https://github.com/abhishekkrthakur/tez): 1101

*Repository*

[huggingface/transformers](https://github.com/huggingface/transformers): 70480
[google-research/google-research](https://github.com/google-research/google-research): 25092
[ray-project/ray](https://github.com/ray-project/ray): 22047
[lucidrains/DALLE2-pytorch](https://github.com/lucidrains/DALLE2-pytorch): 7674
[kornia/kornia](https://github.com/kornia/kornia): 7103
[huggingface/diffusers](https://github.com/huggingface/diffusers): 5457
[lucidrains/imagen-pytorch](https://github.com/lucidrains/imagen-pytorch): 5113
[wandb/wandb](https://github.com/wandb/wandb): 4738
[skorch-dev/skorch](https://github.com/skorch-dev/skorch): 4679
[catalyst-team/catalyst](https://github.com/catalyst-team/catalyst): 2985

### Package & Repository fork count

This section shows the package and repository fork count, individually.

Package | Repository
:-------------------------:|:-------------------------:
![accelerate-dependent package forks count](./accelerate-dependents/resolve/main/accelerate-dependent_package_forks_count.png) | ![accelerate-dependent repository forks count](./accelerate-dependents/resolve/main/accelerate-dependent_repository_forks_count.png)

There are 9 packages that have more than 200 forks.

There are 16 repositories that have more than 200 forks.

The top 10 in each category are the following:

*Package*

[huggingface/transformers](https://github.com/huggingface/transformers): 16157
[fastai/fastai](https://github.com/fastai/fastai): 7297
[facebookresearch/pytorch3d](https://github.com/facebookresearch/pytorch3d): 975
[kornia/kornia](https://github.com/kornia/kornia): 723
[lucidrains/DALLE2-pytorch](https://github.com/lucidrains/DALLE2-pytorch): 582
[huggingface/diffusers](https://github.com/huggingface/diffusers): 490
[lucidrains/imagen-pytorch](https://github.com/lucidrains/imagen-pytorch): 412
[catalyst-team/catalyst](https://github.com/catalyst-team/catalyst): 366
[lucidrains/denoising-diffusion-pytorch](https://github.com/lucidrains/denoising-diffusion-pytorch): 235
[abhishekkrthakur/tez](https://github.com/abhishekkrthakur/tez): 136

*Repository*

[huggingface/transformers](https://github.com/huggingface/transformers): 16157
[google-research/google-research](https://github.com/google-research/google-research): 6139
[ray-project/ray](https://github.com/ray-project/ray): 3876
[roatienza/Deep-Learning-Experiments](https://github.com/roatienza/Deep-Learning-Experiments): 729
[kornia/kornia](https://github.com/kornia/kornia): 723
[lucidrains/DALLE2-pytorch](https://github.com/lucidrains/DALLE2-pytorch): 582
[huggingface/diffusers](https://github.com/huggingface/diffusers): 490
[nlp-with-transformers/notebooks](https://github.com/nlp-with-transformers/notebooks): 436
[lucidrains/imagen-pytorch](https://github.com/lucidrains/imagen-pytorch): 412
[catalyst-team/catalyst](https://github.com/catalyst-team/catalyst): 366
open-source-metrics/accelerate-dependents
[ "license:apache-2.0", "github-stars", "region:us" ]
2022-09-05T14:32:37+00:00
{"license": "apache-2.0", "pretty_name": "accelerate metrics", "tags": ["github-stars"], "dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "stars", "dtype": "int64"}, {"name": "forks", "dtype": "int64"}], "splits": [{"name": "package", "num_bytes": 4874, "num_examples": 116}, {"name": "repository", "num_bytes": 162350, "num_examples": 3488}], "download_size": 100048, "dataset_size": 167224}}
2024-02-16T19:02:17+00:00
[]
[]
7fb91ea38e6b089b6488c0648b92a9f80f5f6594
# evaluate metrics

This dataset contains metrics about the huggingface/evaluate package.

Number of repositories in the dataset: 106

Number of packages in the dataset: 3

## Package dependents

This contains the data available in the [used-by](https://github.com/huggingface/evaluate/network/dependents) tab on GitHub.

### Package & Repository star count

This section shows the package and repository star count, individually.

Package | Repository
:-------------------------:|:-------------------------:
![evaluate-dependent package star count](./evaluate-dependents/resolve/main/evaluate-dependent_package_star_count.png) | ![evaluate-dependent repository star count](./evaluate-dependents/resolve/main/evaluate-dependent_repository_star_count.png)

There is 1 package that has more than 1000 stars.

There are 2 repositories that have more than 1000 stars.

The top 10 in each category are the following:

*Package*

[huggingface/accelerate](https://github.com/huggingface/accelerate): 2884
[fcakyon/video-transformers](https://github.com/fcakyon/video-transformers): 4
[entelecheia/ekorpkit](https://github.com/entelecheia/ekorpkit): 2

*Repository*

[huggingface/transformers](https://github.com/huggingface/transformers): 70481
[huggingface/accelerate](https://github.com/huggingface/accelerate): 2884
[huggingface/evaluate](https://github.com/huggingface/evaluate): 878
[pytorch/benchmark](https://github.com/pytorch/benchmark): 406
[imhuay/studies](https://github.com/imhuay/studies): 161
[AIRC-KETI/ke-t5](https://github.com/AIRC-KETI/ke-t5): 128
[Jaseci-Labs/jaseci](https://github.com/Jaseci-Labs/jaseci): 32
[philschmid/optimum-static-quantization](https://github.com/philschmid/optimum-static-quantization): 20
[hms-dbmi/scw](https://github.com/hms-dbmi/scw): 19
[philschmid/optimum-transformers-optimizations](https://github.com/philschmid/optimum-transformers-optimizations): 15
[girafe-ai/msai-python](https://github.com/girafe-ai/msai-python): 15
[lewtun/dl4phys](https://github.com/lewtun/dl4phys): 15

### Package & Repository fork count

This section shows the package and repository fork count, individually.

Package | Repository
:-------------------------:|:-------------------------:
![evaluate-dependent package forks count](./evaluate-dependents/resolve/main/evaluate-dependent_package_forks_count.png) | ![evaluate-dependent repository forks count](./evaluate-dependents/resolve/main/evaluate-dependent_repository_forks_count.png)

There is 1 package that has more than 200 forks.

There are 2 repositories that have more than 200 forks.

The top 10 in each category are the following:

*Package*

[huggingface/accelerate](https://github.com/huggingface/accelerate): 224
[fcakyon/video-transformers](https://github.com/fcakyon/video-transformers): 0
[entelecheia/ekorpkit](https://github.com/entelecheia/ekorpkit): 0

*Repository*

[huggingface/transformers](https://github.com/huggingface/transformers): 16157
[huggingface/accelerate](https://github.com/huggingface/accelerate): 224
[pytorch/benchmark](https://github.com/pytorch/benchmark): 131
[Jaseci-Labs/jaseci](https://github.com/Jaseci-Labs/jaseci): 67
[huggingface/evaluate](https://github.com/huggingface/evaluate): 48
[imhuay/studies](https://github.com/imhuay/studies): 42
[AIRC-KETI/ke-t5](https://github.com/AIRC-KETI/ke-t5): 14
[girafe-ai/msai-python](https://github.com/girafe-ai/msai-python): 14
[hms-dbmi/scw](https://github.com/hms-dbmi/scw): 11
[kili-technology/automl](https://github.com/kili-technology/automl): 5
[whatofit/LevelWordWithFreq](https://github.com/whatofit/LevelWordWithFreq): 5
open-source-metrics/evaluate-dependents
[ "license:apache-2.0", "github-stars", "region:us" ]
2022-09-05T14:33:19+00:00
{"license": "apache-2.0", "pretty_name": "evaluate metrics", "tags": ["github-stars"], "dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "stars", "dtype": "int64"}, {"name": "forks", "dtype": "int64"}], "splits": [{"name": "package", "num_bytes": 1830, "num_examples": 45}, {"name": "repository", "num_bytes": 54734, "num_examples": 1161}], "download_size": 37570, "dataset_size": 56564}}
2024-02-16T18:19:33+00:00
[]
[]
a70617c7ceb76742b60748626733a425d6aad03a
# optimum metrics

This dataset contains metrics about the huggingface/optimum package.

Number of repositories in the dataset: 19

Number of packages in the dataset: 6

## Package dependents

This contains the data available in the [used-by](https://github.com/huggingface/optimum/network/dependents) tab on GitHub.

### Package & Repository star count

This section shows the package and repository star count, individually.

Package | Repository
:-------------------------:|:-------------------------:
![optimum-dependent package star count](./optimum-dependents/resolve/main/optimum-dependent_package_star_count.png) | ![optimum-dependent repository star count](./optimum-dependents/resolve/main/optimum-dependent_repository_star_count.png)

There are 0 packages that have more than 1000 stars.

There are 0 repositories that have more than 1000 stars.

The top 10 in each category are the following:

*Package*

[SeldonIO/MLServer](https://github.com/SeldonIO/MLServer): 288
[AlekseyKorshuk/optimum-transformers](https://github.com/AlekseyKorshuk/optimum-transformers): 114
[huggingface/optimum-intel](https://github.com/huggingface/optimum-intel): 61
[huggingface/optimum-graphcore](https://github.com/huggingface/optimum-graphcore): 34
[huggingface/optimum-habana](https://github.com/huggingface/optimum-habana): 24
[bhavsarpratik/easy-transformers](https://github.com/bhavsarpratik/easy-transformers): 10

*Repository*

[SeldonIO/MLServer](https://github.com/SeldonIO/MLServer): 288
[marqo-ai/marqo](https://github.com/marqo-ai/marqo): 265
[AlekseyKorshuk/optimum-transformers](https://github.com/AlekseyKorshuk/optimum-transformers): 114
[graphcore/tutorials](https://github.com/graphcore/tutorials): 65
[huggingface/optimum-intel](https://github.com/huggingface/optimum-intel): 61
[huggingface/optimum-graphcore](https://github.com/huggingface/optimum-graphcore): 34
[huggingface/optimum-habana](https://github.com/huggingface/optimum-habana): 24
[philschmid/optimum-static-quantization](https://github.com/philschmid/optimum-static-quantization): 20
[philschmid/optimum-transformers-optimizations](https://github.com/philschmid/optimum-transformers-optimizations): 15
[girafe-ai/msai-python](https://github.com/girafe-ai/msai-python): 15

### Package & Repository fork count

This section shows the package and repository fork count, individually.

Package | Repository
:-------------------------:|:-------------------------:
![optimum-dependent package forks count](./optimum-dependents/resolve/main/optimum-dependent_package_forks_count.png) | ![optimum-dependent repository forks count](./optimum-dependents/resolve/main/optimum-dependent_repository_forks_count.png)

There are 0 packages that have more than 200 forks.

There are 0 repositories that have more than 200 forks.

The top 10 in each category are the following:

*Package*

[SeldonIO/MLServer](https://github.com/SeldonIO/MLServer): 82
[huggingface/optimum-graphcore](https://github.com/huggingface/optimum-graphcore): 18
[huggingface/optimum-intel](https://github.com/huggingface/optimum-intel): 10
[AlekseyKorshuk/optimum-transformers](https://github.com/AlekseyKorshuk/optimum-transformers): 6
[huggingface/optimum-habana](https://github.com/huggingface/optimum-habana): 3
[bhavsarpratik/easy-transformers](https://github.com/bhavsarpratik/easy-transformers): 2

*Repository*

[SeldonIO/MLServer](https://github.com/SeldonIO/MLServer): 82
[graphcore/tutorials](https://github.com/graphcore/tutorials): 33
[huggingface/optimum-graphcore](https://github.com/huggingface/optimum-graphcore): 18
[girafe-ai/msai-python](https://github.com/girafe-ai/msai-python): 14
[huggingface/optimum-intel](https://github.com/huggingface/optimum-intel): 10
[marqo-ai/marqo](https://github.com/marqo-ai/marqo): 6
[AlekseyKorshuk/optimum-transformers](https://github.com/AlekseyKorshuk/optimum-transformers): 6
[whatofit/LevelWordWithFreq](https://github.com/whatofit/LevelWordWithFreq): 5
[philschmid/optimum-transformers-optimizations](https://github.com/philschmid/optimum-transformers-optimizations): 3
[huggingface/optimum-habana](https://github.com/huggingface/optimum-habana): 3
open-source-metrics/optimum-dependents
[ "license:apache-2.0", "github-stars", "region:us" ]
2022-09-05T14:33:37+00:00
{"license": "apache-2.0", "pretty_name": "optimum metrics", "tags": ["github-stars"], "dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "stars", "dtype": "int64"}, {"name": "forks", "dtype": "int64"}], "splits": [{"name": "package", "num_bytes": 555, "num_examples": 13}, {"name": "repository", "num_bytes": 3790, "num_examples": 81}], "download_size": 6617, "dataset_size": 4345}}
2024-02-16T20:08:08+00:00
[]
[]
TAGS #license-apache-2.0 #github-stars #region-us
optimum metrics =============== This dataset contains metrics about the huggingface/optimum package. Number of repositories in the dataset: 19 Number of packages in the dataset: 6 Package dependents ------------------ This contains the data available in the used-by tab on GitHub. ### Package & Repository star count This section shows the package and repository star count, individually. There are 0 packages that have more than 1000 stars. There are 0 repositories that have more than 1000 stars. The top 10 in each category are the following: *Package* SeldonIO/MLServer: 288 AlekseyKorshuk/optimum-transformers: 114 huggingface/optimum-intel: 61 huggingface/optimum-graphcore: 34 huggingface/optimum-habana: 24 bhavsarpratik/easy-transformers: 10 *Repository* SeldonIO/MLServer: 288 marqo-ai/marqo: 265 AlekseyKorshuk/optimum-transformers: 114 graphcore/tutorials: 65 huggingface/optimum-intel: 61 huggingface/optimum-graphcore: 34 huggingface/optimum-habana: 24 philschmid/optimum-static-quantization: 20 philschmid/optimum-transformers-optimizations: 15 girafe-ai/msai-python: 15 ### Package & Repository fork count This section shows the package and repository fork count, individually. There are 0 packages that have more than 200 forks. There are 0 repositories that have more than 200 forks. The top 10 in each category are the following: *Package* SeldonIO/MLServer: 82 huggingface/optimum-graphcore: 18 huggingface/optimum-intel: 10 AlekseyKorshuk/optimum-transformers: 6 huggingface/optimum-habana: 3 bhavsarpratik/easy-transformers: 2 *Repository* SeldonIO/MLServer: 82 graphcore/tutorials: 33 huggingface/optimum-graphcore: 18 girafe-ai/msai-python: 14 huggingface/optimum-intel: 10 marqo-ai/marqo: 6 AlekseyKorshuk/optimum-transformers: 6 whatofit/LevelWordWithFreq: 5 philschmid/optimum-transformers-optimizations: 3 huggingface/optimum-habana: 3
[ "### Package & Repository star count\n\n\nThis section shows the package and repository star count, individually.\n\n\n\nThere are 0 packages that have more than 1000 stars.\n\n\nThere are 0 repositories that have more than 1000 stars.\n\n\nThe top 10 in each category are the following:\n\n\n*Package*\n\n\nSeldonIO/MLServer: 288\n\n\nAlekseyKorshuk/optimum-transformers: 114\n\n\nhuggingface/optimum-intel: 61\n\n\nhuggingface/optimum-graphcore: 34\n\n\nhuggingface/optimum-habana: 24\n\n\nbhavsarpratik/easy-transformers: 10\n\n\n*Repository*\n\n\nSeldonIO/MLServer: 288\n\n\nmarqo-ai/marqo: 265\n\n\nAlekseyKorshuk/optimum-transformers: 114\n\n\ngraphcore/tutorials: 65\n\n\nhuggingface/optimum-intel: 61\n\n\nhuggingface/optimum-graphcore: 34\n\n\nhuggingface/optimum-habana: 24\n\n\nphilschmid/optimum-static-quantization: 20\n\n\nphilschmid/optimum-transformers-optimizations: 15\n\n\ngirafe-ai/msai-python: 15", "### Package & Repository fork count\n\n\nThis section shows the package and repository fork count, individually.\n\n\n\nThere are 0 packages that have more than 200 forks.\n\n\nThere are 0 repositories that have more than 200 forks.\n\n\nThe top 10 in each category are the following:\n\n\n*Package*\n\n\nSeldonIO/MLServer: 82\n\n\nhuggingface/optimum-graphcore: 18\n\n\nhuggingface/optimum-intel: 10\n\n\nAlekseyKorshuk/optimum-transformers: 6\n\n\nhuggingface/optimum-habana: 3\n\n\nbhavsarpratik/easy-transformers: 2\n\n\n*Repository*\n\n\nSeldonIO/MLServer: 82\n\n\ngraphcore/tutorials: 33\n\n\nhuggingface/optimum-graphcore: 18\n\n\ngirafe-ai/msai-python: 14\n\n\nhuggingface/optimum-intel: 10\n\n\nmarqo-ai/marqo: 6\n\n\nAlekseyKorshuk/optimum-transformers: 6\n\n\nwhatofit/LevelWordWithFreq: 5\n\n\nphilschmid/optimum-transformers-optimizations: 3\n\n\nhuggingface/optimum-habana: 3" ]
[ "TAGS\n#license-apache-2.0 #github-stars #region-us \n", "### Package & Repository star count\n\n\nThis section shows the package and repository star count, individually.\n\n\n\nThere are 0 packages that have more than 1000 stars.\n\n\nThere are 0 repositories that have more than 1000 stars.\n\n\nThe top 10 in each category are the following:\n\n\n*Package*\n\n\nSeldonIO/MLServer: 288\n\n\nAlekseyKorshuk/optimum-transformers: 114\n\n\nhuggingface/optimum-intel: 61\n\n\nhuggingface/optimum-graphcore: 34\n\n\nhuggingface/optimum-habana: 24\n\n\nbhavsarpratik/easy-transformers: 10\n\n\n*Repository*\n\n\nSeldonIO/MLServer: 288\n\n\nmarqo-ai/marqo: 265\n\n\nAlekseyKorshuk/optimum-transformers: 114\n\n\ngraphcore/tutorials: 65\n\n\nhuggingface/optimum-intel: 61\n\n\nhuggingface/optimum-graphcore: 34\n\n\nhuggingface/optimum-habana: 24\n\n\nphilschmid/optimum-static-quantization: 20\n\n\nphilschmid/optimum-transformers-optimizations: 15\n\n\ngirafe-ai/msai-python: 15", "### Package & Repository fork count\n\n\nThis section shows the package and repository fork count, individually.\n\n\n\nThere are 0 packages that have more than 200 forks.\n\n\nThere are 0 repositories that have more than 200 forks.\n\n\nThe top 10 in each category are the following:\n\n\n*Package*\n\n\nSeldonIO/MLServer: 82\n\n\nhuggingface/optimum-graphcore: 18\n\n\nhuggingface/optimum-intel: 10\n\n\nAlekseyKorshuk/optimum-transformers: 6\n\n\nhuggingface/optimum-habana: 3\n\n\nbhavsarpratik/easy-transformers: 2\n\n\n*Repository*\n\n\nSeldonIO/MLServer: 82\n\n\ngraphcore/tutorials: 33\n\n\nhuggingface/optimum-graphcore: 18\n\n\ngirafe-ai/msai-python: 14\n\n\nhuggingface/optimum-intel: 10\n\n\nmarqo-ai/marqo: 6\n\n\nAlekseyKorshuk/optimum-transformers: 6\n\n\nwhatofit/LevelWordWithFreq: 5\n\n\nphilschmid/optimum-transformers-optimizations: 3\n\n\nhuggingface/optimum-habana: 3" ]
3baed3ff5e5357ef7362130470d47ca0fb92f29b
# tokenizers metrics This dataset contains metrics about the huggingface/tokenizers package. Number of repositories in the dataset: 11460 Number of packages in the dataset: 124 ## Package dependents This contains the data available in the [used-by](https://github.com/huggingface/tokenizers/network/dependents) tab on GitHub. ### Package & Repository star count This section shows the package and repository star count, individually. Package | Repository :-------------------------:|:-------------------------: ![tokenizers-dependent package star count](./tokenizers-dependents/resolve/main/tokenizers-dependent_package_star_count.png) | ![tokenizers-dependent repository star count](./tokenizers-dependents/resolve/main/tokenizers-dependent_repository_star_count.png) There are 14 packages that have more than 1000 stars. There are 41 repositories that have more than 1000 stars. The top 10 in each category are the following: *Package* [huggingface/transformers](https://github.com/huggingface/transformers): 70475 [hankcs/HanLP](https://github.com/hankcs/HanLP): 26958 [facebookresearch/ParlAI](https://github.com/facebookresearch/ParlAI): 9439 [UKPLab/sentence-transformers](https://github.com/UKPLab/sentence-transformers): 8461 [lucidrains/DALLE-pytorch](https://github.com/lucidrains/DALLE-pytorch): 4816 [ThilinaRajapakse/simpletransformers](https://github.com/ThilinaRajapakse/simpletransformers): 3303 [neuml/txtai](https://github.com/neuml/txtai): 2530 [QData/TextAttack](https://github.com/QData/TextAttack): 2087 [lukas-blecher/LaTeX-OCR](https://github.com/lukas-blecher/LaTeX-OCR): 1981 [utterworks/fast-bert](https://github.com/utterworks/fast-bert): 1760 *Repository* [huggingface/transformers](https://github.com/huggingface/transformers): 70480 [hankcs/HanLP](https://github.com/hankcs/HanLP): 26958 [RasaHQ/rasa](https://github.com/RasaHQ/rasa): 14842 [facebookresearch/ParlAI](https://github.com/facebookresearch/ParlAI): 9440 [gradio-app/gradio](https://github.com/gradio-app/gradio): 9169 [UKPLab/sentence-transformers](https://github.com/UKPLab/sentence-transformers): 8462 [microsoft/unilm](https://github.com/microsoft/unilm): 6650 [EleutherAI/gpt-neo](https://github.com/EleutherAI/gpt-neo): 6431 [moyix/fauxpilot](https://github.com/moyix/fauxpilot): 6300 [lucidrains/DALLE-pytorch](https://github.com/lucidrains/DALLE-pytorch): 4816 ### Package & Repository fork count This section shows the package and repository fork count, individually. Package | Repository :-------------------------:|:-------------------------: ![tokenizers-dependent package forks count](./tokenizers-dependents/resolve/main/tokenizers-dependent_package_forks_count.png) | ![tokenizers-dependent repository forks count](./tokenizers-dependents/resolve/main/tokenizers-dependent_repository_forks_count.png) There are 11 packages that have more than 200 forks. There are 39 repositories that have more than 200 forks. 
The top 10 in each category are the following: *Package* [huggingface/transformers](https://github.com/huggingface/transformers): 16158 [hankcs/HanLP](https://github.com/hankcs/HanLP): 7388 [facebookresearch/ParlAI](https://github.com/facebookresearch/ParlAI): 1920 [UKPLab/sentence-transformers](https://github.com/UKPLab/sentence-transformers): 1695 [ThilinaRajapakse/simpletransformers](https://github.com/ThilinaRajapakse/simpletransformers): 658 [lucidrains/DALLE-pytorch](https://github.com/lucidrains/DALLE-pytorch): 543 [utterworks/fast-bert](https://github.com/utterworks/fast-bert): 336 [nyu-mll/jiant](https://github.com/nyu-mll/jiant): 273 [QData/TextAttack](https://github.com/QData/TextAttack): 269 [lukas-blecher/LaTeX-OCR](https://github.com/lukas-blecher/LaTeX-OCR): 245 *Repository* [huggingface/transformers](https://github.com/huggingface/transformers): 16157 [hankcs/HanLP](https://github.com/hankcs/HanLP): 7388 [RasaHQ/rasa](https://github.com/RasaHQ/rasa): 4105 [plotly/dash-sample-apps](https://github.com/plotly/dash-sample-apps): 2795 [facebookresearch/ParlAI](https://github.com/facebookresearch/ParlAI): 1920 [UKPLab/sentence-transformers](https://github.com/UKPLab/sentence-transformers): 1695 [microsoft/unilm](https://github.com/microsoft/unilm): 1223 [openvinotoolkit/open_model_zoo](https://github.com/openvinotoolkit/open_model_zoo): 1207 [bhaveshlohana/HacktoberFest2020-Contributions](https://github.com/bhaveshlohana/HacktoberFest2020-Contributions): 1020 [data-science-on-aws/data-science-on-aws](https://github.com/data-science-on-aws/data-science-on-aws): 884
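As a hedged sketch, the top-10 lists above can be recomputed by sorting a split on star count (split and column names are taken from this dataset's recorded metadata and are assumed current):

```python
from datasets import load_dataset

repos = load_dataset("open-source-metrics/tokenizers-dependents", split="repository")

# Recompute the "top 10 in each category" list above by sorting on stars.
top10 = sorted(repos, key=lambda row: row["stars"], reverse=True)[:10]
for row in top10:
    print(f"{row['name']}: {row['stars']}")
```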
open-source-metrics/tokenizers-dependents
[ "license:apache-2.0", "github-stars", "region:us" ]
2022-09-05T14:34:23+00:00
{"license": "apache-2.0", "pretty_name": "tokenizers metrics", "tags": ["github-stars"], "dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "stars", "dtype": "int64"}, {"name": "forks", "dtype": "int64"}], "splits": [{"name": "package", "num_bytes": 95, "num_examples": 2}, {"name": "repository", "num_bytes": 1893, "num_examples": 42}], "download_size": 5046, "dataset_size": 1988}}
2024-02-16T22:31:58+00:00
[]
[]
TAGS #license-apache-2.0 #github-stars #region-us
tokenizers metrics ================== This dataset contains metrics about the huggingface/tokenizers package. Number of repositories in the dataset: 11460 Number of packages in the dataset: 124 Package dependents ------------------ This contains the data available in the used-by tab on GitHub. ### Package & Repository star count This section shows the package and repository star count, individually. There are 14 packages that have more than 1000 stars. There are 41 repositories that have more than 1000 stars. The top 10 in each category are the following: *Package* huggingface/transformers: 70475 hankcs/HanLP: 26958 facebookresearch/ParlAI: 9439 UKPLab/sentence-transformers: 8461 lucidrains/DALLE-pytorch: 4816 ThilinaRajapakse/simpletransformers: 3303 neuml/txtai: 2530 QData/TextAttack: 2087 lukas-blecher/LaTeX-OCR: 1981 utterworks/fast-bert: 1760 *Repository* huggingface/transformers: 70480 hankcs/HanLP: 26958 RasaHQ/rasa: 14842 facebookresearch/ParlAI: 9440 gradio-app/gradio: 9169 UKPLab/sentence-transformers: 8462 microsoft/unilm: 6650 EleutherAI/gpt-neo: 6431 moyix/fauxpilot: 6300 lucidrains/DALLE-pytorch: 4816 ### Package & Repository fork count This section shows the package and repository fork count, individually. There are 11 packages that have more than 200 forks. There are 39 repositories that have more than 200 forks. The top 10 in each category are the following: *Package* huggingface/transformers: 16158 hankcs/HanLP: 7388 facebookresearch/ParlAI: 1920 UKPLab/sentence-transformers: 1695 ThilinaRajapakse/simpletransformers: 658 lucidrains/DALLE-pytorch: 543 utterworks/fast-bert: 336 nyu-mll/jiant: 273 QData/TextAttack: 269 lukas-blecher/LaTeX-OCR: 245 *Repository* huggingface/transformers: 16157 hankcs/HanLP: 7388 RasaHQ/rasa: 4105 plotly/dash-sample-apps: 2795 facebookresearch/ParlAI: 1920 UKPLab/sentence-transformers: 1695 microsoft/unilm: 1223 openvinotoolkit/open\_model\_zoo: 1207 bhaveshlohana/HacktoberFest2020-Contributions: 1020 data-science-on-aws/data-science-on-aws: 884
[ "### Package & Repository star count\n\n\nThis section shows the package and repository star count, individually.\n\n\n\nThere are 14 packages that have more than 1000 stars.\n\n\nThere are 41 repositories that have more than 1000 stars.\n\n\nThe top 10 in each category are the following:\n\n\n*Package*\n\n\nhuggingface/transformers: 70475\n\n\nhankcs/HanLP: 26958\n\n\nfacebookresearch/ParlAI: 9439\n\n\nUKPLab/sentence-transformers: 8461\n\n\nlucidrains/DALLE-pytorch: 4816\n\n\nThilinaRajapakse/simpletransformers: 3303\n\n\nneuml/txtai: 2530\n\n\nQData/TextAttack: 2087\n\n\nlukas-blecher/LaTeX-OCR: 1981\n\n\nutterworks/fast-bert: 1760\n\n\n*Repository*\n\n\nhuggingface/transformers: 70480\n\n\nhankcs/HanLP: 26958\n\n\nRasaHQ/rasa: 14842\n\n\nfacebookresearch/ParlAI: 9440\n\n\ngradio-app/gradio: 9169\n\n\nUKPLab/sentence-transformers: 8462\n\n\nmicrosoft/unilm: 6650\n\n\nEleutherAI/gpt-neo: 6431\n\n\nmoyix/fauxpilot: 6300\n\n\nlucidrains/DALLE-pytorch: 4816", "### Package & Repository fork count\n\n\nThis section shows the package and repository fork count, individually.\n\n\n\nThere are 11 packages that have more than 200 forks.\n\n\nThere are 39 repositories that have more than 200 forks.\n\n\nThe top 10 in each category are the following:\n\n\n*Package*\n\n\nhuggingface/transformers: 16158\n\n\nhankcs/HanLP: 7388\n\n\nfacebookresearch/ParlAI: 1920\n\n\nUKPLab/sentence-transformers: 1695\n\n\nThilinaRajapakse/simpletransformers: 658\n\n\nlucidrains/DALLE-pytorch: 543\n\n\nutterworks/fast-bert: 336\n\n\nnyu-mll/jiant: 273\n\n\nQData/TextAttack: 269\n\n\nlukas-blecher/LaTeX-OCR: 245\n\n\n*Repository*\n\n\nhuggingface/transformers: 16157\n\n\nhankcs/HanLP: 7388\n\n\nRasaHQ/rasa: 4105\n\n\nplotly/dash-sample-apps: 2795\n\n\nfacebookresearch/ParlAI: 1920\n\n\nUKPLab/sentence-transformers: 1695\n\n\nmicrosoft/unilm: 1223\n\n\nopenvinotoolkit/open\\_model\\_zoo: 1207\n\n\nbhaveshlohana/HacktoberFest2020-Contributions: 1020\n\n\ndata-science-on-aws/data-science-on-aws: 884" ]
[ "TAGS\n#license-apache-2.0 #github-stars #region-us \n", "### Package & Repository star count\n\n\nThis section shows the package and repository star count, individually.\n\n\n\nThere are 14 packages that have more than 1000 stars.\n\n\nThere are 41 repositories that have more than 1000 stars.\n\n\nThe top 10 in each category are the following:\n\n\n*Package*\n\n\nhuggingface/transformers: 70475\n\n\nhankcs/HanLP: 26958\n\n\nfacebookresearch/ParlAI: 9439\n\n\nUKPLab/sentence-transformers: 8461\n\n\nlucidrains/DALLE-pytorch: 4816\n\n\nThilinaRajapakse/simpletransformers: 3303\n\n\nneuml/txtai: 2530\n\n\nQData/TextAttack: 2087\n\n\nlukas-blecher/LaTeX-OCR: 1981\n\n\nutterworks/fast-bert: 1760\n\n\n*Repository*\n\n\nhuggingface/transformers: 70480\n\n\nhankcs/HanLP: 26958\n\n\nRasaHQ/rasa: 14842\n\n\nfacebookresearch/ParlAI: 9440\n\n\ngradio-app/gradio: 9169\n\n\nUKPLab/sentence-transformers: 8462\n\n\nmicrosoft/unilm: 6650\n\n\nEleutherAI/gpt-neo: 6431\n\n\nmoyix/fauxpilot: 6300\n\n\nlucidrains/DALLE-pytorch: 4816", "### Package & Repository fork count\n\n\nThis section shows the package and repository fork count, individually.\n\n\n\nThere are 11 packages that have more than 200 forks.\n\n\nThere are 39 repositories that have more than 200 forks.\n\n\nThe top 10 in each category are the following:\n\n\n*Package*\n\n\nhuggingface/transformers: 16158\n\n\nhankcs/HanLP: 7388\n\n\nfacebookresearch/ParlAI: 1920\n\n\nUKPLab/sentence-transformers: 1695\n\n\nThilinaRajapakse/simpletransformers: 658\n\n\nlucidrains/DALLE-pytorch: 543\n\n\nutterworks/fast-bert: 336\n\n\nnyu-mll/jiant: 273\n\n\nQData/TextAttack: 269\n\n\nlukas-blecher/LaTeX-OCR: 245\n\n\n*Repository*\n\n\nhuggingface/transformers: 16157\n\n\nhankcs/HanLP: 7388\n\n\nRasaHQ/rasa: 4105\n\n\nplotly/dash-sample-apps: 2795\n\n\nfacebookresearch/ParlAI: 1920\n\n\nUKPLab/sentence-transformers: 1695\n\n\nmicrosoft/unilm: 1223\n\n\nopenvinotoolkit/open\\_model\\_zoo: 1207\n\n\nbhaveshlohana/HacktoberFest2020-Contributions: 1020\n\n\ndata-science-on-aws/data-science-on-aws: 884" ]
f90059cc985dd576947151f36883ca3607f2a195
# datasets metrics This dataset contains metrics about the huggingface/datasets package. Number of repositories in the dataset: 4997 Number of packages in the dataset: 215 ## Package dependents This contains the data available in the [used-by](https://github.com/huggingface/datasets/network/dependents) tab on GitHub. ### Package & Repository star count This section shows the package and repository star count, individually. Package | Repository :-------------------------:|:-------------------------: ![datasets-dependent package star count](./datasets-dependents/resolve/main/datasets-dependent_package_star_count.png) | ![datasets-dependent repository star count](./datasets-dependents/resolve/main/datasets-dependent_repository_star_count.png) There are 22 packages that have more than 1000 stars. There are 43 repositories that have more than 1000 stars. The top 10 in each category are the following: *Package* [huggingface/transformers](https://github.com/huggingface/transformers): 70480 [fastai/fastbook](https://github.com/fastai/fastbook): 16052 [jina-ai/jina](https://github.com/jina-ai/jina): 16052 [borisdayma/dalle-mini](https://github.com/borisdayma/dalle-mini): 12873 [allenai/allennlp](https://github.com/allenai/allennlp): 11198 [facebookresearch/ParlAI](https://github.com/facebookresearch/ParlAI): 9440 [huggingface/tokenizers](https://github.com/huggingface/tokenizers): 5867 [huggingface/diffusers](https://github.com/huggingface/diffusers): 5457 [PaddlePaddle/PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP): 5422 [HIT-SCIR/ltp](https://github.com/HIT-SCIR/ltp): 4058 *Repository* [huggingface/transformers](https://github.com/huggingface/transformers): 70481 [google-research/google-research](https://github.com/google-research/google-research): 25092 [ray-project/ray](https://github.com/ray-project/ray): 22047 [allenai/allennlp](https://github.com/allenai/allennlp): 11198 [facebookresearch/ParlAI](https://github.com/facebookresearch/ParlAI): 9440 [gradio-app/gradio](https://github.com/gradio-app/gradio): 9169 [aws/amazon-sagemaker-examples](https://github.com/aws/amazon-sagemaker-examples): 7343 [microsoft/unilm](https://github.com/microsoft/unilm): 6650 [deeppavlov/DeepPavlov](https://github.com/deeppavlov/DeepPavlov): 5844 [huggingface/diffusers](https://github.com/huggingface/diffusers): 5457 ### Package & Repository fork count This section shows the package and repository fork count, individually. Package | Repository :-------------------------:|:-------------------------: ![datasets-dependent package forks count](./datasets-dependents/resolve/main/datasets-dependent_package_forks_count.png) | ![datasets-dependent repository forks count](./datasets-dependents/resolve/main/datasets-dependent_repository_forks_count.png) There are 17 packages that have more than 200 forks. There are 40 repositories that have more than 200 forks. 
The top 10 in each category are the following: *Package* [huggingface/transformers](https://github.com/huggingface/transformers): 16157 [fastai/fastbook](https://github.com/fastai/fastbook): 6033 [allenai/allennlp](https://github.com/allenai/allennlp): 2218 [jina-ai/jina](https://github.com/jina-ai/jina): 1967 [facebookresearch/ParlAI](https://github.com/facebookresearch/ParlAI): 1920 [PaddlePaddle/PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP): 1583 [HIT-SCIR/ltp](https://github.com/HIT-SCIR/ltp): 988 [borisdayma/dalle-mini](https://github.com/borisdayma/dalle-mini): 945 [ThilinaRajapakse/simpletransformers](https://github.com/ThilinaRajapakse/simpletransformers): 658 [huggingface/tokenizers](https://github.com/huggingface/tokenizers): 502 *Repository* [huggingface/transformers](https://github.com/huggingface/transformers): 16157 [google-research/google-research](https://github.com/google-research/google-research): 6139 [aws/amazon-sagemaker-examples](https://github.com/aws/amazon-sagemaker-examples): 5493 [ray-project/ray](https://github.com/ray-project/ray): 3876 [allenai/allennlp](https://github.com/allenai/allennlp): 2218 [facebookresearch/ParlAI](https://github.com/facebookresearch/ParlAI): 1920 [PaddlePaddle/PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP): 1583 [x4nth055/pythoncode-tutorials](https://github.com/x4nth055/pythoncode-tutorials): 1435 [microsoft/unilm](https://github.com/microsoft/unilm): 1223 [deeppavlov/DeepPavlov](https://github.com/deeppavlov/DeepPavlov): 1055
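The headline threshold counts quoted above can be checked with a short, hedged sketch (same assumptions about split and column names as for the sibling metrics datasets):

```python
from datasets import load_dataset

ds = load_dataset("open-source-metrics/datasets-dependents")

# Re-derive the headline counts quoted above, e.g. packages with >1000 stars.
for split in ("package", "repository"):
    many_stars = sum(1 for row in ds[split] if row["stars"] > 1000)
    many_forks = sum(1 for row in ds[split] if row["forks"] > 200)
    print(f"{split}: {many_stars} with >1000 stars, {many_forks} with >200 forks")
```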
open-source-metrics/datasets-dependents
[ "license:apache-2.0", "github-stars", "region:us" ]
2022-09-05T14:38:22+00:00
{"license": "apache-2.0", "pretty_name": "datasets metrics", "tags": ["github-stars"], "dataset_info": {"features": [{"name": "name", "dtype": "string"}, {"name": "stars", "dtype": "int64"}, {"name": "forks", "dtype": "int64"}], "splits": [{"name": "package", "num_bytes": 15485, "num_examples": 376}, {"name": "repository", "num_bytes": 503612, "num_examples": 10931}], "download_size": 310753, "dataset_size": 519097}}
2024-02-16T20:05:31+00:00
[]
[]
TAGS #license-apache-2.0 #github-stars #region-us
datasets metrics ================ This dataset contains metrics about the huggingface/datasets package. Number of repositories in the dataset: 4997 Number of packages in the dataset: 215 Package dependents ------------------ This contains the data available in the used-by tab on GitHub. ### Package & Repository star count This section shows the package and repository star count, individually. There are 22 packages that have more than 1000 stars. There are 43 repositories that have more than 1000 stars. The top 10 in each category are the following: *Package* huggingface/transformers: 70480 fastai/fastbook: 16052 jina-ai/jina: 16052 borisdayma/dalle-mini: 12873 allenai/allennlp: 11198 facebookresearch/ParlAI: 9440 huggingface/tokenizers: 5867 huggingface/diffusers: 5457 PaddlePaddle/PaddleNLP: 5422 HIT-SCIR/ltp: 4058 *Repository* huggingface/transformers: 70481 google-research/google-research: 25092 ray-project/ray: 22047 allenai/allennlp: 11198 facebookresearch/ParlAI: 9440 gradio-app/gradio: 9169 aws/amazon-sagemaker-examples: 7343 microsoft/unilm: 6650 deeppavlov/DeepPavlov: 5844 huggingface/diffusers: 5457 ### Package & Repository fork count This section shows the package and repository fork count, individually. There are 17 packages that have more than 200 forks. There are 40 repositories that have more than 200 forks. The top 10 in each category are the following: *Package* huggingface/transformers: 16157 fastai/fastbook: 6033 allenai/allennlp: 2218 jina-ai/jina: 1967 facebookresearch/ParlAI: 1920 PaddlePaddle/PaddleNLP: 1583 HIT-SCIR/ltp: 988 borisdayma/dalle-mini: 945 ThilinaRajapakse/simpletransformers: 658 huggingface/tokenizers: 502 *Repository* huggingface/transformers: 16157 google-research/google-research: 6139 aws/amazon-sagemaker-examples: 5493 ray-project/ray: 3876 allenai/allennlp: 2218 facebookresearch/ParlAI: 1920 PaddlePaddle/PaddleNLP: 1583 x4nth055/pythoncode-tutorials: 1435 microsoft/unilm: 1223 deeppavlov/DeepPavlov: 1055
[ "### Package & Repository star count\n\n\nThis section shows the package and repository star count, individually.\n\n\n\nThere are 22 packages that have more than 1000 stars.\n\n\nThere are 43 repositories that have more than 1000 stars.\n\n\nThe top 10 in each category are the following:\n\n\n*Package*\n\n\nhuggingface/transformers: 70480\n\n\nfastai/fastbook: 16052\n\n\njina-ai/jina: 16052\n\n\nborisdayma/dalle-mini: 12873\n\n\nallenai/allennlp: 11198\n\n\nfacebookresearch/ParlAI: 9440\n\n\nhuggingface/tokenizers: 5867\n\n\nhuggingface/diffusers: 5457\n\n\nPaddlePaddle/PaddleNLP: 5422\n\n\nHIT-SCIR/ltp: 4058\n\n\n*Repository*\n\n\nhuggingface/transformers: 70481\n\n\ngoogle-research/google-research: 25092\n\n\nray-project/ray: 22047\n\n\nallenai/allennlp: 11198\n\n\nfacebookresearch/ParlAI: 9440\n\n\ngradio-app/gradio: 9169\n\n\naws/amazon-sagemaker-examples: 7343\n\n\nmicrosoft/unilm: 6650\n\n\ndeeppavlov/DeepPavlov: 5844\n\n\nhuggingface/diffusers: 5457", "### Package & Repository fork count\n\n\nThis section shows the package and repository fork count, individually.\n\n\n\nThere are 17 packages that have more than 200 forks.\n\n\nThere are 40 repositories that have more than 200 forks.\n\n\nThe top 10 in each category are the following:\n\n\n*Package*\n\n\nhuggingface/transformers: 16157\n\n\nfastai/fastbook: 6033\n\n\nallenai/allennlp: 2218\n\n\njina-ai/jina: 1967\n\n\nfacebookresearch/ParlAI: 1920\n\n\nPaddlePaddle/PaddleNLP: 1583\n\n\nHIT-SCIR/ltp: 988\n\n\nborisdayma/dalle-mini: 945\n\n\nThilinaRajapakse/simpletransformers: 658\n\n\nhuggingface/tokenizers: 502\n\n\n*Repository*\n\n\nhuggingface/transformers: 16157\n\n\ngoogle-research/google-research: 6139\n\n\naws/amazon-sagemaker-examples: 5493\n\n\nray-project/ray: 3876\n\n\nallenai/allennlp: 2218\n\n\nfacebookresearch/ParlAI: 1920\n\n\nPaddlePaddle/PaddleNLP: 1583\n\n\nx4nth055/pythoncode-tutorials: 1435\n\n\nmicrosoft/unilm: 1223\n\n\ndeeppavlov/DeepPavlov: 1055" ]
[ "TAGS\n#license-apache-2.0 #github-stars #region-us \n", "### Package & Repository star count\n\n\nThis section shows the package and repository star count, individually.\n\n\n\nThere are 22 packages that have more than 1000 stars.\n\n\nThere are 43 repositories that have more than 1000 stars.\n\n\nThe top 10 in each category are the following:\n\n\n*Package*\n\n\nhuggingface/transformers: 70480\n\n\nfastai/fastbook: 16052\n\n\njina-ai/jina: 16052\n\n\nborisdayma/dalle-mini: 12873\n\n\nallenai/allennlp: 11198\n\n\nfacebookresearch/ParlAI: 9440\n\n\nhuggingface/tokenizers: 5867\n\n\nhuggingface/diffusers: 5457\n\n\nPaddlePaddle/PaddleNLP: 5422\n\n\nHIT-SCIR/ltp: 4058\n\n\n*Repository*\n\n\nhuggingface/transformers: 70481\n\n\ngoogle-research/google-research: 25092\n\n\nray-project/ray: 22047\n\n\nallenai/allennlp: 11198\n\n\nfacebookresearch/ParlAI: 9440\n\n\ngradio-app/gradio: 9169\n\n\naws/amazon-sagemaker-examples: 7343\n\n\nmicrosoft/unilm: 6650\n\n\ndeeppavlov/DeepPavlov: 5844\n\n\nhuggingface/diffusers: 5457", "### Package & Repository fork count\n\n\nThis section shows the package and repository fork count, individually.\n\n\n\nThere are 17 packages that have more than 200 forks.\n\n\nThere are 40 repositories that have more than 200 forks.\n\n\nThe top 10 in each category are the following:\n\n\n*Package*\n\n\nhuggingface/transformers: 16157\n\n\nfastai/fastbook: 6033\n\n\nallenai/allennlp: 2218\n\n\njina-ai/jina: 1967\n\n\nfacebookresearch/ParlAI: 1920\n\n\nPaddlePaddle/PaddleNLP: 1583\n\n\nHIT-SCIR/ltp: 988\n\n\nborisdayma/dalle-mini: 945\n\n\nThilinaRajapakse/simpletransformers: 658\n\n\nhuggingface/tokenizers: 502\n\n\n*Repository*\n\n\nhuggingface/transformers: 16157\n\n\ngoogle-research/google-research: 6139\n\n\naws/amazon-sagemaker-examples: 5493\n\n\nray-project/ray: 3876\n\n\nallenai/allennlp: 2218\n\n\nfacebookresearch/ParlAI: 1920\n\n\nPaddlePaddle/PaddleNLP: 1583\n\n\nx4nth055/pythoncode-tutorials: 1435\n\n\nmicrosoft/unilm: 1223\n\n\ndeeppavlov/DeepPavlov: 1055" ]
4cae50882a24a955155db7d170b571e93ab8102f
# POS Tagging Dataset ## Original Data Source #### Conll2003 E. F. Tjong Kim Sang and F. De Meulder, Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, 2003, pp. 142–147. #### The Penn Treebank M. P. Marcus, B. Santorini and M. A. Marcinkiewicz, Comput. Linguist., 1993, 19, 313–330. ## Citation BatteryDataExtractor: battery-aware text-mining software embedded with BERT models
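Assuming the dataset follows the standard Hub loading conventions, a minimal sketch for inspecting it (the `train` split name and the record layout are assumptions, not documented in this card):

```python
from datasets import load_dataset

# Repository id taken from this card; split and column names are assumptions.
pos = load_dataset("batterydata/pos_tagging", split="train")
print(pos[0])  # expected to contain token and POS-tag sequences
```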
batterydata/pos_tagging
[ "task_categories:token-classification", "language:en", "license:apache-2.0", "region:us" ]
2022-09-05T14:44:21+00:00
{"language": ["en"], "license": ["apache-2.0"], "task_categories": ["token-classification"], "pretty_name": "Part-of-speech(POS) Tagging Dataset for BatteryDataExtractor"}
2022-09-05T15:05:33+00:00
[]
[ "en" ]
TAGS #task_categories-token-classification #language-English #license-apache-2.0 #region-us
# POS Tagging Dataset ## Original Data Source #### Conll2003 E. F. Tjong Kim Sang and F. De Meulder, Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, 2003, pp. 142–147. #### The Penn Treebank M. P. Marcus, B. Santorini and M. A. Marcinkiewicz, Comput. Linguist., 1993, 19, 313–330. BatteryDataExtractor: battery-aware text-mining software embedded with BERT models
[ "# POS Tagging Dataset", "## Original Data Source", "#### Conll2003\n\nE. F. Tjong Kim Sang and F. De Meulder, Proceedings of the\nSeventh Conference on Natural Language Learning at HLT-\nNAACL 2003, 2003, pp. 142–147.", "#### The Peen Treebank\n\nM. P. Marcus, B. Santorini and M. A. Marcinkiewicz, Comput.\nLinguist., 1993, 19, 313–330.\n\nBatteryDataExtractor: battery-aware text-mining software embedded with BERT models" ]
[ "TAGS\n#task_categories-token-classification #language-English #license-apache-2.0 #region-us \n", "# POS Tagging Dataset", "## Original Data Source", "#### Conll2003\n\nE. F. Tjong Kim Sang and F. De Meulder, Proceedings of the\nSeventh Conference on Natural Language Learning at HLT-\nNAACL 2003, 2003, pp. 142–147.", "#### The Peen Treebank\n\nM. P. Marcus, B. Santorini and M. A. Marcinkiewicz, Comput.\nLinguist., 1993, 19, 313–330.\n\nBatteryDataExtractor: battery-aware text-mining software embedded with BERT models" ]
39190a2140c5fc237fed556ef88449015271850b
# Abbreviation Detection Dataset ## Original Data Source #### PLOS L. Zilio, H. Saadany, P. Sharma, D. Kanojia and C. Orasan, PLOD: An Abbreviation Detection Dataset for Scientific Documents, 2022, https://arxiv.org/abs/2204.12061. #### SDU@AAAI-21 A. P. B. Veyseh, F. Dernoncourt, Q. H. Tran and T. H. Nguyen, Proceedings of the 28th International Conference on Computational Linguistics, 2020, pp. 3285–3301. ## Citation BatteryDataExtractor: battery-aware text-mining software embedded with BERT models
batterydata/abbreviation_detection
[ "task_categories:token-classification", "language:en", "license:apache-2.0", "arxiv:2204.12061", "region:us" ]
2022-09-05T14:46:13+00:00
{"language": ["en"], "license": ["apache-2.0"], "task_categories": ["token-classification"], "pretty_name": "Abbreviation Detection Dataset for BatteryDataExtractor"}
2022-09-05T15:02:48+00:00
[ "2204.12061" ]
[ "en" ]
TAGS #task_categories-token-classification #language-English #license-apache-2.0 #arxiv-2204.12061 #region-us
# Abbreviation Detection Dataset ## Original Data Source #### PLOS L. Zilio, H. Saadany, P. Sharma, D. Kanojia and C. Orasan, PLOD: An Abbreviation Detection Dataset for Scientific Documents, 2022, URL #### SDU@AAAI-21 A. P. B. Veyseh, F. Dernoncourt, Q. H. Tran and T. H. Nguyen, Proceedings of the 28th International Conference on Computational Linguistics, 2020, pp. 3285–3301. BatteryDataExtractor: battery-aware text-mining software embedded with BERT models
[ "# Abbreviation Detection Dataset", "## Original Data Source", "#### PLOS\n\nI. Zilio, H. Saadany, P. Sharma, D. Kanojia and C. Orasan,\nPLOD: An Abbreviation Detection Dataset for Scientific Docu-\nments, 2022, URL", "#### SDU@AAAI-21\n\nA. P. B. Veyseh, F. Dernoncourt, Q. H. Tran and T. H. Nguyen,\nProceedings of the 28th International Conference on Compu-\ntational Linguistics, 2020, pp. 3285–3301\n\nBatteryDataExtractor: battery-aware text-mining software embedded with BERT models" ]
[ "TAGS\n#task_categories-token-classification #language-English #license-apache-2.0 #arxiv-2204.12061 #region-us \n", "# Abbreviation Detection Dataset", "## Original Data Source", "#### PLOS\n\nI. Zilio, H. Saadany, P. Sharma, D. Kanojia and C. Orasan,\nPLOD: An Abbreviation Detection Dataset for Scientific Docu-\nments, 2022, URL", "#### SDU@AAAI-21\n\nA. P. B. Veyseh, F. Dernoncourt, Q. H. Tran and T. H. Nguyen,\nProceedings of the 28th International Conference on Compu-\ntational Linguistics, 2020, pp. 3285–3301\n\nBatteryDataExtractor: battery-aware text-mining software embedded with BERT models" ]
4976bb5ace12abe22747787d3663a203946c319e
# CNER Dataset ## Original Data Source #### CHEMDNER M. Krallinger, O. Rabal, F. Leitner, M. Vazquez, D. Salgado, Z. Lu, R. Leaman, Y. Lu, D. Ji, D. M. Lowe et al., J. Cheminf., 2015, 7, 1–17. #### MatScholar L. Weston, V. Tshitoyan, J. Dagdelen, O. Kononova, A. Trewartha, K. A. Persson, G. Ceder and A. Jain, J. Chem. Inf. Model., 2019, 59, 3692–3702. #### SOFC A. Friedrich, H. Adel, F. Tomazic, J. Hingerl, R. Benteau, A. Maruscyk and L. Lange, The SOFC-exp corpus and neural approaches to information extraction in the materials science domain, 2020, https://arxiv.org/abs/2006.03039. #### BioNLP G. Crichton, S. Pyysalo, B. Chiu and A. Korhonen, BMC Bioinf., 2017, 18, 1–14. ## Citation BatteryDataExtractor: battery-aware text-mining software embedded with BERT models
batterydata/cner
[ "task_categories:token-classification", "language:en", "license:apache-2.0", "arxiv:2006.03039", "region:us" ]
2022-09-05T14:49:33+00:00
{"language": ["en"], "license": ["apache-2.0"], "task_categories": ["token-classification"], "pretty_name": "Chemical Named Entity Recognition (CNER) Dataset for BatteryDataExtractor"}
2022-09-05T15:07:43+00:00
[ "2006.03039" ]
[ "en" ]
TAGS #task_categories-token-classification #language-English #license-apache-2.0 #arxiv-2006.03039 #region-us
# CNER Dataset ## Original Data Source #### CHEMDNER M. Krallinger, O. Rabal, F. Leitner, M. Vazquez, D. Salgado, Z. Lu, R. Leaman, Y. Lu, D. Ji, D. M. Lowe et al., J. Cheminf., 2015, 7, 1–17. #### MatScholar L. Weston, V. Tshitoyan, J. Dagdelen, O. Kononova, A. Trewartha, K. A. Persson, G. Ceder and A. Jain, J. Chem. Inf. Model., 2019, 59, 3692–3702. #### SOFC A. Friedrich, H. Adel, F. Tomazic, J. Hingerl, R. Benteau, A. Maruscyk and L. Lange, The SOFC-exp corpus and neural approaches to information extraction in the materials science domain, 2020, URL #### BioNLP G. Crichton, S. Pyysalo, B. Chiu and A. Korhonen, BMC Bioinf., 2017, 18, 1–14. BatteryDataExtractor: battery-aware text-mining software embedded with BERT models
[ "# CNER Dataset", "## Original Data Source", "#### CHEMDNER\nM. Krallinger, O. Rabal, F. Leitner, M. Vazquez, D. Salgado,\nZ. Lu, R. Leaman, Y. Lu, D. Ji, D. M. Lowe et al., J. Cheminf.,\n2015, 7, 1–17.", "#### MatScholar\nI. Weston, V. Tshitoyan, J. Dagdelen, O. Kononova, A. Tre-\nwartha, K. A. Persson, G. Ceder and A. Jain, J. Chem. Inf.\nModel., 2019, 59, 3692–3702.", "#### SOFC\nA. Friedrich, H. Adel, F. Tomazic, J. Hingerl, R. Benteau,\nA. Maruscyk and L. Lange, The SOFC-exp corpus and neural\napproaches to information extraction in the materials science\ndomain, 2020, URL", "#### BioNLP\nG. Crichton, S. Pyysalo, B. Chiu and A. Korhonen, BMC Bioinf.,\n2017, 18, 1–14.\n\nBatteryDataExtractor: battery-aware text-mining software embedded with BERT models" ]
[ "TAGS\n#task_categories-token-classification #language-English #license-apache-2.0 #arxiv-2006.03039 #region-us \n", "# CNER Dataset", "## Original Data Source", "#### CHEMDNER\nM. Krallinger, O. Rabal, F. Leitner, M. Vazquez, D. Salgado,\nZ. Lu, R. Leaman, Y. Lu, D. Ji, D. M. Lowe et al., J. Cheminf.,\n2015, 7, 1–17.", "#### MatScholar\nI. Weston, V. Tshitoyan, J. Dagdelen, O. Kononova, A. Tre-\nwartha, K. A. Persson, G. Ceder and A. Jain, J. Chem. Inf.\nModel., 2019, 59, 3692–3702.", "#### SOFC\nA. Friedrich, H. Adel, F. Tomazic, J. Hingerl, R. Benteau,\nA. Maruscyk and L. Lange, The SOFC-exp corpus and neural\napproaches to information extraction in the materials science\ndomain, 2020, URL", "#### BioNLP\nG. Crichton, S. Pyysalo, B. Chiu and A. Korhonen, BMC Bioinf.,\n2017, 18, 1–14.\n\nBatteryDataExtractor: battery-aware text-mining software embedded with BERT models" ]
3d2bbff4d30d5c41d2cbf5b1d55fbc8d10cfdbaa
# Dataset Card for Code Comment Classification ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://github.com/poojaruhal/RP-class-comment-classification - **Repository:** https://github.com/poojaruhal/RP-class-comment-classification - **Paper:** https://doi.org/10.1016/j.jss.2021.111047 - **Point of Contact:** https://poojaruhal.github.io ### Dataset Summary The dataset contains class comments extracted from various big and diverse open-source projects of three programming languages Java, Smalltalk, and Python. ### Supported Tasks and Leaderboards Single-label text classification and Multi-label text classification ### Languages Java, Python, Smalltalk ## Dataset Structure ### Data Instances ```json { "class" : "Absy.java", "comment":"* Azure Blob File System implementation of AbstractFileSystem. * This impl delegates to the old FileSystem", "summary":"Azure Blob File System implementation of AbstractFileSystem.", "expand":"This impl delegates to the old FileSystem", "rational":"", "deprecation":"", "usage":"", "exception":"", "todo":"", "incomplete":"", "commentedcode":"", "directive":"", "formatter":"", "license":"", "ownership":"", "pointer":"", "autogenerated":"", "noise":"", "warning":"", "recommendation":"", "precondition":"", "codingGuidelines":"", "extension":"", "subclassexplnation":"", "observation":"" } ``` ### Data Fields class: name of the class with the language extension. comment: class comment of the class. categories: the category that the sentence is classified into. It indicates a particular type of information. ### Data Splits 10-fold cross validation ## Dataset Creation ### Curation Rationale To identify the information embedded in the class comments across various projects and programming languages. ### Source Data #### Initial Data Collection and Normalization It contains the dataset extracted from various open-source projects of three programming languages Java, Smalltalk, and Python. - #### Java Each file contains all the extracted class comments from one project. We have a total of six Java projects. We chose a sample of 350 comments from all these files for our experiment. - [Eclipse.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Java/) - Extracted class comments from the Eclipse project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [Eclipse](https://github.com/eclipse). - [Guava.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Java/Guava.csv) - Extracted class comments from the Guava project. 
The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [Guava](https://github.com/google/guava). - [Guice.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Java/Guice.csv) - Extracted class comments from the Guice project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [Guice](https://github.com/google/guice). - [Hadoop.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Java/Hadoop.csv) - Extracted class comments from the Hadoop project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [Apache Hadoop](https://github.com/apache/hadoop) - [Spark.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Java/Spark.csv) - Extracted class comments from the Apache Spark project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [Apache Spark](https://github.com/apache/spark) - [Vaadin.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Java/Vaadin.csv) - Extracted class comments from the Vaadin project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [Vaadin](https://github.com/vaadin/framework) - [Parser_Details.md](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Java/Parser_Details.md) - Details of the parser used to parse class comments of Java [Projects](https://doi.org/10.5281/zenodo.4311839) - #### Smalltalk/ Each file contains all the extracted class comments from one project. We have a total of seven Pharo projects. We chose a sample of 350 comments from all these files for our experiment. - [GToolkit.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Pharo/GToolkit.csv) - Extracted class comments from the GToolkit project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. - [Moose.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Pharo/Moose.csv) - Extracted class comments from the Moose project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. - [PetitParser.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Pharo/PetitParser.csv) - Extracted class comments from the PetitParser project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. - [Pillar.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Pharo/Pillar.csv) - Extracted class comments from the Pillar project. 
The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. - [PolyMath.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Pharo/PolyMath.csv) - Extracted class comments from the PolyMath project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. - [Roassal2.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Pharo/Roassal2.csv) - Extracted class comments from the Roassal2 project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. - [Seaside.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Pharo/Seaside.csv) - Extracted class comments from the Seaside project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. - [Parser_Details.md](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Pharo/Parser_Details.md) - Details of the parser used to parse class comments of Pharo [Projects](https://doi.org/10.5281/zenodo.4311839) - #### Python/ Each file contains all the extracted class comments from one project. We have a total of seven Python projects. We chose a sample of 350 comments from all these files for our experiment. - [Django.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Python/Django.csv) - Extracted class comments from the Django project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [Django](https://github.com/django) - [IPython.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Python/IPython.csv) - Extracted class comments from the IPython project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [IPython](https://github.com/ipython/ipython) - [Mailpile.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Python/Mailpile.csv) - Extracted class comments from the Mailpile project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [Mailpile](https://github.com/mailpile/Mailpile) - [Pandas.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Python/Pandas.csv) - Extracted class comments from the Pandas project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [pandas](https://github.com/pandas-dev/pandas) - [Pipenv.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Python/Pipenv.csv) - Extracted class comments from the Pipenv project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. 
More detail about the project is available on GitHub [Pipenv](https://github.com/pypa/pipenv) - [Pytorch.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Python/Pytorch.csv) - Extracted class comments from the PyTorch project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [PyTorch](https://github.com/pytorch/pytorch) - [Requests.csv](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Python/Requests.csv) - Extracted class comments from the Requests project. The version of the project referred to extract class comments is available as [Raw Dataset](https://doi.org/10.5281/zenodo.4311839) on Zenodo. More detail about the project is available on GitHub [Requests](https://github.com/psf/requests/) - [Parser_Details.md](https://github.com/poojaruhal/RP-class-comment-classification/tree/main/Dataset/RQ1/Python/Parser_Details.md) - Details of the parser used to parse class comments of Python [Projects](https://doi.org/10.5281/zenodo.4311839) ### Annotations #### Annotation process Four evaluators (all authors of this paper (https://doi.org/10.1016/j.jss.2021.111047)), each having at least four years of programming experience, participated in the annotation process. We partitioned Java, Python, and Smalltalk comments equally among all evaluators based on the distribution of the language's dataset to ensure the inclusion of comments from all projects and diversified lengths. Each classification is reviewed by three evaluators. The details are given in the paper [Rani et al., JSS, 2021](https://doi.org/10.1016/j.jss.2021.111047) #### Who are the annotators? [Rani et al., JSS, 2021](https://doi.org/10.1016/j.jss.2021.111047) ### Personal and Sensitive Information Author information embedded in the text ## Additional Information ### Dataset Curators [Pooja Rani, Ivan, Manuel] ### Licensing Information [license: cc-by-nc-sa-4.0] ### Citation Information ``` @article{RANI2021111047, title = {How to identify class comment types? A multi-language approach for class comment classification}, journal = {Journal of Systems and Software}, volume = {181}, pages = {111047}, year = {2021}, issn = {0164-1212}, doi = {https://doi.org/10.1016/j.jss.2021.111047}, url = {https://www.sciencedirect.com/science/article/pii/S0164121221001448}, author = {Pooja Rani and Sebastiano Panichella and Manuel Leuenberger and Andrea {Di Sorbo} and Oscar Nierstrasz}, keywords = {Natural language processing technique, Code comment analysis, Software documentation} } ```
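To make the 10-fold setup above concrete, a minimal sketch with pandas and scikit-learn (the CSV path is a hypothetical example taken from the repository layout under Source Data, and the exact fold protocol in the paper may differ, e.g. by stratifying on the label):

```python
import pandas as pd
from sklearn.model_selection import KFold

# Hypothetical path: any per-project CSV from the repository layout listed above.
df = pd.read_csv("Dataset/RQ1/Java/Eclipse.csv")

kf = KFold(n_splits=10, shuffle=True, random_state=42)
for fold, (train_idx, test_idx) in enumerate(kf.split(df)):
    train, test = df.iloc[train_idx], df.iloc[test_idx]
    print(f"fold {fold}: {len(train)} train / {len(test)} test comments")
```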
poojaruhal/Code-comment-classification
[ "task_categories:text-classification", "task_ids:intent-classification", "task_ids:multi-label-classification", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:cc-by-nc-sa-4.0", "'source code comments'", "'java class comments'", "'python class comments'", "'\nsmalltalk class comments'", "region:us" ]
2022-09-05T20:25:33+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["intent-classification", "multi-label-classification"], "pretty_name": "Code-comment-classification\n", "tags": ["'source code comments'", "'java class comments'", "'python class comments'", "'\nsmalltalk class comments'"]}
2022-10-16T10:11:46+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-intent-classification #task_ids-multi-label-classification #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-nc-sa-4.0 #'source code comments' #'java class comments' #'python class comments' #' smalltalk class comments' #region-us
# Dataset Card for Code Comment Classification ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: URL - Point of Contact: URL ### Dataset Summary The dataset contains class comments extracted from various big and diverse open-source projects of three programming languages Java, Smalltalk, and Python. ### Supported Tasks and Leaderboards Single-label text classification and Multi-label text classification ### Languages Java, Python, Smalltalk ## Dataset Structure ### Data Instances ### Data Fields class: name of the class with the language extension. comment: class comment of the class. categories: the category that the sentence is classified into. It indicates a particular type of information. ### Data Splits 10-fold cross validation ## Dataset Creation ### Curation Rationale To identify the information embedded in the class comments across various projects and programming languages. ### Source Data #### Initial Data Collection and Normalization It contains the dataset extracted from various open-source projects of three programming languages Java, Smalltalk, and Python. - #### Java Each file contains all the extracted class comments from one project. We have a total of six Java projects. We chose a sample of 350 comments from all these files for our experiment. - URL - Extracted class comments from the Eclipse project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. More detail about the project is available on GitHub Eclipse. - URL - Extracted class comments from the Guava project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. More detail about the project is available on GitHub Guava. - URL - Extracted class comments from the Guice project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. More detail about the project is available on GitHub Guice. - URL - Extracted class comments from the Hadoop project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. More detail about the project is available on GitHub Apache Hadoop - URL - Extracted class comments from the Apache Spark project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. More detail about the project is available on GitHub Apache Spark - URL - Extracted class comments from the Vaadin project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. More detail about the project is available on GitHub Vaadin - Parser_Details.md - Details of the parser used to parse class comments of Java Projects - #### Smalltalk/ Each file contains all the extracted class comments from one project. We have a total of seven Pharo projects. We chose a sample of 350 comments from all these files for our experiment. - URL - Extracted class comments from the GToolkit project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. - URL - Extracted class comments from the Moose project. 
The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. - URL - Extracted class comments from the PetitParser project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. - URL - Extracted class comments from the Pillar project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. - URL - Extracted class comments from the PolyMath project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. - URL - Extracted class comments from the Roassal2 project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. - URL - Extracted class comments from the Seaside project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. - Parser_Details.md - Details of the parser used to parse class comments of Pharo Projects - #### Python/ Each file contains all the extracted class comments from one project. We have a total of seven Python projects. We chose a sample of 350 comments from all these files for our experiment. - URL - Extracted class comments from the Django project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. More detail about the project is available on GitHub Django - URL - Extracted class comments from the IPython project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. More detail about the project is available on GitHub IPython - URL - Extracted class comments from the Mailpile project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. More detail about the project is available on GitHub Mailpile - URL - Extracted class comments from the Pandas project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. More detail about the project is available on GitHub pandas - URL - Extracted class comments from the Pipenv project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. More detail about the project is available on GitHub Pipenv - URL - Extracted class comments from the PyTorch project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. More detail about the project is available on GitHub PyTorch - URL - Extracted class comments from the Requests project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. More detail about the project is available on GitHub Requests - Parser_Details.md - Details of the parser used to parse class comments of Python Projects ### Annotations #### Annotation process Four evaluators (all authors of this paper (URL each having at least four years of programming experience, participated in the annotation process. We partitioned Java, Python, and Smalltalk comments equally among all evaluators based on the distribution of the language's dataset to ensure the inclusion of comments from all projects and diversified lengths. Each classification is reviewed by three evaluators. The details are given in the paper Rani et al., JSS, 2021 #### Who are the annotators? 
Rani et al., JSS, 2021 ### Personal and Sensitive Information Author information embedded in the text ## Additional Information ### Dataset Curators [Pooja Rani, Ivan, Manuel] ### Licensing Information [license: cc-by-nc-sa-4.0]
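As an illustration of the single-label setup described in this card, the sketch below runs the 10-fold cross-validation from the Data Splits section with a simple TF-IDF baseline. The file name `class_comments.csv` is a hypothetical export (the card does not fix a file layout); the `comment` and `categories` columns follow the Data Fields section.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import StratifiedKFold

# Hypothetical flat export of the dataset; columns follow the Data Fields
# section of this card: class, comment, categories.
df = pd.read_csv("class_comments.csv")
X, y = df["comment"], df["categories"]

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
scores = []
for train_idx, test_idx in skf.split(X, y):
    # Fit the vectorizer on the training fold only, to avoid leakage.
    vec = TfidfVectorizer(min_df=2)
    X_train = vec.fit_transform(X.iloc[train_idx])
    X_test = vec.transform(X.iloc[test_idx])
    clf = LogisticRegression(max_iter=1000).fit(X_train, y.iloc[train_idx])
    scores.append(f1_score(y.iloc[test_idx], clf.predict(X_test), average="weighted"))

print(f"Mean weighted F1 over 10 folds: {sum(scores) / len(scores):.3f}")
```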
[ "# Dataset Card for Code Comment Classification", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Point of Contact: URL", "### Dataset Summary\nThe dataset contains class comments extracted from various big and diverse open-source projects of three programming languages Java, Smalltalk, and Python.", "### Supported Tasks and Leaderboards\n\nSingle-label text classification and Multi-label text classification", "### Languages\n\nJava, Python, Smalltalk", "## Dataset Structure", "### Data Instances", "### Data Fields\n\nclass: name of the class with the language extension.\n\ncomment: class comment of the class\n\ncategories: a category that sentence is classified to. It indicated a particular type of information.", "### Data Splits\n\n10-fold cross validation", "## Dataset Creation", "### Curation Rationale\n\nTo identify the infomation embedded in the class comments across various projects and programming languages.", "### Source Data", "#### Initial Data Collection and Normalization\n\nIt contains the dataset extracted from various open-source projects of three programming languages Java, Smalltalk, and Python.\n- #### Java \n Each file contains all the extracted class comments from one project. We have a total of six java projects. We chose a sample of 350 comments from all these files for our experiment.\n - URL - Extracted class comments from the Eclipse project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. More detail about the project is available on GitHub Eclipse.\n \n - URL - Extracted class comments from the Guava project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. More detail about the project is available on GitHub Guava.\n \n - URL - Extracted class comments from the Guice project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. More detail about the project is available on GitHub Guice.\n \n - URL - Extracted class comments from the Hadoop project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. More detail about the project is available on GitHub Apache Hadoop\n \n - URL - Extracted class comments from the Apache Spark project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. More detail about the project is available on GitHub Apache Spark\n \n - URL - Extracted class comments from the Vaadin project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. More detail about the project is available on GitHub Vaadin\n \n - Parser_Details.md - Details of the parser used to parse class comments of Java Projects\n\n- #### Smalltalk/\n Each file contains all the extracted class comments from one project. We have a total of seven Pharo projects. We chose a sample of 350 comments from all these files for our experiment.\n - URL - Extracted class comments from the GToolkit project. 
The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. \n \n - URL - Extracted class comments from the Moose project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. \n \n - URL - Extracted class comments from the PetitParser project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo.\n \n - URL - Extracted class comments from the Pillar project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo.\n \n - URL - Extracted class comments from the PolyMath project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo.\n \n - URL -Extracted class comments from the Roassal2 project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo.\n \n - URL - Extracted class comments from the Seaside project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo.\n \n - Parser_Details.md - Details of the parser used to parse class comments of Pharo Projects\n\n- #### Python/\n Each file contains all the extracted class comments from one project. We have a total of seven Python projects. We chose a sample of 350 comments from all these files for our experiment.\n - URL - Extracted class comments from the Django project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. More detail about the project is available on GitHub Django\n \n - URL - Extracted class comments from the Ipython project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. More detail about the project is available on GitHubIPython\n \n - URL - Extracted class comments from the Mailpile project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. More detail about the project is available on GitHub Mailpile\n \n - URL - Extracted class comments from the Pandas project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. More detail about the project is available on GitHub pandas\n \n - URL - Extracted class comments from the Pipenv project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. More detail about the project is available on GitHub Pipenv\n \n - URL - Extracted class comments from the Pytorch project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. More detail about the project is available on GitHub PyTorch\n \n - URL - Extracted class comments from the Requests project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. More detail about the project is available on GitHub Requests\n \n - Parser_Details.md - Details of the parser used to parse class comments of Python Projects", "### Annotations", "#### Annotation process\nFour evaluators (all authors of this paper (URL each having at least four years of programming experience, participated in the annonation process.\nWe partitioned Java, Python, and Smalltalk comments equally among all evaluators based on the distribution of the language's dataset to ensure the inclusion of comments from all projects and diversified lengths. Each classification is reviewed by three evaluators. 
\nThe details are given in the paper Rani et al., JSS, 2021", "#### Who are the annotators?\n\nRani et al., JSS, 2021", "### Personal and Sensitive Information\n\nAuthor information embedded in the text", "## Additional Information", "### Dataset Curators\n\n[Pooja Rani, Ivan, Manuel]", "### Licensing Information\n\n[license: cc-by-nc-sa-4.0]" ]
[ "TAGS\n#task_categories-text-classification #task_ids-intent-classification #task_ids-multi-label-classification #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-nc-sa-4.0 #'source code comments' #'java class comments' #'python class comments' #'\nsmalltalk class comments' #region-us \n", "# Dataset Card for Code Comment Classification", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Point of Contact: URL", "### Dataset Summary\nThe dataset contains class comments extracted from various big and diverse open-source projects of three programming languages Java, Smalltalk, and Python.", "### Supported Tasks and Leaderboards\n\nSingle-label text classification and Multi-label text classification", "### Languages\n\nJava, Python, Smalltalk", "## Dataset Structure", "### Data Instances", "### Data Fields\n\nclass: name of the class with the language extension.\n\ncomment: class comment of the class\n\ncategories: a category that sentence is classified to. It indicated a particular type of information.", "### Data Splits\n\n10-fold cross validation", "## Dataset Creation", "### Curation Rationale\n\nTo identify the infomation embedded in the class comments across various projects and programming languages.", "### Source Data", "#### Initial Data Collection and Normalization\n\nIt contains the dataset extracted from various open-source projects of three programming languages Java, Smalltalk, and Python.\n- #### Java \n Each file contains all the extracted class comments from one project. We have a total of six java projects. We chose a sample of 350 comments from all these files for our experiment.\n - URL - Extracted class comments from the Eclipse project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. More detail about the project is available on GitHub Eclipse.\n \n - URL - Extracted class comments from the Guava project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. More detail about the project is available on GitHub Guava.\n \n - URL - Extracted class comments from the Guice project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. More detail about the project is available on GitHub Guice.\n \n - URL - Extracted class comments from the Hadoop project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. More detail about the project is available on GitHub Apache Hadoop\n \n - URL - Extracted class comments from the Apache Spark project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. More detail about the project is available on GitHub Apache Spark\n \n - URL - Extracted class comments from the Vaadin project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. 
More detail about the project is available on GitHub Vaadin\n \n - Parser_Details.md - Details of the parser used to parse class comments of Java Projects\n\n- #### Smalltalk/\n Each file contains all the extracted class comments from one project. We have a total of seven Pharo projects. We chose a sample of 350 comments from all these files for our experiment.\n - URL - Extracted class comments from the GToolkit project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. \n \n - URL - Extracted class comments from the Moose project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. \n \n - URL - Extracted class comments from the PetitParser project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo.\n \n - URL - Extracted class comments from the Pillar project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo.\n \n - URL - Extracted class comments from the PolyMath project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo.\n \n - URL -Extracted class comments from the Roassal2 project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo.\n \n - URL - Extracted class comments from the Seaside project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo.\n \n - Parser_Details.md - Details of the parser used to parse class comments of Pharo Projects\n\n- #### Python/\n Each file contains all the extracted class comments from one project. We have a total of seven Python projects. We chose a sample of 350 comments from all these files for our experiment.\n - URL - Extracted class comments from the Django project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. More detail about the project is available on GitHub Django\n \n - URL - Extracted class comments from the Ipython project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. More detail about the project is available on GitHubIPython\n \n - URL - Extracted class comments from the Mailpile project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. More detail about the project is available on GitHub Mailpile\n \n - URL - Extracted class comments from the Pandas project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. More detail about the project is available on GitHub pandas\n \n - URL - Extracted class comments from the Pipenv project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. More detail about the project is available on GitHub Pipenv\n \n - URL - Extracted class comments from the Pytorch project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. More detail about the project is available on GitHub PyTorch\n \n - URL - Extracted class comments from the Requests project. The version of the project referred to extract class comments is available as Raw Dataset on Zenodo. 
More detail about the project is available on GitHub Requests\n \n - Parser_Details.md - Details of the parser used to parse class comments of Python Projects", "### Annotations", "#### Annotation process\nFour evaluators (all authors of this paper (URL each having at least four years of programming experience, participated in the annonation process.\nWe partitioned Java, Python, and Smalltalk comments equally among all evaluators based on the distribution of the language's dataset to ensure the inclusion of comments from all projects and diversified lengths. Each classification is reviewed by three evaluators. \nThe details are given in the paper Rani et al., JSS, 2021", "#### Who are the annotators?\n\nRani et al., JSS, 2021", "### Personal and Sensitive Information\n\nAuthor information embedded in the text", "## Additional Information", "### Dataset Curators\n\n[Pooja Rani, Ivan, Manuel]", "### Licensing Information\n\n[license: cc-by-nc-sa-4.0]" ]
dbfb6932cd47473876f8869f8fae932cc9099edb
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-base-16384-book-summary * Dataset: big_patent * Config: y * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
autoevaluate/autoeval-staging-eval-big_patent-y-7d0862-15806176
[ "autotrain", "evaluation", "region:us" ]
2022-09-05T22:51:53+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["big_patent"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-book-summary", "metrics": [], "dataset_name": "big_patent", "dataset_config": "y", "dataset_split": "test", "col_mapping": {"text": "description", "target": "abstract"}}}
2022-09-07T02:32:35+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-base-16384-book-summary * Dataset: big_patent * Config: y * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @pszemraj for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-book-summary\n* Dataset: big_patent\n* Config: y\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-book-summary\n* Dataset: big_patent\n* Config: y\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
214a9794ff850e1c35c9d22c58752e1ee0cd10df
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2 * Dataset: big_patent * Config: y * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
autoevaluate/autoeval-staging-eval-big_patent-y-7d0862-15806177
[ "autotrain", "evaluation", "region:us" ]
2022-09-05T22:51:57+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["big_patent"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2", "metrics": [], "dataset_name": "big_patent", "dataset_config": "y", "dataset_split": "test", "col_mapping": {"text": "description", "target": "abstract"}}}
2022-09-06T09:16:50+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2 * Dataset: big_patent * Config: y * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @pszemraj for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2\n* Dataset: big_patent\n* Config: y\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2\n* Dataset: big_patent\n* Config: y\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
f4f99ef293bfa13ce34d2cf7ece919d9776ff0ca
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/led-base-book-summary * Dataset: big_patent * Config: y * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model.
autoevaluate/autoeval-staging-eval-big_patent-y-7d0862-15806178
[ "autotrain", "evaluation", "region:us" ]
2022-09-05T22:52:02+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["big_patent"], "eval_info": {"task": "summarization", "model": "pszemraj/led-base-book-summary", "metrics": [], "dataset_name": "big_patent", "dataset_config": "y", "dataset_split": "test", "col_mapping": {"text": "description", "target": "abstract"}}}
2022-09-06T15:50:20+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: pszemraj/led-base-book-summary * Dataset: big_patent * Config: y * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @pszemraj for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/led-base-book-summary\n* Dataset: big_patent\n* Config: y\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/led-base-book-summary\n* Dataset: big_patent\n* Config: y\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @pszemraj for evaluating this model." ]
f1e5518e824f5eaddfe81377a58ea18c329abb55
# Dataset Card for BIOSSES ## Dataset Description - **Homepage:** https://tabilab.cmpe.boun.edu.tr/BIOSSES/DataSet.html - **Pubmed:** True - **Public:** True - **Tasks:** STS BIOSSES computes similarity of biomedical sentences by utilizing WordNet as the general domain ontology and UMLS as the biomedical domain-specific ontology. The original paper outlines the approaches with respect to using annotator scores as the gold standard. The source view returns each annotator's score individually, whereas the BigBio view returns the mean of the annotator scores. ## Citation Information ``` @article{souganciouglu2017biosses, title={BIOSSES: a semantic sentence similarity estimation system for the biomedical domain}, author={Soğancıoğlu, Gizem and Öztürk, Hakime and Özgür, Arzucan}, journal={Bioinformatics}, volume={33}, number={14}, pages={i49--i58}, year={2017}, publisher={Oxford University Press} } ```
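A minimal loading sketch for the two views described above. The config names follow the usual BigBio convention (`<name>_source` for per-annotator scores, `<name>_bigbio_pairs` for the consolidated mean) and, like the `train` split, are assumptions rather than verified loader arguments:

```python
from datasets import load_dataset

# Assumed config names per the BigBio convention; adjust if the loader differs.
source = load_dataset("bigbio/biosses", name="biosses_source", split="train")
pairs = load_dataset("bigbio/biosses", name="biosses_bigbio_pairs", split="train")

print(source[0])  # one sentence pair with each annotator's score
print(pairs[0])   # the same pair with the mean similarity score
```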
bigbio/biosses
[ "multilinguality:monolingual", "language:en", "license:gpl-3.0", "region:us" ]
2022-09-06T00:12:20+00:00
{"language": ["en"], "license": "gpl-3.0", "multilinguality": "monolingual", "pretty_name": "BIOSSES", "bigbio_language": ["English"], "bigbio_license_shortname": "GPL_3p0", "homepage": "https://tabilab.cmpe.boun.edu.tr/BIOSSES/DataSet.html", "bigbio_pubmed": false, "bigbio_public": true, "bigbio_tasks": ["SEMANTIC_SIMILARITY"]}
2022-12-22T15:32:58+00:00
[]
[ "en" ]
TAGS #multilinguality-monolingual #language-English #license-gpl-3.0 #region-us
# Dataset Card for BIOSSES ## Dataset Description - Homepage: URL - Pubmed: True - Public: True - Tasks: STS BIOSSES computes similarity of biomedical sentences by utilizing WordNet as the general domain ontology and UMLS as the biomedical domain-specific ontology. The original paper outlines the approaches with respect to using annotator scores as the gold standard. The source view returns each annotator's score individually, whereas the BigBio view returns the mean of the annotator scores.
[ "# Dataset Card for BIOSSES", "## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: STS\n\nBIOSSES computes similarity of biomedical sentences by utilizing WordNet as the general domain ontology and UMLS as the biomedical domain-specific ontology. The original paper outlines the approaches with respect to using annotator scores as the gold standard. The source view returns each annotator's score individually, whereas the BigBio view returns the mean of the annotator scores." ]
[ "TAGS\n#multilinguality-monolingual #language-English #license-gpl-3.0 #region-us \n", "# Dataset Card for BIOSSES", "## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: STS\n\nBIOSSES computes similarity of biomedical sentences by utilizing WordNet as the general domain ontology and UMLS as the biomedical domain-specific ontology. The original paper outlines the approaches with respect to using annotator scores as the gold standard. The source view returns each annotator's score individually, whereas the BigBio view returns the mean of the annotator scores." ]
d1cb85a2f99002f343fad318b7f3d9d1b308921f
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/pegasus-xsum * Dataset: samsum * Config: samsum * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model.
autoevaluate/autoeval-staging-eval-samsum-samsum-fbc19a-15816179
[ "autotrain", "evaluation", "region:us" ]
2022-09-06T01:39:58+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "google/pegasus-xsum", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "validation", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-09-06T01:43:18+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: google/pegasus-xsum * Dataset: samsum * Config: samsum * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @samuelallen123 for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/pegasus-xsum\n* Dataset: samsum\n* Config: samsum\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @samuelallen123 for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/pegasus-xsum\n* Dataset: samsum\n* Config: samsum\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @samuelallen123 for evaluating this model." ]
f716bbc8bd71337c4f04d64ba21af0a9043a76e3
# Dataset Card for UKP ASPECT ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage: https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/1998** - **Paper: https://aclanthology.org/P19-1054/** - **Leaderboard: n/a** - **Point of Contact: data\[at\]ukp.informatik.tu-darmstadt.de** - **(http://www.ukp.tu-darmstadt.de/)** ### Dataset Summary The UKP ASPECT Corpus includes 3,595 sentence pairs over 28 controversial topics. The sentences were crawled from a large web crawl and identified as arguments for a given topic using the ArgumenText system. The sampling and matching of the sentence pairs is described in the paper. Then, the argument similarity annotation was done via crowdsourcing. Each crowd worker could choose from four annotation options (the exact guidelines are provided in the Appendix of the paper). If you are having problems with downloading the dataset from the huggingface hub, please download it from [here](https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/1998). ### Supported Tasks and Leaderboards This dataset supports the following tasks: * Sentence pair classification * Topic classification ### Languages English ## Dataset Structure ### Data Instances Each instance consists of a topic, a pair of sentences, and an argument similarity label. ``` {"3d printing";"This could greatly increase the quality of life of those currently living in less than ideal conditions.";"The advent and spread of new technologies, like that of 3D printing can transform our lives in many ways.";"DTORCD"} ``` ### Data Fields * topic: the topic keywords used to retrieve the documents * sentence_1: the first sentence of the pair * sentence_2: the second sentence of the pair * label: the consolidated crowdsourced gold-standard annotation of the sentence pair (DTORCD, NS, SS, HS) * Different Topic/Can’t decide (DTORCD): Either one or both of the sentences belong to a topic different than the given one, or you can’t understand one or both sentences. If you choose this option, you need to very briefly explain why you chose it (e.g.“The second sentence is not grammatical”, “The first sentence is from a different topic” etc.). * No Similarity (NS): The two arguments belong to the same topic, but they don’t show any similarity, i.e. they speak about completely different aspects of the topic * Some Similarity (SS): The two arguments belong to the same topic, showing semantic similarity on a few aspects, but the central message is rather different, or one argument is way less specific than the other * High Similarity (HS): The two arguments belong to the same topic, and they speak about the same aspect, e.g. using different words ### Data Splits The dataset currently does not contain standard data splits. 
## Dataset Creation ### Curation Rationale This dataset contains sentence pairs annotated with argument similarity labels that can be used to evaluate argument clustering. ### Source Data #### Initial Data Collection and Normalization The UKP ASPECT corpus consists of sentences which have been identified as arguments for given topics using the ArgumenText system (Stab et al., 2018). The ArgumenText system expects as input an arbitrary topic (query) and searches a large web crawl for relevant documents. Finally, it classifies all sentences contained in the most relevant documents for a given query into pro, con or non-arguments (with regard to the given topic). We picked 28 topics related to currently discussed issues from technology and society. To balance the selection of argument pairs with regard to their similarity, we applied a weak supervision approach. For each of our 28 topics, we applied a sampling strategy that picks two pro or con argument sentences at random, calculates their similarity using the system by Misra et al. (2016), and keeps pairs with a probability aiming to balance diversity across the entire similarity scale. This was repeated until we reached 3,595 argument pairs, about 130 pairs for each topic. #### Who are the source language producers? Unidentified contributors to the world wide web. ### Annotations #### Annotation process The argument pairs were annotated on a range of three degrees of similarity (no, some, and high similarity) with the help of crowd workers on the Amazon Mechanical Turk platform. To account for unrelated pairs due to the sampling process, crowd workers could choose a fourth option. We collected seven assignments per pair and used Multi-Annotator Competence Estimation (MACE) with a threshold of 1.0 (Hovy et al., 2013) to consolidate votes into a gold standard. #### Who are the annotators? Crowd workers on Amazon Mechanical Turk ### Personal and Sensitive Information This dataset is fully anonymized. ## Additional Information You can download the data via: ``` from datasets import load_dataset dataset = load_dataset("UKPLab/UKP_ASPECT") ``` Please find more information about the code and how the data was collected in the [paper](https://aclanthology.org/P19-1054/). ### Dataset Curators Curation is managed by our [data manager](https://www.informatik.tu-darmstadt.de/ukp/research_ukp/ukp_research_data_and_software/ukp_data_and_software.en.jsp) at UKP. ### Licensing Information [CC-by-NC 3.0](https://creativecommons.org/licenses/by-nc/3.0/) ### Citation Information Please cite this data using: ``` @inproceedings{reimers2019classification, title={Classification and Clustering of Arguments with Contextualized Word Embeddings}, author={Reimers, Nils and Schiller, Benjamin and Beck, Tilman and Daxenberger, Johannes and Stab, Christian and Gurevych, Iryna}, booktitle={Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics}, pages={567--578}, year={2019} } ``` ### Contributions Thanks to [@buenalaune](https://github.com/buenalaune) for adding this dataset. ## Tags annotations_creators: - crowdsourced language: - en language_creators: - found license: - cc-by-nc-3.0 multilinguality: - monolingual pretty_name: UKP ASPECT Corpus size_categories: - 1K<n<10K source_datasets: - original tags: - argument pair - argument similarity task_categories: - text-classification task_ids: - topic-classification - multi-input-text-classification - semantic-similarity-classification
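Building on the loading snippet in the Additional Information section, a quick sanity check of the label and topic distributions. The field names come from the Data Fields section; the single `train` split is an assumption, since the corpus ships without standard splits:

```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("UKPLab/UKP_ASPECT", split="train")  # split name assumed

print(Counter(ds["label"]))                 # counts of DTORCD / NS / SS / HS
print(Counter(ds["topic"]).most_common(5))  # largest topics (~130 pairs each)
```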
UKPLab/UKP_ASPECT
[ "license:cc-by-nc-3.0", "region:us" ]
2022-09-06T07:30:15+00:00
{"license": "cc-by-nc-3.0"}
2023-06-19T07:18:13+00:00
[]
[]
TAGS #license-cc-by-nc-3.0 #region-us
# Dataset Card for UKP ASPECT ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Paper: URL - Leaderboard: n/a - Point of Contact: data\[at\]URL - (URL ### Dataset Summary The UKP ASPECT Corpus includes 3,595 sentence pairs over 28 controversial topics. The sentences were crawled from a large web crawl and identified as arguments for a given topic using the ArgumenText system. The sampling and matching of the sentence pairs is described in the paper. Then, the argument similarity annotation was done via crowdsourcing. Each crowd worker could choose from four annotation options (the exact guidelines are provided in the Appendix of the paper). If you are having problems with downloading the dataset from the huggingface hub, please download it from here. ### Supported Tasks and Leaderboards This dataset supports the following tasks: * Sentence pair classification * Topic classification ### Languages English ## Dataset Structure ### Data Instances Each instance consists of a topic, a pair of sentences, and an argument similarity label. ### Data Fields * topic: the topic keywords used to retrieve the documents * sentence_1: the first sentence of the pair * sentence_2: the second sentence of the pair * label: the consolidated crowdsourced gold-standard annotation of the sentence pair (DTORCD, NS, SS, HS) * Different Topic/Can’t decide (DTORCD): Either one or both of the sentences belong to a topic different than the given one, or you can’t understand one or both sentences. If you choose this option, you need to very briefly explain why you chose it (e.g.“The second sentence is not grammatical”, “The first sentence is from a different topic” etc.). * No Similarity (NS): The two arguments belong to the same topic, but they don’t show any similarity, i.e. they speak about completely different aspects of the topic * Some Similarity (SS): The two arguments belong to the same topic, showing semantic similarity on a few aspects, but the central message is rather different, or one argument is way less specific than the other * High Similarity (HS): The two arguments belong to the same topic, and they speak about the same aspect, e.g. using different words ### Data Splits The dataset currently does not contain standard data splits. ## Dataset Creation ### Curation Rationale This dataset contains sentence pairs annotated with argument similarity labels that can be used to evaluate argument clustering. ### Source Data #### Initial Data Collection and Normalization The UKP ASPECT corpus consists of sentences which have been identified as arguments for given topics using the ArgumenText system (Stab et al., 2018). The ArgumenText system expects as input an arbitrary topic (query) and searches a large web crawl for relevant documents. Finally, it classifies all sentences contained in the most relevant documents for a given query into pro, con or non-arguments (with regard to the given topic). We picked 28 topics related to currently discussed issues from technology and society. To balance the selection of argument pairs with regard to their similarity, we applied a weak supervision approach. 
For each of our 28 topics, we applied a sampling strategy that picks two pro or con argument sentences at random, calculates their similarity using the system by Misra et al. (2016), and keeps pairs with a probability aiming to balance diversity across the entire similarity scale. This was repeated until we reached 3,595 argument pairs, about 130 pairs for each topic. #### Who are the source language producers? Unidentified contributors to the world wide web. ### Annotations #### Annotation process The argument pairs were annotated on a range of three degrees of similarity (no, some, and high similarity) with the help of crowd workers on the Amazon Mechanical Turk platform. To account for unrelated pairs due to the sampling process, crowd workers could choose a fourth option. We collected seven assignments per pair and used Multi-Annotator Competence Estimation (MACE) with a threshold of 1.0 (Hovy et al., 2013) to consolidate votes into a gold standard. #### Who are the annotators? Crowd workers on Amazon Mechanical Turk ### Personal and Sensitive Information This dataset is fully anonymized. ## Additional Information You can download the data via: Please find more information about the code and how the data was collected in the paper. ### Dataset Curators Curation is managed by our data manager at UKP. ### Licensing Information CC-by-NC 3.0 Please cite this data using: ### Contributions Thanks to @buenalaune for adding this dataset. ## Tags annotations_creators: - crowdsourced language: - en language_creators: - found license: - cc-by-nc-3.0 multilinguality: - monolingual pretty_name: UKP ASPECT Corpus size_categories: - 1K<n<10K source_datasets: - original tags: - argument pair - argument similarity task_categories: - text-classification task_ids: - topic-classification - multi-input-text-classification - semantic-similarity-classification
[ "# Dataset Card for UKP ASPECT", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Paper: URL\n- Leaderboard: n/a\n- Point of Contact: data\\[at\\]URL\n- (URL", "### Dataset Summary\n\nThe UKP ASPECT Corpus includes 3,595 sentence pairs over 28 controversial topics. The sentences were crawled from a large web crawl and identified as arguments for a given topic using the ArgumenText system. The sampling and matching of the sentence pairs is described in the paper. Then, the argument similarity annotation was done via crowdsourcing. Each crowd worker could choose from four annotation options (the exact guidelines are provided in the Appendix of the paper).\n\nIf you are having problems with downloading the dataset from the huggingface hub, please download it from here.", "### Supported Tasks and Leaderboards\n\nThis dataset supports the following tasks:\n\n* Sentence pair classification\n* Topic classification", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach instance consists of a topic, a pair of sentences, and an argument similarity label.", "### Data Fields\n\n* topic: the topic keywords used to retrieve the documents\n* sentence_1: the first sentence of the pair\n* sentence_2: the second sentence of the pair\n* label: the consolidated crowdsourced gold-standard annotation of the sentence pair (DTORCD, NS, SS, HS)\n * Different Topic/Can’t decide (DTORCD): Either one or \n both of the sentences belong to a topic different than \n the given one, or you can’t understand one or both \n sentences. If you choose this option, you need to very \n briefly explain, why you chose it (e.g.“The second \n sentence is not grammatical”, “The first sentence is\n from a different topic” etc.). \n * No Similarity (NS): The two arguments belong to the \n same topic, but they don’t show any similarity, i.e. \n they speak aboutcompletely different aspects of the topic\n * Some Similarity (SS): The two arguments belong to the \n same topic, showing semantic similarity on a few aspects, \n but thecentral message is rather different, or one \n argument is way less specific than the other\n * High Similarity (HS): The two arguments belong to the \n same topic, and they speak about the same aspect, e.g. \n using different words", "### Data Splits\n\nThe dataset currently does not contain standard data splits.", "## Dataset Creation", "### Curation Rationale\n\nThis dataset contains sentence pairs annotated with argument similarity labels that can be used to evaluate argument clustering.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe UKP ASPECT corpus consists of sentences which have been identified as arguments for given topics using the ArgumenText\nsystem (Stab et al., 2018). 
The ArgumenText\nsystem expects as input an arbitrary topic (query)\nand searches a large web crawl for relevant documents.\nFinally, it classifies all sentences contained\nin the most relevant documents for a given query\ninto pro, con or non-arguments (with regard to the\ngiven topic).\n\nWe picked 28 topics related to currently discussed issues from technology and society. To balance the selection of argument pairs with regard to their similarity, we applied a weak supervision\napproach. For each of our 28 topics, we applied\na sampling strategy that picks randomly two pro\nor con argument sentences at random, calculates\ntheir similarity using the system by Misra et al.\n(2016), and keeps pairs with a probability aiming to balance diversity across the entire similarity\nscale. This was repeated until we reached 3,595\narguments pairs, about 130 pairs for each topic.", "#### Who are the source language producers?\n\nUnidentified contributors to the world wide web.", "### Annotations", "#### Annotation process\n\nThe argument pairs were annotated on a range\nof three degrees of similarity (no, some, and high\nsimilarity) with the help of crowd workers on\nthe Amazon Mechanical Turk platform. To account for \nunrelated pairs due to the sampling process, \ncrowd workers could choose a fourth option. \nWe collected seven assignments per pair\nand used Multi-Annotator Competence Estimation\n(MACE) with a threshold of 1.0 (Hovy et al.,\n2013) to consolidate votes into a gold standard.", "#### Who are the annotators?\n\nCrowd workers on Amazon Mechanical Turk", "### Personal and Sensitive Information\n\nThis dataset is fully anonymized.", "## Additional Information\n\nYou can download the data via:\n\n \nPlease find more information about the code and how the data was collected in the paper.", "### Dataset Curators\n\nCuration is managed by our data manager at UKP.", "### Licensing Information\n\nCC-by-NC 3.0\n\n\n\nPlease cite this data using:", "### Contributions\n\nThanks to @buenalaune for adding this dataset.", "## Tags\n\nannotations_creators:\n- crowdsourced\n\nlanguage:\n- en\n\nlanguage_creators:\n- found\n\nlicense:\n- cc-by-nc-3.0\n\nmultilinguality:\n- monolingual\n\npretty_name: UKP ASPECT Corpus\n\nsize_categories:\n- 1K<n<10K\n\nsource_datasets:\n- original\n\ntags:\n- argument pair\n- argument similarity\n\ntask_categories:\n- text-classification\n\ntask_ids:\n- topic-classification\n- multi-input-text-classification\n- semantic-similarity-classification" ]
[ "TAGS\n#license-cc-by-nc-3.0 #region-us \n", "# Dataset Card for UKP ASPECT", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Paper: URL\n- Leaderboard: n/a\n- Point of Contact: data\\[at\\]URL\n- (URL", "### Dataset Summary\n\nThe UKP ASPECT Corpus includes 3,595 sentence pairs over 28 controversial topics. The sentences were crawled from a large web crawl and identified as arguments for a given topic using the ArgumenText system. The sampling and matching of the sentence pairs is described in the paper. Then, the argument similarity annotation was done via crowdsourcing. Each crowd worker could choose from four annotation options (the exact guidelines are provided in the Appendix of the paper).\n\nIf you are having problems with downloading the dataset from the huggingface hub, please download it from here.", "### Supported Tasks and Leaderboards\n\nThis dataset supports the following tasks:\n\n* Sentence pair classification\n* Topic classification", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach instance consists of a topic, a pair of sentences, and an argument similarity label.", "### Data Fields\n\n* topic: the topic keywords used to retrieve the documents\n* sentence_1: the first sentence of the pair\n* sentence_2: the second sentence of the pair\n* label: the consolidated crowdsourced gold-standard annotation of the sentence pair (DTORCD, NS, SS, HS)\n * Different Topic/Can’t decide (DTORCD): Either one or \n both of the sentences belong to a topic different than \n the given one, or you can’t understand one or both \n sentences. If you choose this option, you need to very \n briefly explain, why you chose it (e.g.“The second \n sentence is not grammatical”, “The first sentence is\n from a different topic” etc.). \n * No Similarity (NS): The two arguments belong to the \n same topic, but they don’t show any similarity, i.e. \n they speak aboutcompletely different aspects of the topic\n * Some Similarity (SS): The two arguments belong to the \n same topic, showing semantic similarity on a few aspects, \n but thecentral message is rather different, or one \n argument is way less specific than the other\n * High Similarity (HS): The two arguments belong to the \n same topic, and they speak about the same aspect, e.g. \n using different words", "### Data Splits\n\nThe dataset currently does not contain standard data splits.", "## Dataset Creation", "### Curation Rationale\n\nThis dataset contains sentence pairs annotated with argument similarity labels that can be used to evaluate argument clustering.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe UKP ASPECT corpus consists of sentences which have been identified as arguments for given topics using the ArgumenText\nsystem (Stab et al., 2018). 
The ArgumenText\nsystem expects as input an arbitrary topic (query)\nand searches a large web crawl for relevant documents.\nFinally, it classifies all sentences contained\nin the most relevant documents for a given query\ninto pro, con or non-arguments (with regard to the\ngiven topic).\n\nWe picked 28 topics related to currently discussed issues from technology and society. To balance the selection of argument pairs with regard to their similarity, we applied a weak supervision\napproach. For each of our 28 topics, we applied\na sampling strategy that picks randomly two pro\nor con argument sentences at random, calculates\ntheir similarity using the system by Misra et al.\n(2016), and keeps pairs with a probability aiming to balance diversity across the entire similarity\nscale. This was repeated until we reached 3,595\narguments pairs, about 130 pairs for each topic.", "#### Who are the source language producers?\n\nUnidentified contributors to the world wide web.", "### Annotations", "#### Annotation process\n\nThe argument pairs were annotated on a range\nof three degrees of similarity (no, some, and high\nsimilarity) with the help of crowd workers on\nthe Amazon Mechanical Turk platform. To account for \nunrelated pairs due to the sampling process, \ncrowd workers could choose a fourth option. \nWe collected seven assignments per pair\nand used Multi-Annotator Competence Estimation\n(MACE) with a threshold of 1.0 (Hovy et al.,\n2013) to consolidate votes into a gold standard.", "#### Who are the annotators?\n\nCrowd workers on Amazon Mechanical Turk", "### Personal and Sensitive Information\n\nThis dataset is fully anonymized.", "## Additional Information\n\nYou can download the data via:\n\n \nPlease find more information about the code and how the data was collected in the paper.", "### Dataset Curators\n\nCuration is managed by our data manager at UKP.", "### Licensing Information\n\nCC-by-NC 3.0\n\n\n\nPlease cite this data using:", "### Contributions\n\nThanks to @buenalaune for adding this dataset.", "## Tags\n\nannotations_creators:\n- crowdsourced\n\nlanguage:\n- en\n\nlanguage_creators:\n- found\n\nlicense:\n- cc-by-nc-3.0\n\nmultilinguality:\n- monolingual\n\npretty_name: UKP ASPECT Corpus\n\nsize_categories:\n- 1K<n<10K\n\nsource_datasets:\n- original\n\ntags:\n- argument pair\n- argument similarity\n\ntask_categories:\n- text-classification\n\ntask_ids:\n- topic-classification\n- multi-input-text-classification\n- semantic-similarity-classification" ]
c2bb89e72da89cf38680d5bb47fe689b0716bfc5
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: t5-small * Dataset: cnn_dailymail * Config: 3.0.0 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@Carmen](https://huggingface.co/Carmen) for evaluating this model.
autoevaluate/autoeval-staging-eval-cnn_dailymail-3.0.0-0b05dc-15886185
[ "autotrain", "evaluation", "region:us" ]
2022-09-06T09:39:07+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "t5-small", "metrics": [], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}}
2022-09-06T09:42:21+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: t5-small * Dataset: cnn_dailymail * Config: 3.0.0 * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @Carmen for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: t5-small\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Carmen for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: t5-small\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @Carmen for evaluating this model." ]
e0ec01c52f1ebc2be766493eca5f571b4e20474b
# Dataset Card for FaQuAD ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/liafacom/faquad - **Repository:** https://github.com/liafacom/faquad - **Paper:** https://ieeexplore.ieee.org/document/8923668/ <!-- - **Leaderboard:** --> - **Point of Contact:** Eraldo R. Fernandes <[email protected]> ### Dataset Summary Academic secretaries and faculty members of higher education institutions face a common problem: the abundance of questions sent by academics whose answers are found in available institutional documents. The official documents produced by Brazilian public universities are vast and dispersed, which discourages students from searching further for answers in such sources. In order to lessen this problem, we present FaQuAD: a novel machine reading comprehension dataset in the domain of Brazilian higher education institutions. FaQuAD follows the format of SQuAD (Stanford Question Answering Dataset) [Rajpurkar et al. 2016]. It comprises 900 questions about 249 reading passages (paragraphs), which were taken from 18 official documents of a computer science college from a Brazilian federal university and 21 Wikipedia articles related to the Brazilian higher education system. As far as we know, this is the first Portuguese reading comprehension dataset in this format. ### Supported Tasks and Leaderboards Extractive question answering ### Languages Portuguese ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits | name |train|validation| |---------|----:|----:| |faquad|837|63| ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
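A hedged loading sketch for this card: the `plain_text` config and the SQuAD-style column names are taken from the card's evaluation metadata rather than verified against the loader:

```python
from datasets import load_dataset

faquad = load_dataset("eraldoluis/faquad", "plain_text")
print(faquad)  # expected: train (837) and validation (63) examples

example = faquad["validation"][0]
print(example["question"])
print(example["context"][:200])
print(example["answers"])  # SQuAD-style: {"text": [...], "answer_start": [...]}
```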
eraldoluis/faquad
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:n<1K", "source_datasets:extended|wikipedia", "language:pt", "license:cc-by-4.0", "region:us" ]
2022-09-06T10:05:01+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["pt"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["extended|wikipedia"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa"], "pretty_name": "FaQuAD", "train-eval-index": [{"config": "plain_text", "task": "question-answering", "task_id": "extractive_question_answering", "splits": {"train_split": "train", "eval_split": "validation"}, "col_mapping": {"question": "question", "context": "context", "answers": {"text": "text", "answer_start": "answer_start"}}, "metrics": [{"type": "squad", "name": "SQuAD"}]}]}
2023-01-23T08:45:41+00:00
[]
[ "pt" ]
TAGS #task_categories-question-answering #task_ids-extractive-qa #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-n<1K #source_datasets-extended|wikipedia #language-Portuguese #license-cc-by-4.0 #region-us
Dataset Card for FaQuAD ======================= Table of Contents ----------------- * Table of Contents * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: URL * Point of Contact: Eraldo R. Fernandes [eraldoluis@URL](mailto:eraldoluis@URL) ### Dataset Summary Academic secretaries and faculty members of higher education institutions face a common problem: the abundance of questions sent by academics whose answers are found in available institutional documents. The official documents produced by Brazilian public universities are vast and dispersed, which discourages students from searching further for answers in such sources. In order to lessen this problem, we present FaQuAD: a novel machine reading comprehension dataset in the domain of Brazilian higher education institutions. FaQuAD follows the format of SQuAD (Stanford Question Answering Dataset) [Rajpurkar et al. 2016]. It comprises 900 questions about 249 reading passages (paragraphs), which were taken from 18 official documents of a computer science college from a Brazilian federal university and 21 Wikipedia articles related to the Brazilian higher education system. As far as we know, this is the first Portuguese reading comprehension dataset in this format. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances ### Data Fields ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @github-username for adding this dataset.
[ "### Dataset Summary\n\n\nAcademic secretaries and faculty members of higher education institutions face a common problem:\nthe abundance of questions sent by academics\nwhose answers are found in available institutional documents.\nThe official documents produced by Brazilian public universities are vast and disperse,\nwhich discourage students to further search for answers in such sources.\nIn order to lessen this problem, we present FaQuAD:\na novel machine reading comprehension dataset\nin the domain of Brazilian higher education institutions.\nFaQuAD follows the format of SQuAD (Stanford Question Answering Dataset) [Rajpurkar et al. 2016].\nIt comprises 900 questions about 249 reading passages (paragraphs),\nwhich were taken from 18 official documents of a computer science college\nfrom a Brazilian federal university\nand 21 Wikipedia articles related to Brazilian higher education system.\nAs far as we know, this is the first Portuguese reading comprehension dataset in this format.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields", "### Data Splits\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @github-username for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #task_ids-extractive-qa #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-n<1K #source_datasets-extended|wikipedia #language-Portuguese #license-cc-by-4.0 #region-us \n", "### Dataset Summary\n\n\nAcademic secretaries and faculty members of higher education institutions face a common problem:\nthe abundance of questions sent by academics\nwhose answers are found in available institutional documents.\nThe official documents produced by Brazilian public universities are vast and disperse,\nwhich discourage students to further search for answers in such sources.\nIn order to lessen this problem, we present FaQuAD:\na novel machine reading comprehension dataset\nin the domain of Brazilian higher education institutions.\nFaQuAD follows the format of SQuAD (Stanford Question Answering Dataset) [Rajpurkar et al. 2016].\nIt comprises 900 questions about 249 reading passages (paragraphs),\nwhich were taken from 18 official documents of a computer science college\nfrom a Brazilian federal university\nand 21 Wikipedia articles related to Brazilian higher education system.\nAs far as we know, this is the first Portuguese reading comprehension dataset in this format.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields", "### Data Splits\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @github-username for adding this dataset." ]
1e0c6e8c8ff4fe9d22b72ba8abbc408df84eb265
# AutoTrain Dataset for project: emotion-detection ## Dataset Description This dataset has been automatically processed by AutoTrain for project emotion-detection. ### Languages The BCP-47 code for the dataset's language is en. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "feat_tweet_id": 1694457763, "target": 8, "text": "I am going to see how long I can do this for." }, { "feat_tweet_id": 1694627613, "target": 8, "text": "@anitabora yeah, right. What if our politicians start using uploading their pics, lots of inside stories will be out" } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "feat_tweet_id": "Value(dtype='int64', id=None)", "target": "ClassLabel(num_classes=13, names=['anger', 'boredom', 'empty', 'enthusiasm', 'fun', 'happiness', 'hate', 'love', 'neutral', 'relief', 'sadness', 'surprise', 'worry'], id=None)", "text": "Value(dtype='string', id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 31995 | | valid | 8005 |
rahulmallah/autotrain-data-emotion-detection
[ "task_categories:text-classification", "language:en", "region:us" ]
2022-09-06T12:04:07+00:00
{"language": ["en"], "task_categories": ["text-classification"]}
2022-09-06T12:13:37+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #language-English #region-us
AutoTrain Dataset for project: emotion-detection ================================================ Dataset Description ------------------- This dataset has been automatically processed by AutoTrain for project emotion-detection. ### Languages The BCP-47 code for the dataset's language is en. Dataset Structure ----------------- ### Data Instances A sample from this dataset looks as follows: ### Dataset Fields The dataset has the following fields (also called "features"): ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows:
[ "### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
[ "TAGS\n#task_categories-text-classification #language-English #region-us \n", "### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
c9bc2dc442b053e2f70f11cbcf6aa3ee01b54286
label_ids: - (0) contradiction - (2) entailment
gorkaartola/ZS-train_SDG_Descriptions_S1-sentence_S2-SDGtitle_Negative_Sample_Filter-Only_Title_and_Headline
[ "region:us" ]
2022-09-06T13:50:12+00:00
{}
2022-09-06T13:52:46+00:00
[]
[]
TAGS #region-us
label_ids: - (0) contradiction - (2) entailment
[]
[ "TAGS\n#region-us \n" ]
c0afa552316676917fa38974717285d6cb5f133d
git config --global credential.helper store
Riilax/Dali-2
[ "region:us" ]
2022-09-06T16:41:04+00:00
{}
2022-09-06T16:43:51+00:00
[]
[]
TAGS #region-us
git config --global URL store
[]
[ "TAGS\n#region-us \n" ]
f1fed66dfcbbc155f73431e9f2c9362fe2ace7d4
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Token Classification * Model: kamalkraj/bert-base-cased-ner-conll2003 * Dataset: conll2003 * Config: conll2003 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@akdeniz27](https://huggingface.co/akdeniz27) for evaluating this model.
autoevaluate/autoeval-staging-eval-conll2003-conll2003-0054c2-15936187
[ "autotrain", "evaluation", "region:us" ]
2022-09-06T16:51:31+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["conll2003"], "eval_info": {"task": "entity_extraction", "model": "kamalkraj/bert-base-cased-ner-conll2003", "metrics": [], "dataset_name": "conll2003", "dataset_config": "conll2003", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-09-06T16:53:00+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Token Classification * Model: kamalkraj/bert-base-cased-ner-conll2003 * Dataset: conll2003 * Config: conll2003 * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @akdeniz27 for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: kamalkraj/bert-base-cased-ner-conll2003\n* Dataset: conll2003\n* Config: conll2003\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @akdeniz27 for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: kamalkraj/bert-base-cased-ner-conll2003\n* Dataset: conll2003\n* Config: conll2003\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @akdeniz27 for evaluating this model." ]
209d2db4b4a2ac4b477a184c8d5231fd5d4c81fb
# Dataset Card for "jigsaw-toxic-comment" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
affahrizain/jigsaw-toxic-comment
[ "region:us" ]
2022-09-06T18:36:24+00:00
{"dataset_info": {"features": [{"name": "labels", "dtype": "int64"}, {"name": "comment_clean", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 57080609, "num_examples": 159100}, {"name": "dev", "num_bytes": 7809213, "num_examples": 22393}, {"name": "test", "num_bytes": 22245686, "num_examples": 63978}], "download_size": 13050863, "dataset_size": 87135508}}
2023-02-19T11:51:27+00:00
[]
[]
TAGS #region-us
# Dataset Card for "jigsaw-toxic-comment" More Information needed
[ "# Dataset Card for \"jigsaw-toxic-comment\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"jigsaw-toxic-comment\"\n\nMore Information needed" ]
45970ba9a0fc0f0e7971757228ea1b17d9dd3dfb
Source of data: https://github.com/FudanVI/benchmarking-chinese-text-recognition
priyank-m/chinese_text_recognition
[ "task_categories:image-to-text", "task_ids:image-captioning", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:zh", "ocr", "text-recognition", "chinese", "region:us" ]
2022-09-06T20:18:47+00:00
{"annotations_creators": [], "language_creators": [], "language": ["zh"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["image-to-text"], "task_ids": ["image-captioning"], "pretty_name": "chinese_text_recognition", "tags": ["ocr", "text-recognition", "chinese"]}
2022-09-21T08:08:19+00:00
[]
[ "zh" ]
TAGS #task_categories-image-to-text #task_ids-image-captioning #multilinguality-monolingual #size_categories-100K<n<1M #language-Chinese #ocr #text-recognition #chinese #region-us
Source of data: URL
[]
[ "TAGS\n#task_categories-image-to-text #task_ids-image-captioning #multilinguality-monolingual #size_categories-100K<n<1M #language-Chinese #ocr #text-recognition #chinese #region-us \n" ]
499e407cf6a86f408818969400d1de63163e65a1
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum * Dataset: cnn_dailymail * Config: 3.0.0 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model.
autoevaluate/autoeval-staging-eval-cnn_dailymail-3.0.0-fcbcd1-15976191
[ "autotrain", "evaluation", "region:us" ]
2022-09-06T21:24:11+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum", "metrics": ["rouge", "accuracy", "exact_match"], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}}
2022-09-06T22:16:06+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum * Dataset: cnn_dailymail * Config: 3.0.0 * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @samuelallen123 for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @samuelallen123 for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @samuelallen123 for evaluating this model." ]
5909507bf7ac0113a0a906b0a5583c8b8e0d4085
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum * Dataset: cnn_dailymail * Config: 3.0.0 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model.
autoevaluate/autoeval-staging-eval-cnn_dailymail-3.0.0-5863f2-15966190
[ "autotrain", "evaluation", "region:us" ]
2022-09-06T21:24:28+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum", "metrics": ["rouge", "accuracy"], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}}
2022-09-06T22:14:30+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum * Dataset: cnn_dailymail * Config: 3.0.0 * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @samuelallen123 for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @samuelallen123 for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @samuelallen123 for evaluating this model." ]
1139ac8154d30113fab374b3961faec562b0dd8f
# Dataset Card for citizen_nlu ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ### Dataset Description - **Homepage**: [NeuralSpace Homepage](https://huggingface.co/neuralspace) - **Repository:** [citizen_nlu Dataset](https://huggingface.co/datasets/neuralspace/citizen_nlu) - **Point of Contact:** [Juhi Jain](mailto:[email protected]) - **Point of Contact:** [Ayushman Dash](mailto:[email protected]) - **Size of downloaded dataset files:** 67.6 MB ### Dataset Summary NeuralSpace strives to provide AutoNLP text and speech services, especially for low-resource languages. One of the major services provided by NeuralSpace on its platform is the “Language Understanding” service, where you can build, train and deploy your NLU model to recognize intents and entities with minimal code and just a few clicks. The initiative of this challenge is created with the purpose of sparking AI applications to address some of the pressing problems in India and find unique ways to address them. Starting with a focus on NLU, this challenge hopes to make progress towards multilingual modelling, as language diversity is significantly underserved on the web. NeuralSpace aims at mastering the low-resource domain, and the citizen services use case is naturally a multilingual and essential domain for the general citizen. Citizen services refer to the essential services provided by organizations to general citizens. In this case, we focus on important services like various FIR-based requests, Blood/Platelets Donation, and Coronavirus-related queries. Such services may not be needed regularly by any particular city but when needed are of utmost importance, and in general, the needs for such services are prevalent every day. Despite the importance of citizen services, linguistically rich countries like India are still far behind in delivering such essential needs to the citizens with absolute ease. The best services currently available do not exist in various low-resource languages that are native to different groups of people. This challenge aims to make government services more efficient, responsive, and customer-friendly. As our computing resources and modelling capabilities grow, so does our potential to support our citizens by delivering a far superior customer experience. Equipping a Citizen services bot with the ability to converse in vernacular languages would make it accessible to a vast group of people for whom English is not a language of choice, but who are increasingly turning to digital platforms and interfaces for a wide range of needs and wants. 
### Supported Tasks A key component of any chatbot system is the NLU pipeline for ‘Intent Classification’ and ‘Named Entity Recognition’. This primarily enables any chatbot to perform various tasks at ease. A fully functional multilingual chatbot needs to be able to decipher the language and understand exactly what the user wants. #### citizen_nlu A manually-curated multilingual dataset by Data Engineers at [NeuralSpace](https://www.neuralspace.ai/) for citizen services in 9 Indian languages for a realistic information-seeking task with data samples written by native-speaking expert data annotators [here](https://www.neuralspace.ai/). The dataset files are available in CSV format. ### Languages The citizen_nlu data is available in nine Indian languages i.e., Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Punjabi, Tamil, and Telugu ## Dataset Structure ### Data Instances - **Size of downloaded dataset files:** 67.6 MB An example of 'test' looks as follows. ``` text,intents मेरे पिता की कार उनके कार्यालय की पार्किंग से कल से गायब है। वाहन संख्या केए-03-एचए-1985 । मैं एफआईआर कराना चाहता हूं।,ReportingMissingVehicle ``` An example of 'train' looks as follows. ```text,intents என் தாத்தா எனக்கு பிறந்தநாள் பரிசு கொடுத்தார் மஞ்சள் நான் டாடனானோவை இழந்தேன். காணவில்லை என புகார் தெரிவிக்க விரும்புகிறேன்,ReportingMissingVehicle ``` ### Data Fields The data fields are the same among all splits. #### citizen_nlu - `text`: a `string` feature. - `intent`: a `string` feature. - `type`: a classification label, with possible values including `train` or `test`. ### Data Splits #### citizen_nlu | |train|test| |----|----:|---:| |citizen_nlu| 287832| 4752| ### Contributions Mehar Bhatia ([email protected])
neuralspace/citizen_nlu
[ "task_categories:question-answering", "task_categories:text-retrieval", "task_categories:text2text-generation", "task_categories:other", "task_categories:translation", "task_categories:conversational", "task_ids:extractive-qa", "task_ids:closed-domain-qa", "task_ids:utterance-retrieval", "task_ids:document-retrieval", "task_ids:open-book-qa", "task_ids:closed-book-qa", "annotations_creators:other", "language_creators:other", "multilinguality:multilingual", "size_categories:n>1K", "source_datasets:original", "language:as", "language:bn", "language:gu", "language:hi", "language:kn", "language:mr", "language:pa", "language:ta", "language:te", "chatbots", "citizen services", "help", "emergency services", "health", "reporting crime", "region:us" ]
2022-09-07T03:43:33+00:00
{"annotations_creators": ["other"], "language_creators": ["other"], "language": ["as", "bn", "gu", "hi", "kn", "mr", "pa", "ta", "te"], "multilinguality": ["multilingual"], "size_categories": ["n>1K"], "source_datasets": ["original"], "task_categories": ["question-answering", "text-retrieval", "text2text-generation", "other", "translation", "conversational"], "task_ids": ["extractive-qa", "closed-domain-qa", "utterance-retrieval", "document-retrieval", "closed-domain-qa", "open-book-qa", "closed-book-qa"], "paperswithcode_id": "acronym-identification", "pretty_name": "Citizen Services NLU Multilingual Dataset.", "expert-generated license": ["cc-by-nc-sa-4.0"], "tags": ["chatbots", "citizen services", "help", "emergency services", "health", "reporting crime"], "configs": ["citizen_nlu"], "train-eval-index": [{"config": "citizen_nlu", "task": "token-classification", "task_id": "entity_extraction", "splits": {"train_split": "train", "eval_split": "test"}, "col_mapping": {"sentence": "text", "label": "target"}, "metrics": [{"type": "citizen_nlu", "name": "citizen_nlu", "config": "citizen_nlu"}]}]}
2022-09-09T04:53:16+00:00
[]
[ "as", "bn", "gu", "hi", "kn", "mr", "pa", "ta", "te" ]
TAGS #task_categories-question-answering #task_categories-text-retrieval #task_categories-text2text-generation #task_categories-other #task_categories-translation #task_categories-conversational #task_ids-extractive-qa #task_ids-closed-domain-qa #task_ids-utterance-retrieval #task_ids-document-retrieval #task_ids-open-book-qa #task_ids-closed-book-qa #annotations_creators-other #language_creators-other #multilinguality-multilingual #size_categories-n>1K #source_datasets-original #language-Assamese #language-Bengali #language-Gujarati #language-Hindi #language-Kannada #language-Marathi #language-Panjabi #language-Tamil #language-Telugu #chatbots #citizen services #help #emergency services #health #reporting crime #region-us
Dataset Card for citizen\_nlu ============================= Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions ### Dataset Description * Homepage: NeuralSpace Homepage * Repository: citizen\_nlu Dataset * Point of Contact: Juhi Jain * Point of Contact: Ayushman Dash * Size of downloaded dataset files: 67.6 MB ### Dataset Summary NeuralSpace strives to provide AutoNLP text and speech services, especially for low-resource languages. One of the major services provided by NeuralSpace on its platform is the “Language Understanding” service, where you can build, train and deploy your NLU model to recognize intents and entities with minimal code and just a few clicks. The initiative of this challenge is created with the purpose of sparking AI applications to address some of the pressing problems in India and find unique ways to address them. Starting with a focus on NLU, this challenge hopes to make progress towards multilingual modelling, as language diversity is significantly underserved on the web. NeuralSpace aims at mastering the low-resource domain, and the citizen services use case is naturally a multilingual and essential domain for the general citizen. Citizen services refer to the essential services provided by organizations to general citizens. In this case, we focus on important services like various FIR-based requests, Blood/Platelets Donation, and Coronavirus-related queries. Such services may not be needed regularly by any particular city but when needed are of utmost importance, and in general, the needs for such services are prevalent every day. Despite the importance of citizen services, linguistically rich countries like India are still far behind in delivering such essential needs to the citizens with absolute ease. The best services currently available do not exist in various low-resource languages that are native to different groups of people. This challenge aims to make government services more efficient, responsive, and customer-friendly. As our computing resources and modelling capabilities grow, so does our potential to support our citizens by delivering a far superior customer experience. Equipping a Citizen services bot with the ability to converse in vernacular languages would make it accessible to a vast group of people for whom English is not a language of choice, but who are increasingly turning to digital platforms and interfaces for a wide range of needs and wants. ### Supported Tasks A key component of any chatbot system is the NLU pipeline for ‘Intent Classification’ and ‘Named Entity Recognition’. This primarily enables any chatbot to perform various tasks at ease. A fully functional multilingual chatbot needs to be able to decipher the language and understand exactly what the user wants. #### citizen\_nlu A manually-curated multilingual dataset by Data Engineers at NeuralSpace for citizen services in 9 Indian languages for a realistic information-seeking task with data samples written by native-speaking expert data annotators here. The dataset files are available in CSV format. 
### Languages The citizen\_nlu data is available in nine Indian languages i.e., Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Punjabi, Tamil, and Telugu Dataset Structure ----------------- ### Data Instances * Size of downloaded dataset files: 67.6 MB An example of 'test' looks as follows. An example of 'train' looks as follows. ### Data Fields The data fields are the same among all splits. #### citizen\_nlu * 'text': a 'string' feature. * 'intent': a 'string' feature. * 'type': a classification label, with possible values including 'train' or 'test'. ### Data Splits #### citizen\_nlu ### Contributions Mehar Bhatia (mehar@URL)
[ "### Dataset Description\n\n\n* Homepage: NeuralSpace Homepage\n* Repository: citizen\\_nlu Dataset\n* Point of Contact: Juhi Jain\n* Point of Contact: Ayushman Dash\n* Size of downloaded dataset files: 67.6 MB", "### Dataset Summary\n\n\nNeuralSpace strives to provide AutoNLP text and speech services, especially for low-resource languages. One of the major services provided by NeuralSpace on its platform is the “Language Understanding” service, where you can build, train and deploy your NLU model to recognize intents and entities with minimal code and just a few clicks.\n\n\nThe initiative of this challenge is created with the purpose of sparkling AI applications to address some of the pressing problems in India and find unique ways to address them. Starting with a focus on NLU, this challenge hopes to make progress towards multilingual modelling, as language diversity is significantly underserved on the web.\n\n\nNeuralSpace aims at mastering the low-resource domain, and the citizen services use case is naturally a multilingual and essential domain for the general citizen.\n\n\nCitizen services refer to the essential services provided by organizations to general citizens. In this case, we focus on important services like various FIR-based requests, Blood/Platelets Donation, and Coronavirus-related queries.\n\n\nSuch services may not be needed regularly by any particular city but when needed are of utmost importance, and in general, the needs for such services are prevalent every day.\n\n\nDespite the importance of citizen services, linguistically rich countries like India are still far behind in delivering such essential needs to the citizens with absolute ease. The best services currently available do not exist in various low-resource languages that are native to different groups of people. This challenge aims to make government services more efficient, responsive, and customer-friendly.\n\n\nAs our computing resources and modelling capabilities grow, so does our potential to support our citizens by delivering a far superior customer experience. Equipping a Citizen services bot with the ability to converse in vernacular languages would make them accessible to a vast group of people for whom English is not a language of choice, but for who are increasingly turning to digital platforms and interfaces for a wide range of needs and wants.", "### Supported Tasks\n\n\nA key component of any chatbot system is the NLU pipeline for ‘Intent Classification’ and ‘Named Entity Recognition. This primarily enables any chatbot to perform various tasks at ease. A fully functional multilingual chatbot needs to be able to decipher the language and understand exactly what the user wants.", "#### citizen\\_nlu\n\n\nA manually-curated multilingual dataset by Data Engineers at NeuralSpace for citizen services in 9 Indian languages for a realistic information-seeking task with data samples written by native-speaking expert data annotators here. 
The dataset files are available in CSV format.", "### Languages\n\n\nThe citizen\\_nlu data is available in nine Indian languages i.e, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Punjabi, Tamil, and Telugu\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\n* Size of downloaded dataset files: 67.6 MB\n\n\nAn example of 'test' looks as follows.\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### citizen\\_nlu\n\n\n* 'text': a 'string' feature.\n* 'intent': a 'string' feature.\n* 'type': a classification label, with possible values including 'train' or 'test'.", "### Data Splits", "#### citizen\\_nlu", "### Contributions\n\n\nMehar Bhatia (mehar@URL)" ]
[ "TAGS\n#task_categories-question-answering #task_categories-text-retrieval #task_categories-text2text-generation #task_categories-other #task_categories-translation #task_categories-conversational #task_ids-extractive-qa #task_ids-closed-domain-qa #task_ids-utterance-retrieval #task_ids-document-retrieval #task_ids-open-book-qa #task_ids-closed-book-qa #annotations_creators-other #language_creators-other #multilinguality-multilingual #size_categories-n>1K #source_datasets-original #language-Assamese #language-Bengali #language-Gujarati #language-Hindi #language-Kannada #language-Marathi #language-Panjabi #language-Tamil #language-Telugu #chatbots #citizen services #help #emergency services #health #reporting crime #region-us \n", "### Dataset Description\n\n\n* Homepage: NeuralSpace Homepage\n* Repository: citizen\\_nlu Dataset\n* Point of Contact: Juhi Jain\n* Point of Contact: Ayushman Dash\n* Size of downloaded dataset files: 67.6 MB", "### Dataset Summary\n\n\nNeuralSpace strives to provide AutoNLP text and speech services, especially for low-resource languages. One of the major services provided by NeuralSpace on its platform is the “Language Understanding” service, where you can build, train and deploy your NLU model to recognize intents and entities with minimal code and just a few clicks.\n\n\nThe initiative of this challenge is created with the purpose of sparkling AI applications to address some of the pressing problems in India and find unique ways to address them. Starting with a focus on NLU, this challenge hopes to make progress towards multilingual modelling, as language diversity is significantly underserved on the web.\n\n\nNeuralSpace aims at mastering the low-resource domain, and the citizen services use case is naturally a multilingual and essential domain for the general citizen.\n\n\nCitizen services refer to the essential services provided by organizations to general citizens. In this case, we focus on important services like various FIR-based requests, Blood/Platelets Donation, and Coronavirus-related queries.\n\n\nSuch services may not be needed regularly by any particular city but when needed are of utmost importance, and in general, the needs for such services are prevalent every day.\n\n\nDespite the importance of citizen services, linguistically rich countries like India are still far behind in delivering such essential needs to the citizens with absolute ease. The best services currently available do not exist in various low-resource languages that are native to different groups of people. This challenge aims to make government services more efficient, responsive, and customer-friendly.\n\n\nAs our computing resources and modelling capabilities grow, so does our potential to support our citizens by delivering a far superior customer experience. Equipping a Citizen services bot with the ability to converse in vernacular languages would make them accessible to a vast group of people for whom English is not a language of choice, but for who are increasingly turning to digital platforms and interfaces for a wide range of needs and wants.", "### Supported Tasks\n\n\nA key component of any chatbot system is the NLU pipeline for ‘Intent Classification’ and ‘Named Entity Recognition. This primarily enables any chatbot to perform various tasks at ease. 
A fully functional multilingual chatbot needs to be able to decipher the language and understand exactly what the user wants.", "#### citizen\\_nlu\n\n\nA manually-curated multilingual dataset by Data Engineers at NeuralSpace for citizen services in 9 Indian languages for a realistic information-seeking task with data samples written by native-speaking expert data annotators here. The dataset files are available in CSV format.", "### Languages\n\n\nThe citizen\\_nlu data is available in nine Indian languages i.e, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Punjabi, Tamil, and Telugu\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\n* Size of downloaded dataset files: 67.6 MB\n\n\nAn example of 'test' looks as follows.\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### citizen\\_nlu\n\n\n* 'text': a 'string' feature.\n* 'intent': a 'string' feature.\n* 'type': a classification label, with possible values including 'train' or 'test'.", "### Data Splits", "#### citizen\\_nlu", "### Contributions\n\n\nMehar Bhatia (mehar@URL)" ]
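A loading sketch for the record above; the `citizen_nlu` config name comes from the record's metadata and the `text`/`intent` columns from the Data Fields section, both assumptions until checked against the hub:

```python
from datasets import load_dataset

# Sketch assuming the config and columns documented in this record.
ds = load_dataset("neuralspace/citizen_nlu", "citizen_nlu")
example = ds["train"][0]
print(example["text"], "->", example["intent"])
```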
542460b9f8fefcc6544fdd06991e3a3d9be2eef3
# AutoTrain Dataset for project: citizen_nlu_bn ## Dataset Description This dataset has been automatically processed by AutoTrain for project citizen_nlu_bn. ### Languages The BCP-47 code for the dataset's language is bn. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "text": "\u0997\u09a4 \u09e8 \u09ae\u09be\u09b8 \u0986\u09ae\u09be\u09b0 \u0986\u0997\u09c7 \u0995\u09b0\u09cb \u09a8\u09be \u0986\u09ae\u09bf \u0995\u09a4 \u09a6\u09bf\u09a8 \u09aa\u09b0\u09c7 \u09b0\u0995\u09cd\u09a4 \u09a6\u09bf\u09a4\u09c7 \u09aa\u09be\u09b0\u09bf?", "target": 3 }, { "text": "\u09b9\u09a0\u09be\u09ce \u0986\u09ae\u09bf \u09a6\u09cb\u0995\u09be\u09a8\u09c7 \u09af\u09be\u0993\u09af\u09bc\u09be\u09b0 \u099c\u09a8\u09cd\u09af \u098f\u0995\u099f\u09bf \u0996\u09be\u09b2\u09bf \u09b0\u09be\u09b8\u09cd\u09a4\u09be\u09af\u09bc \u09b9\u09be\u0981\u099f\u099b\u09bf\u09b2\u09be\u09ae \u09b8\u09be\u09a6\u09be \u09b0\u0999\u09c7\u09b0 \u0993\u09ac\u09bf 005639 \u0986\u09ae\u09bf \u09b0\u09bf\u09aa\u09cb\u09b0\u09cd\u099f \u0995\u09b0\u09ac \u09af\u0996\u09a8 \u0986\u09ae\u09bf \u09a4\u09be\u09b0 \u0995\u09be\u099b\u09c7 \u0986\u09b8\u09ac \u098f\u09ac\u0982 \u09a7\u09be\u0995\u09cd\u0995\u09be \u09a6\u09bf\u09af\u09bc\u09c7 \u099a\u09b2\u09c7 \u09af\u09be\u09ac", "target": 44 } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "text": "Value(dtype='string', id=None)", "target": "ClassLabel(num_classes=55, names=['ContactRealPerson', 'Eligibility For BloodDonationWithComorbidities', 'EligibilityForBloodDonationAgeLimit', 'EligibilityForBloodDonationCovidGap', 'EligibilityForBloodDonationForPregnantWomen', 'EligibilityForBloodDonationGap', 'EligibilityForBloodDonationSTD', 'EligibilityForBloodReceiversBloodGroup', 'EligitbilityForVaccine', 'InquiryForCovidActiveCasesCount', 'InquiryForCovidDeathCount', 'InquiryForCovidPrevention', 'InquiryForCovidRecentCasesCount', 'InquiryForCovidTotalCasesCount', 'InquiryForDoctorConsultation', 'InquiryForQuarantinePeriod', 'InquiryForTravelRestrictions', 'InquiryForVaccinationRequirements', 'InquiryForVaccineCost', 'InquiryForVaccineCount', 'InquiryOfContact', 'InquiryOfCovidSymptoms', 'InquiryOfEmergencyContact', 'InquiryOfLocation', 'InquiryOfLockdownDetails', 'InquiryOfTiming', 'InquiryofBloodDonationRequirements', 'InquiryofBloodReceivalRequirements', 'InquiryofPostBloodDonationCareSchemes', 'InquiryofPostBloodDonationCertificate', 'InquiryofPostBloodDonationEffects', 'InquiryofPostBloodReceivalCareSchemes', 'InquiryofPostBloodReceivalEffects', 'InquiryofVaccinationAgeLimit', 'IntentForBloodDonationAppointment', 'IntentForBloodReceivalAppointment', 'ReportingAnimalAbuse', 'ReportingAnimalPoaching', 'ReportingChildAbuse', 'ReportingCyberCrime', 'ReportingDomesticViolence', 'ReportingDowry', 'ReportingDrugConsumption', 'ReportingDrugTrafficing', 'ReportingHitAndRun', 'ReportingMissingPerson', 'ReportingMissingPets', 'ReportingMissingVehicle', 'ReportingMurder', 'ReportingPropertyTakeOver', 'ReportingSexualAssault', 'ReportingTheft', 'ReportingTresspassing', 'ReportingVehicleAccident', 'StatusOfFIR'], id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 27146 | | valid | 6800 |
neuralspace/autotrain-data-citizen_nlu_bn
[ "task_categories:text-classification", "language:bn", "region:us" ]
2022-09-07T04:31:08+00:00
{"language": ["bn"], "task_categories": ["text-classification"]}
2022-09-07T04:32:14+00:00
[]
[ "bn" ]
TAGS #task_categories-text-classification #language-Bengali #region-us
AutoTrain Dataset for project: citizen\_nlu\_bn =============================================== Dataset Description ------------------- This dataset has been automatically processed by AutoTrain for project citizen\_nlu\_bn. ### Languages The BCP-47 code for the dataset's language is bn. Dataset Structure ----------------- ### Data Instances A sample from this dataset looks as follows: ### Dataset Fields The dataset has the following fields (also called "features"): ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows:
[ "### Languages\n\n\nThe BCP-47 code for the dataset's language is bn.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
[ "TAGS\n#task_categories-text-classification #language-Bengali #region-us \n", "### Languages\n\n\nThe BCP-47 code for the dataset's language is bn.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
a77ffb4773b694d03c805d80ea128b44e5c709f3
# Dataset Card for solar3 ### Dataset Summary Šolar* is a developmental corpus of 5485 school texts (e.g., essays), written by students in Slovenian secondary schools (age 15-19) and pupils in the 7th-9th grade of primary school (13-15), with a small percentage also from the 6th grade. Part of the corpus (1516 texts) is annotated with teachers' corrections using a system of labels described in the document available at https://www.clarin.si/repository/xmlui/bitstream/handle/11356/1589/Smernice-za-oznacevanje-korpusa-Solar_V1.1.pdf (in Slovenian). \(*) pronounce "š" as "sh" in "shoe". By default the dataset is provided at **sentence-level** (125867 instances): each instance contains a source (the original) and a target (the corrected) sentence. Note that either the source or the target sentence in an instance may be missing - this usually happens when a source sentence is marked as redundant or when a new sentence is added by the teacher. Additionally, a source or a target sentence may appear in multiple instances - for example, this happens when one sentence gets divided into multiple sentences. There is also an option to aggregate the instances at the **document-level** or **paragraph-level** by explicitly providing the correct config: ``` datasets.load_dataset("cjvt/solar3", "paragraph_level") datasets.load_dataset("cjvt/solar3", "document_level") ``` ### Supported Tasks and Leaderboards Error correction, e.g., at token/sequence level, as token/sequence classification or text2text generation. ### Languages Slovenian. ## Dataset Structure ### Data Instances A sample instance from the dataset: ```json { 'id_doc': 'solar1', 'doc_title': 'KUS-G-slo-1-GO-E-2009-10001', 'is_manually_validated': True, 'src_tokens': ['”', 'Ne', 'da', 'sovražim', ',', 'da', 'ljubim', 'sem', 'na', 'svetu', '”', ',', 'izreče', 'Antigona', 'v', 'bran', 'kralju', 'Kreonu', 'za', 'svoje', 'nasprotno', 'mišljenje', 'pred', 'smrtjo', '.'], 'src_ling_annotations': { # truncated for conciseness 'lemma': ['”', 'ne', 'da', 'sovražiti', ...], 'ana': ['mte:U', 'mte:L', 'mte:Vd', ...], 'msd': ['UPosTag=PUNCT', 'UPosTag=PART|Polarity=Neg', 'UPosTag=SCONJ', ...], 'ne_tag': [..., 'O', 'B-PER', 'O', ...], 'space_after': [False, True, True, False, ...] }, 'tgt_tokens': ['„', 'Ne', 'da', 'sovražim', ',', 'da', 'ljubim', 'sem', 'na', 'svetu', ',', '”', 'izreče', 'Antigona', 'sebi', 'v', 'bran', 'kralju', 'Kreonu', 'za', 'svoje', 'nasprotno', 'mišljenje', 'pred', 'smrtjo', '.'], # omitted for conciseness, the format is the same as in 'src_ling_annotations' 'tgt_ling_annotations': {...}, 'corrections': [ {'idx_src': [0], 'idx_tgt': [0], 'corr_types': ['Z/LOČ/nerazvrščeno']}, {'idx_src': [10, 11], 'idx_tgt': [10, 11], 'corr_types': ['Z/LOČ/nerazvrščeno']}, {'idx_src': [], 'idx_tgt': [14], 'corr_types': ['O/KAT/povratnost']} ] } ``` The instance represents corrections in the document 'solar1' (`id_doc`), which were manually assigned/validated (`is_manually_validated`). More concretely, the source sentence contains three errors (as indicated by three elements in `corrections`): - a punctuation change: '”' -> '„'; - a punctuation change: ['”', ','] -> [',', '”'] (i.e. comma inside the quote, not outside); - addition of a new word: 'sebi'. 
### Data Fields - `id_doc`: a string containing the identifying name of the document in which the sentence appears; - `doc_title`: a string containing the assigned document title; - `is_manually_validated`: a bool indicating whether the document in which the sentence appears was reviewed by a teacher; - `src_tokens`: words in the source sentence (`[]` if there is no source sentence); - `src_ling_annotations`: a dict containing the lemmas (key `"lemma"`), morphosyntactic descriptions using UD (key `"msd"`) and JOS/MULTEXT-East (key `"ana"`) specification, named entity tags encoded using IOB2 (key `"ne_tag"`) for the source tokens (**automatically annotated**), and spacing information (key `"space_after"`), i.e. whether there is a whitespace after each token; - `tgt_tokens`: words in the target sentence (`[]` if there is no target sentence); - `tgt_ling_annotations`: a dict containing the lemmas (key `"lemma"`), morphosyntactic descriptions using UD (key `"msd"`) and JOS/MULTEXT-East (key `"ana"`) specification, named entity tags encoded using IOB2 (key `"ne_tag"`) for the target tokens (**automatically annotated**), and spacing information (key `"space_after"`), i.e. whether there is a whitespace after each token; - `corrections`: a list of the corrections, with each correction represented with a dictionary, containing the indices of the source tokens involved (`idx_src`), target tokens involved (`idx_tgt`), and the categories of the corrections made (`corr_types`). Please note that there can be multiple assigned categories for one annotated correction, in which case `len(corr_types) > 1`. ## Dataset Creation The Developmental corpus Šolar consists of 5,485 texts written by students in Slovenian secondary schools (age 15-19) and pupils in the 7th-9th grade of primary school (13-15), with a small percentage also from the 6th grade. The information on school (elementary or secondary), subject, level (grade or year), type of text, region, and date of production is provided for each text. School essays form the majority of the corpus while other material includes texts created during lessons, such as text recapitulations or descriptions, examples of formal applications, etc. Part of the corpus (1516 texts) is annotated with teachers' corrections using a system of labels described in the attached document (in Slovenian). Teacher corrections were part of the original files and reflect real classroom situations of essay marking. Corrections were then inserted into texts by annotators and subsequently categorized. Due to the annotations being gathered in a practical (i.e. classroom) setting, only the most relevant errors may sometimes be annotated, e.g., not all incorrectly placed commas are annotated if there is a bigger issue in the text. ## Additional Information ### Dataset Curators Špela Arhar Holdt; et al. (please see http://hdl.handle.net/11356/1589 for the full list) ### Licensing Information CC BY-NC-SA 4.0. 
### Citation Information ``` @misc{solar3, title = {Developmental corpus {\v S}olar 3.0}, author = {Arhar Holdt, {\v S}pela and Rozman, Tadeja and Stritar Ku{\v c}uk, Mojca and Krek, Simon and Krap{\v s} Vodopivec, Irena and Stabej, Marko and Pori, Eva and Goli, Teja and Lavri{\v c}, Polona and Laskowski, Cyprian and Kocjan{\v c}i{\v c}, Polonca and Klemenc, Bojan and Krsnik, Luka and Kosem, Iztok}, url = {http://hdl.handle.net/11356/1589}, note = {Slovenian language resource repository {CLARIN}.{SI}}, year = {2022} } ``` ### Contributions Thanks to [@matejklemen](https://github.com/matejklemen) for adding this dataset.
cjvt/solar3
[ "task_categories:text2text-generation", "task_categories:other", "annotations_creators:expert-generated", "language_creators:other", "multilinguality:monolingual", "size_categories:100K<n<1M", "size_categories:1K<n<10K", "source_datasets:original", "language:sl", "license:cc-by-nc-sa-4.0", "grammatical-error-correction", "other-token-classification-of-text-errors", "region:us" ]
2022-09-07T08:16:23+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["other"], "language": ["sl"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M", "1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text2text-generation", "other"], "task_ids": [], "pretty_name": "solar3", "tags": ["grammatical-error-correction", "other-token-classification-of-text-errors"]}
2022-10-21T06:35:45+00:00
[]
[ "sl" ]
TAGS #task_categories-text2text-generation #task_categories-other #annotations_creators-expert-generated #language_creators-other #multilinguality-monolingual #size_categories-100K<n<1M #size_categories-1K<n<10K #source_datasets-original #language-Slovenian #license-cc-by-nc-sa-4.0 #grammatical-error-correction #other-token-classification-of-text-errors #region-us
# Dataset Card for solar3 ### Dataset Summary Šolar* is a developmental corpus of 5485 school texts (e.g., essays), written by students in Slovenian secondary schools (age 15-19) and pupils in the 7th-9th grade of primary school (13-15), with a small percentage also from the 6th grade. Part of the corpus (1516 texts) is annotated with teachers' corrections using a system of labels described in the document available at URL (in Slovenian). \(*) pronounce "š" as "sh" in "shoe". By default the dataset is provided at sentence-level (125867 instances): each instance contains a source (the original) and a target (the corrected) sentence. Note that either the source or the target sentence in an instance may be missing - this usually happens when a source sentence is marked as redundant or when a new sentence is added by the teacher. Additionally, a source or a target sentence may appear in multiple instances - for example, this happens when one sentence gets divided into multiple sentences. There is also an option to aggregate the instances at the document-level or paragraph-level by explicitly providing the correct config: ### Supported Tasks and Leaderboards Error correction, e.g., at token/sequence level, as token/sequence classification or text2text generation. ### Languages Slovenian. ## Dataset Structure ### Data Instances A sample instance from the dataset: The instance represents corrections in the document 'solar1' ('id_doc'), which were manually assigned/validated ('is_manually_validated'). More concretely, the source sentence contains three errors (as indicated by three elements in 'corrections'): - a punctuation change: '”' -> '„'; - a punctuation change: ['”', ','] -> [',', '”'] (i.e. comma inside the quote, not outside); - addition of a new word: 'sebi'. ### Data Fields - 'id_doc': a string containing the identifying name of the document in which the sentence appears; - 'doc_title': a string containing the assigned document title; - 'is_manually_validated': a bool indicating whether the document in which the sentence appears was reviewed by a teacher; - 'src_tokens': words in the source sentence ('[]' if there is no source sentence); - 'src_ling_annotations': a dict containing the lemmas (key '"lemma"'), morphosyntactic descriptions using UD (key '"msd"') and JOS/MULTEXT-East (key '"ana"') specification, named entity tags encoded using IOB2 (key '"ne_tag"') for the source tokens (automatically annotated), and spacing information (key '"space_after"'), i.e. whether there is a whitespace after each token; - 'tgt_tokens': words in the target sentence ('[]' if there is no target sentence); - 'tgt_ling_annotations': a dict containing the lemmas (key '"lemma"'), morphosyntactic descriptions using UD (key '"msd"') and JOS/MULTEXT-East (key '"ana"') specification, named entity tags encoded using IOB2 (key '"ne_tag"') for the target tokens (automatically annotated), and spacing information (key '"space_after"'), i.e. whether there is a whitespace after each token; - 'corrections': a list of the corrections, with each correction represented with a dictionary, containing the indices of the source tokens involved ('idx_src'), target tokens involved ('idx_tgt'), and the categories of the corrections made ('corr_types'). Please note that there can be multiple assigned categories for one annotated correction, in which case 'len(corr_types) > 1'. 
## Dataset Creation The Developmental corpus Šolar consists of 5,485 texts written by students in Slovenian secondary schools (age 15-19) and pupils in the 7th-9th grade of primary school (13-15), with a small percentage also from the 6th grade. The information on school (elementary or secondary), subject, level (grade or year), type of text, region, and date of production is provided for each text. School essays form the majority of the corpus while other material includes texts created during lessons, such as text recapitulations or descriptions, examples of formal applications, etc. Part of the corpus (1516 texts) is annotated with teachers' corrections using a system of labels described in the attached document (in Slovenian). Teacher corrections were part of the original files and reflect real classroom situations of essay marking. Corrections were then inserted into texts by annotators and subsequently categorized. Due to the annotations being gathered in a practical (i.e. classroom) setting, only the most relevant errors may sometimes be annotated, e.g., not all incorrectly placed commas are annotated if there is a bigger issue in the text. ## Additional Information ### Dataset Curators Špela Arhar Holdt; et al. (please see URL for the full list) ### Licensing Information CC BY-NC-SA 4.0. ### Contributions Thanks to @matejklemen for adding this dataset.
[ "# Dataset Card for solar3", "### Dataset Summary\n\nŠolar* is a developmental corpus of 5485 school texts (e.g., essays), written by students in Slovenian secondary schools \n(age 15-19) and pupils in the 7th-9th grade of primary school (13-15), with a small percentage also from the 6th grade. \nPart of the corpus (1516 texts) is annotated with teachers' corrections using a system of labels described in the \ndocument available at URL (in Slovenian).\n\n\\(*) pronounce \"š\" as \"sh\" in \"shoe\".\n\nBy default the dataset is provided at sentence-level (125867 instances): each instance contains a source (the original) and a target (the corrected) sentence. Note that either the source or the target sentence in an instance may be missing - this usually happens when a source sentence is marked as redundant or when a new sentence is added by the teacher. Additionally, a source or a target sentence may appear in multiple instances - for example, this happens when one sentence gets divided into multiple sentences.\n\nThere is also an option to aggregate the instances at the document-level or paragraph-level \nby explicitly providing the correct config:", "### Supported Tasks and Leaderboards\n\nError correction, e.g., at token/sequence level, as token/sequence classification or text2text generation.", "### Languages\n\nSlovenian.", "## Dataset Structure", "### Data Instances\n\nA sample instance from the dataset:\n\n\nThe instance represents a correction in the document 'solar1' ('id_doc'), which were manually assigned/validated ('is_manually_validated'). More concretely, the source sentence contains three errors (as indicated by three elements in 'corrections'): \n- a punctuation change: '”' -> '„'; \n- a punctuation change: ['”', ','] -> [',', '”'] (i.e. comma inside the quote, not outside);\n- addition of a new word: 'sebi'.", "### Data Fields\n\n- 'id_doc': a string containing the identifying name of the document in which the sentence appears; \n- 'doc_title': a string containing the assigned document title; \n- 'is_manually_validated': a bool indicating whether the document in which the sentence appears was reviewed by a teacher; \n- 'src_tokens': words in the source sentence ('[]' if there is no source sentence); \n- 'src_ling_annotations': a dict containing the lemmas (key '\"lemma\"'), morphosyntactic descriptions using UD (key '\"msd\"') and JOS/MULTEXT-East (key '\"ana\"') specification, named entity tags encoded using IOB2 (key '\"ne_tag\"') for the source tokens (automatically annotated), and spacing information (key '\"space_after\"'), i.e. whether there is a whitespace after each token; \n- 'tgt_tokens': words in the target sentence ('[]' if there is no target sentence); \n- 'tgt_ling_annotations': a dict containing the lemmas (key '\"lemma\"'), morphosyntactic descriptions using UD (key '\"msd\"') and JOS/MULTEXT-East (key '\"ana\"') specification, named entity tags encoded using IOB2 (key '\"ne_tag\"') for the target tokens (automatically annotated), and spacing information (key '\"space_after\"'), i.e. whether there is a whitespace after each token;\n- 'corrections': a list of the corrections, with each correction represented with a dictionary, containing the indices of the source tokens involved ('idx_src'), target tokens involved ('idx_tgt'), and the categories of the corrections made ('corr_types'). 
Please note that there can be multiple assigned categories for one annotated correction, in which case 'len(corr_types) > 1'.", "## Dataset Creation\n\nThe Developmental corpus Šolar consists of 5,485 texts written by students in Slovenian secondary schools (age 15-19) and pupils in the 7th-9th grade of primary school (13-15), with a small percentage also from the 6th grade. The information on school (elementary or secondary), subject, level (grade or year), type of text, region, and date of production is provided for each text. School essays form the majority of the corpus while other material includes texts created during lessons, such as text recapitulations or descriptions, examples of formal applications, etc.\n\nPart of the corpus (1516 texts) is annotated with teachers' corrections using a system of labels described in the attached document (in Slovenian). Teacher corrections were part of the original files and reflect real classroom situations of essay marking. Corrections were then inserted into texts by annotators and subsequently categorized. Due to the annotations being gathered in a practical (i.e. classroom) setting, only the most relevant errors may sometimes be annotated, e.g., not all incorrectly placed commas are annotated if there is a bigger issue in the text.", "## Additional Information", "### Dataset Curators\n\nŠpela Arhar Holdt; et al. (please see URL for the full list)", "### Licensing Information\n\nCC BY-NC-SA 4.0.", "### Contributions\n\nThanks to @matejklemen for adding this dataset." ]
[ "TAGS\n#task_categories-text2text-generation #task_categories-other #annotations_creators-expert-generated #language_creators-other #multilinguality-monolingual #size_categories-100K<n<1M #size_categories-1K<n<10K #source_datasets-original #language-Slovenian #license-cc-by-nc-sa-4.0 #grammatical-error-correction #other-token-classification-of-text-errors #region-us \n", "# Dataset Card for solar3", "### Dataset Summary\n\nŠolar* is a developmental corpus of 5485 school texts (e.g., essays), written by students in Slovenian secondary schools \n(age 15-19) and pupils in the 7th-9th grade of primary school (13-15), with a small percentage also from the 6th grade. \nPart of the corpus (1516 texts) is annotated with teachers' corrections using a system of labels described in the \ndocument available at URL (in Slovenian).\n\n\\(*) pronounce \"š\" as \"sh\" in \"shoe\".\n\nBy default the dataset is provided at sentence-level (125867 instances): each instance contains a source (the original) and a target (the corrected) sentence. Note that either the source or the target sentence in an instance may be missing - this usually happens when a source sentence is marked as redundant or when a new sentence is added by the teacher. Additionally, a source or a target sentence may appear in multiple instances - for example, this happens when one sentence gets divided into multiple sentences.\n\nThere is also an option to aggregate the instances at the document-level or paragraph-level \nby explicitly providing the correct config:", "### Supported Tasks and Leaderboards\n\nError correction, e.g., at token/sequence level, as token/sequence classification or text2text generation.", "### Languages\n\nSlovenian.", "## Dataset Structure", "### Data Instances\n\nA sample instance from the dataset:\n\n\nThe instance represents a correction in the document 'solar1' ('id_doc'), which were manually assigned/validated ('is_manually_validated'). More concretely, the source sentence contains three errors (as indicated by three elements in 'corrections'): \n- a punctuation change: '”' -> '„'; \n- a punctuation change: ['”', ','] -> [',', '”'] (i.e. comma inside the quote, not outside);\n- addition of a new word: 'sebi'.", "### Data Fields\n\n- 'id_doc': a string containing the identifying name of the document in which the sentence appears; \n- 'doc_title': a string containing the assigned document title; \n- 'is_manually_validated': a bool indicating whether the document in which the sentence appears was reviewed by a teacher; \n- 'src_tokens': words in the source sentence ('[]' if there is no source sentence); \n- 'src_ling_annotations': a dict containing the lemmas (key '\"lemma\"'), morphosyntactic descriptions using UD (key '\"msd\"') and JOS/MULTEXT-East (key '\"ana\"') specification, named entity tags encoded using IOB2 (key '\"ne_tag\"') for the source tokens (automatically annotated), and spacing information (key '\"space_after\"'), i.e. whether there is a whitespace after each token; \n- 'tgt_tokens': words in the target sentence ('[]' if there is no target sentence); \n- 'tgt_ling_annotations': a dict containing the lemmas (key '\"lemma\"'), morphosyntactic descriptions using UD (key '\"msd\"') and JOS/MULTEXT-East (key '\"ana\"') specification, named entity tags encoded using IOB2 (key '\"ne_tag\"') for the target tokens (automatically annotated), and spacing information (key '\"space_after\"'), i.e. 
whether there is a whitespace after each token;\n- 'corrections': a list of the corrections, with each correction represented with a dictionary, containing the indices of the source tokens involved ('idx_src'), target tokens involved ('idx_tgt'), and the categories of the corrections made ('corr_types'). Please note that there can be multiple assigned categories for one annotated correction, in which case 'len(corr_types) > 1'.", "## Dataset Creation\n\nThe Developmental corpus Šolar consists of 5,485 texts written by students in Slovenian secondary schools (age 15-19) and pupils in the 7th-9th grade of primary school (13-15), with a small percentage also from the 6th grade. The information on school (elementary or secondary), subject, level (grade or year), type of text, region, and date of production is provided for each text. School essays form the majority of the corpus while other material includes texts created during lessons, such as text recapitulations or descriptions, examples of formal applications, etc.\n\nPart of the corpus (1516 texts) is annotated with teachers' corrections using a system of labels described in the attached document (in Slovenian). Teacher corrections were part of the original files and reflect real classroom situations of essay marking. Corrections were then inserted into texts by annotators and subsequently categorized. Due to the annotations being gathered in a practical (i.e. classroom) setting, only the most relevant errors may sometimes be annotated, e.g., not all incorrectly placed commas are annotated if there is a bigger issue in the text.", "## Additional Information", "### Dataset Curators\n\nŠpela Arhar Holdt; et al. (please see URL for the full list)", "### Licensing Information\n\nCC BY-NC-SA 4.0.", "### Contributions\n\nThanks to @matejklemen for adding this dataset." ]
aa18c10ce999c806bf6f30a050b0d9a720ccd0c3
**Published**: September 21st, 2022 <br> **Author**: Julius Breiholz # GARFAB-Dataset The (G)erman corpus of annotated (A)pp (R)eviews to detect (F)eature requests (A)nd (B)ug reports (GARFAB) is a dataset for fine-tuning models that classify German-language app store reviews (ASRs) into "Feature Requests", "Bug Reports" and "Irrelevants". All ASRs were collected from the Google Play Store and were classified manually by two independent annotators. A weighted and a full version are published with the following distributions of ASRs: | | Feature Request | Bug Reports | Irrelevant | Total | | --- | --- | --- | --- | --- | | full | 345 | 387 | 2212 | 2944 | | weighted | 345 | 345 | 345 | 1035 |
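A quick sanity check of the distribution table above, sketched under assumptions: that the full/weighted versions are exposed as dataset configs and that the label column is called `label` (the card does not document the schema):

```python
from collections import Counter
from datasets import load_dataset

# Both the config name "full" and the column name "label" are assumptions;
# the split name "train" is too, if the dataset ships a single split.
garfab = load_dataset("julius-br/GARFAB", "full", split="train")
print(Counter(garfab["label"]))  # expect roughly 345 / 387 / 2212 per the table
```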
julius-br/GARFAB
[ "license:mit", "region:us" ]
2022-09-07T10:33:31+00:00
{"license": "mit"}
2022-09-21T14:54:55+00:00
[]
[]
TAGS #license-mit #region-us
Published: September 21st, 2022 
Author: Julius Breiholz


GARFAB-Dataset
==============


The (G)erman corpus of annotated (A)pp (R)eviews to detect (F)eature requests (A)nd (B)ug reports (GARFAB) is a dataset for fine-tuning models that classify German-language app store reviews (ASRs) into "Feature Requests", "Bug Reports" and "Irrelevants". All ASRs were collected from the Google Play Store and were classified manually by two independent annotators. A weighted and a full version are published with the following distributions of ASRs:
[]
[ "TAGS\n#license-mit #region-us \n" ]
85f90b5212cc669b29aac223f6e7a97e82da95c9
# Reddit Demo dataset
jamescalam/reddit-demo
[ "region:us" ]
2022-09-07T10:57:04+00:00
{}
2022-09-07T11:12:43+00:00
[]
[]
TAGS #region-us
# Reddit Demo dataset
[ "# Reddit Demo dataset" ]
[ "TAGS\n#region-us \n", "# Reddit Demo dataset" ]
b514058e84ca638776d8b92786dc41a343aafdbf
;oertjh
helliun/mePics
[ "region:us" ]
2022-09-07T12:36:53+00:00
{}
2022-09-07T13:33:55+00:00
[]
[]
TAGS #region-us
;oertjh
[]
[ "TAGS\n#region-us \n" ]
11d59a59eeee7591bd6e8fe2611be016e9f15f22
Read this [BLOG](https://neuralmagic.com/blog/classifying-finance-tweets-in-real-time-with-sparse-transformers/) to see how I fine-tuned a sparse transformer on this dataset. ### Dataset Description The Twitter Financial News dataset is an English-language dataset containing an annotated corpus of finance-related tweets. This dataset is used to classify finance-related tweets for their topic. 1. The dataset holds 21,107 documents annotated with 20 labels: ```python topics = { "LABEL_0": "Analyst Update", "LABEL_1": "Fed | Central Banks", "LABEL_2": "Company | Product News", "LABEL_3": "Treasuries | Corporate Debt", "LABEL_4": "Dividend", "LABEL_5": "Earnings", "LABEL_6": "Energy | Oil", "LABEL_7": "Financials", "LABEL_8": "Currencies", "LABEL_9": "General News | Opinion", "LABEL_10": "Gold | Metals | Materials", "LABEL_11": "IPO", "LABEL_12": "Legal | Regulation", "LABEL_13": "M&A | Investments", "LABEL_14": "Macro", "LABEL_15": "Markets", "LABEL_16": "Politics", "LABEL_17": "Personnel Change", "LABEL_18": "Stock Commentary", "LABEL_19": "Stock Movement", } ``` The data was collected using the Twitter API. The current dataset supports the multi-class classification task. ### Task: Topic Classification # Data Splits There are 2 splits: train and validation. Below are the statistics: | Dataset Split | Number of Instances in Split | | ------------- | ------------------------------------------- | | Train | 16,990 | | Validation | 4,118 | # Licensing Information The Twitter Financial Dataset (topic) version 1.0.0 is released under the MIT License.
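A short usage sketch that reuses the `topics` mapping defined above; the integer label column name (`label`) is an assumption about the schema:

```python
from datasets import load_dataset

ds = load_dataset("zeroshot/twitter-financial-news-topic")
print(ds["train"].num_rows, ds["validation"].num_rows)  # expect 16990 and 4118

# Turn the integer class id into a readable topic name via the mapping above.
example = ds["train"][0]
print(example["text"], "->", topics[f"LABEL_{example['label']}"])
```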
zeroshot/twitter-financial-news-topic
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:other", "language_creators:other", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:mit", "twitter", "finance", "markets", "stocks", "wallstreet", "quant", "hedgefunds", "region:us" ]
2022-09-07T17:43:21+00:00
{"annotations_creators": ["other"], "language_creators": ["other"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "twitter financial news", "tags": ["twitter", "finance", "markets", "stocks", "wallstreet", "quant", "hedgefunds", "markets"]}
2022-12-04T16:50:10+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-other #language_creators-other #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-mit #twitter #finance #markets #stocks #wallstreet #quant #hedgefunds #region-us
Read this BLOG to see how I fine-tuned a sparse transformer on this dataset. ### Dataset Description The Twitter Financial News dataset is an English-language dataset containing an annotated corpus of finance-related tweets. This dataset is used to classify finance-related tweets for their topic. 1. The dataset holds 21,107 documents annotated with 20 labels: The data was collected using the Twitter API. The current dataset supports the multi-class classification task. ### Task: Topic Classification Data Splits =========== There are 2 splits: train and validation. Below are the statistics: Licensing Information ===================== The Twitter Financial Dataset (topic) version 1.0.0 is released under the MIT License.
[ "### Dataset Description\n\n\nThe Twitter Financial News dataset is an English-language dataset containing an annotated corpus of finance-related tweets. This dataset is used to classify finance-related tweets for their topic.\n\n\n1. The dataset holds 21,107 documents annotated with 20 labels:\n\n\nThe data was collected using the Twitter API. The current dataset supports the multi-class classification task.", "### Task: Topic Classification\n\n\nData Splits\n===========\n\n\nThere are 2 splits: train and validation. Below are the statistics:\n\n\n\nLicensing Information\n=====================\n\n\nThe Twitter Financial Dataset (topic) version 1.0.0 is released under the MIT License." ]
[ "TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-other #language_creators-other #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-mit #twitter #finance #markets #stocks #wallstreet #quant #hedgefunds #region-us \n", "### Dataset Description\n\n\nThe Twitter Financial News dataset is an English-language dataset containing an annotated corpus of finance-related tweets. This dataset is used to classify finance-related tweets for their topic.\n\n\n1. The dataset holds 21,107 documents annotated with 20 labels:\n\n\nThe data was collected using the Twitter API. The current dataset supports the multi-class classification task.", "### Task: Topic Classification\n\n\nData Splits\n===========\n\n\nThere are 2 splits: train and validation. Below are the statistics:\n\n\n\nLicensing Information\n=====================\n\n\nThe Twitter Financial Dataset (topic) version 1.0.0 is released under the MIT License." ]
baa2e9a0a5d19ff2838e9cfbceb85b81d7a06f8e
# Dataset Card for Law Stack Exchange Dataset ## Dataset Description - **Paper: [Parameter-Efficient Legal Domain Adaptation](https://aclanthology.org/2022.nllp-1.10/)** - **Point of Contact: [email protected]** ### Dataset Summary Dataset from the Law Stack Exchange, as used in "Parameter-Efficient Legal Domain Adaptation". ### Citation Information ``` @inproceedings{li-etal-2022-parameter, title = "Parameter-Efficient Legal Domain Adaptation", author = "Li, Jonathan and Bhambhoria, Rohan and Zhu, Xiaodan", booktitle = "Proceedings of the Natural Legal Language Processing Workshop 2022", month = dec, year = "2022", address = "Abu Dhabi, United Arab Emirates (Hybrid)", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.nllp-1.10", pages = "119--129", } ```
jonathanli/law-stack-exchange
[ "task_categories:text-classification", "language:en", "stackexchange", "law", "region:us" ]
2022-09-07T18:49:21+00:00
{"language": ["en"], "task_categories": ["text-classification"], "pretty_name": "Law Stack Exchange", "tags": ["stackexchange", "law"]}
2023-02-23T16:37:19+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #language-English #stackexchange #law #region-us
# Dataset Card for Law Stack Exchange Dataset ## Dataset Description - Paper: Parameter-Efficient Legal Domain Adaptation - Point of Contact: jxl@URL ### Dataset Summary Dataset from the Law Stack Exchange, as used in "Parameter-Efficient Legal Domain Adaptation".
[ "# Dataset Card for Law Stack Exchange Dataset", "## Dataset Description\n\n- Paper: Parameter-Efficient Legal Domain Adaptation \n- Point of Contact: jxl@URL", "### Dataset Summary\n\nDataset from the Law Stack Exchange, as used in \"Parameter-Efficient Legal Domain Adaptation\"." ]
[ "TAGS\n#task_categories-text-classification #language-English #stackexchange #law #region-us \n", "# Dataset Card for Law Stack Exchange Dataset", "## Dataset Description\n\n- Paper: Parameter-Efficient Legal Domain Adaptation \n- Point of Contact: jxl@URL", "### Dataset Summary\n\nDataset from the Law Stack Exchange, as used in \"Parameter-Efficient Legal Domain Adaptation\"." ]
5873a8aa4a5b3b4010501de70241f853acbbadc0
# Dataset Card for US Accidents (2016 - 2021) ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://kaggle.com/datasets/sobhanmoosavi/us-accidents - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary ### Description This is a countrywide car accident dataset, which covers __49 states of the USA__. The accident data are collected from __February 2016 to Dec 2021__, using multiple APIs that provide streaming traffic incident (or event) data. These APIs broadcast traffic data captured by a variety of entities, such as the US and state departments of transportation, law enforcement agencies, traffic cameras, and traffic sensors within the road-networks. Currently, there are about __2.8 million__ accident records in this dataset. Check [here](https://smoosavi.org/datasets/us_accidents) to learn more about this dataset. ### Acknowledgements Please cite the following papers if you use this dataset: - Moosavi, Sobhan, Mohammad Hossein Samavatian, Srinivasan Parthasarathy, and Rajiv Ramnath. “[A Countrywide Traffic Accident Dataset](https://arxiv.org/abs/1906.05409).”, 2019. - Moosavi, Sobhan, Mohammad Hossein Samavatian, Srinivasan Parthasarathy, Radu Teodorescu, and Rajiv Ramnath. ["Accident Risk Prediction based on Heterogeneous Sparse Data: New Dataset and Insights."](https://arxiv.org/abs/1909.09638) In proceedings of the 27th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, ACM, 2019. ### Content This dataset has been collected in real-time, using multiple Traffic APIs. Currently, it contains accident data that are collected from February 2016 to Dec 2021 for the Contiguous United States. Check [here](https://smoosavi.org/datasets/us_accidents) to learn more about this dataset. ### Inspiration US-Accidents can be used for numerous applications such as real-time car accident prediction, studying car accidents hotspot locations, casualty analysis and extracting cause and effect rules to predict car accidents, and studying the impact of precipitation or other environmental stimuli on accident occurrence. The most recent release of the dataset can also be useful to study the impact of COVID-19 on traffic behavior and accidents. ### Usage Policy and Legal Disclaimer This dataset is being distributed only for __Research__ purposes, under Creative Commons Attribution-Noncommercial-ShareAlike license (CC BY-NC-SA 4.0). 
By clicking on download button(s) below, you are agreeing to use this data only for non-commercial, research, or academic applications. You may need to cite the above papers if you use this dataset. ### Inquiries or need help? For any inquiries, contact me at [email protected] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset was shared by [@sobhanmoosavi](https://kaggle.com/sobhanmoosavi) ### Licensing Information The license for this dataset is cc-by-nc-sa-4.0 ### Citation Information ```bibtex [More Information Needed] ``` ### Contributions [More Information Needed]
nateraw/us-accidents
[ "license:cc-by-nc-sa-4.0", "arxiv:1906.05409", "arxiv:1909.09638", "region:us" ]
2022-09-07T21:24:31+00:00
{"license": ["cc-by-nc-sa-4.0"], "kaggle_id": "sobhanmoosavi/us-accidents"}
2022-09-07T21:24:52+00:00
[ "1906.05409", "1909.09638" ]
[]
TAGS #license-cc-by-nc-sa-4.0 #arxiv-1906.05409 #arxiv-1909.09638 #region-us
# Dataset Card for US Accidents (2016 - 2021) ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary ### Description This is a countrywide car accident dataset, which covers __49 states of the USA__. The accident data are collected from __February 2016 to Dec 2021__, using multiple APIs that provide streaming traffic incident (or event) data. These APIs broadcast traffic data captured by a variety of entities, such as the US and state departments of transportation, law enforcement agencies, traffic cameras, and traffic sensors within the road-networks. Currently, there are about __2.8 million__ accident records in this dataset. Check here to learn more about this dataset. ### Acknowledgements Please cite the following papers if you use this dataset: - Moosavi, Sobhan, Mohammad Hossein Samavatian, Srinivasan Parthasarathy, and Rajiv Ramnath. “A Countrywide Traffic Accident Dataset.”, 2019. - Moosavi, Sobhan, Mohammad Hossein Samavatian, Srinivasan Parthasarathy, Radu Teodorescu, and Rajiv Ramnath. "Accident Risk Prediction based on Heterogeneous Sparse Data: New Dataset and Insights." In proceedings of the 27th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, ACM, 2019. ### Content This dataset has been collected in real-time, using multiple Traffic APIs. Currently, it contains accident data that are collected from February 2016 to Dec 2021 for the Contiguous United States. Check here to learn more about this dataset. ### Inspiration US-Accidents can be used for numerous applications such as real-time car accident prediction, studying car accidents hotspot locations, casualty analysis and extracting cause and effect rules to predict car accidents, and studying the impact of precipitation or other environmental stimuli on accident occurrence. The most recent release of the dataset can also be useful to study the impact of COVID-19 on traffic behavior and accidents. ### Usage Policy and Legal Disclaimer This dataset is being distributed only for __Research__ purposes, under Creative Commons Attribution-Noncommercial-ShareAlike license (CC BY-NC-SA 4.0). By clicking on download button(s) below, you are agreeing to use this data only for non-commercial, research, or academic applications. You may need to cite the above papers if you use this dataset. ### Inquiries or need help? For any inquiries, contact me at moosavi.3@URL ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? 
### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators This dataset was shared by @sobhanmoosavi ### Licensing Information The license for this dataset is cc-by-nc-sa-4.0 ### Contributions
[ "# Dataset Card for US Accidents (2016 - 2021)", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary", "### Description\nThis is a countrywide car accident dataset, which covers __49 states of the USA__. The accident data are collected from __February 2016 to Dec 2021__, using multiple APIs that provide streaming traffic incident (or event) data. These APIs broadcast traffic data captured by a variety of entities, such as the US and state departments of transportation, law enforcement agencies, traffic cameras, and traffic sensors within the road-networks. Currently, there are about __2.8 million__ accident records in this dataset. Check here to learn more about this dataset.", "### Acknowledgements\nPlease cite the following papers if you use this dataset: \n\n- Moosavi, Sobhan, Mohammad Hossein Samavatian, Srinivasan Parthasarathy, and Rajiv Ramnath. “A Countrywide Traffic Accident Dataset.”, 2019.\n\n- Moosavi, Sobhan, Mohammad Hossein Samavatian, Srinivasan Parthasarathy, Radu Teodorescu, and Rajiv Ramnath. \"Accident Risk Prediction based on Heterogeneous Sparse Data: New Dataset and Insights.\" In proceedings of the 27th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, ACM, 2019.", "### Content\nThis dataset has been collected in real-time, using multiple Traffic APIs. Currently, it contains accident data that are collected from February 2016 to Dec 2021 for the Contiguous United States. Check here to learn more about this dataset.", "### Inspiration\nUS-Accidents can be used for numerous applications such as real-time car accident prediction, studying car accidents hotspot locations, casualty analysis and extracting cause and effect rules to predict car accidents, and studying the impact of precipitation or other environmental stimuli on accident occurrence. The most recent release of the dataset can also be useful to study the impact of COVID-19 on traffic behavior and accidents.", "### Usage Policy and Legal Disclaimer\nThis dataset is being distributed only for __Research__ purposes, under Creative Commons Attribution-Noncommercial-ShareAlike license (CC BY-NC-SA 4.0). By clicking on download button(s) below, you are agreeing to use this data only for non-commercial, research, or academic applications. 
You may need to cite the above papers if you use this dataset.", "### Inquiries or need help?\nFor any inquiries, contact me at moosavi.3@URL", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThis dataset was shared by @sobhanmoosavi", "### Licensing Information\n\nThe license for this dataset is cc-by-nc-sa-4.0", "### Contributions" ]
[ "TAGS\n#license-cc-by-nc-sa-4.0 #arxiv-1906.05409 #arxiv-1909.09638 #region-us \n", "# Dataset Card for US Accidents (2016 - 2021)", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary", "### Description\nThis is a countrywide car accident dataset, which covers __49 states of the USA__. The accident data are collected from __February 2016 to Dec 2021__, using multiple APIs that provide streaming traffic incident (or event) data. These APIs broadcast traffic data captured by a variety of entities, such as the US and state departments of transportation, law enforcement agencies, traffic cameras, and traffic sensors within the road-networks. Currently, there are about __2.8 million__ accident records in this dataset. Check here to learn more about this dataset.", "### Acknowledgements\nPlease cite the following papers if you use this dataset: \n\n- Moosavi, Sobhan, Mohammad Hossein Samavatian, Srinivasan Parthasarathy, and Rajiv Ramnath. “A Countrywide Traffic Accident Dataset.”, 2019.\n\n- Moosavi, Sobhan, Mohammad Hossein Samavatian, Srinivasan Parthasarathy, Radu Teodorescu, and Rajiv Ramnath. \"Accident Risk Prediction based on Heterogeneous Sparse Data: New Dataset and Insights.\" In proceedings of the 27th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, ACM, 2019.", "### Content\nThis dataset has been collected in real-time, using multiple Traffic APIs. Currently, it contains accident data that are collected from February 2016 to Dec 2021 for the Contiguous United States. Check here to learn more about this dataset.", "### Inspiration\nUS-Accidents can be used for numerous applications such as real-time car accident prediction, studying car accidents hotspot locations, casualty analysis and extracting cause and effect rules to predict car accidents, and studying the impact of precipitation or other environmental stimuli on accident occurrence. The most recent release of the dataset can also be useful to study the impact of COVID-19 on traffic behavior and accidents.", "### Usage Policy and Legal Disclaimer\nThis dataset is being distributed only for __Research__ purposes, under Creative Commons Attribution-Noncommercial-ShareAlike license (CC BY-NC-SA 4.0). By clicking on download button(s) below, you are agreeing to use this data only for non-commercial, research, or academic applications. 
You may need to cite the above papers if you use this dataset.", "### Inquiries or need help?\nFor any inquiries, contact me at moosavi.3@URL", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThis dataset was shared by @sobhanmoosavi", "### Licensing Information\n\nThe license for this dataset is cc-by-nc-sa-4.0", "### Contributions" ]
95ec1d31cef548b24b6071771ed2a2d317fd7717
# OneStopEnglish OneStopEnglish is a corpus of texts written at three reading levels, and demonstrates its usefulness through two applications - automatic readability assessment and automatic text simplification. This dataset is a version of [onestop_english](https://huggingface.co/datasets/onestop_english), which was randomly split into (64*3=) 192 train examples and 375 test examples (stratified).
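A minimal sketch to verify the split sizes and the stratification; the `label` column name is an assumption about the schema:

```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("SetFit/onestop_english")
print(ds["train"].num_rows, ds["test"].num_rows)  # expect 192 and 375

# Stratified split: 64 examples per reading level in train.
print(Counter(ds["train"]["label"]))
```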
SetFit/onestop_english
[ "license:cc-by-sa-4.0", "region:us" ]
2022-09-08T05:12:18+00:00
{"license": "cc-by-sa-4.0"}
2022-09-08T05:16:39+00:00
[]
[]
TAGS #license-cc-by-sa-4.0 #region-us
# OneStopEnglish OneStopEnglish is a corpus of texts written at three reading levels, and demonstrates its usefulness through two applications - automatic readability assessment and automatic text simplification. This dataset is a version of onestop_english, which was randomly split into (64*3=) 192 train examples and 375 test examples (stratified).
[ "# OneStopEnglish\nOneStopEnglish is a corpus of texts written at three reading levels, and demonstrates its usefulness for through two applications - automatic readability assessment and automatic text simplification.\n\nThis dataset is a version of onestop_english, which was randomly split into (64*3=) 192 train examples, and 375 test examples (stratified)." ]
[ "TAGS\n#license-cc-by-sa-4.0 #region-us \n", "# OneStopEnglish\nOneStopEnglish is a corpus of texts written at three reading levels, and demonstrates its usefulness for through two applications - automatic readability assessment and automatic text simplification.\n\nThis dataset is a version of onestop_english, which was randomly split into (64*3=) 192 train examples, and 375 test examples (stratified)." ]
8f4edc041879a2e0162401ee1754a7555b660c6a
# School Notebooks Dataset

The images of school notebooks with handwritten notes in English.

The dataset annotation contains end-to-end markup for training detection and OCR models, as well as an end-to-end model that reads text from pages.

## Annotation format

The annotation is in COCO format. The `annotation.json` should have the following dictionaries:

- `annotation["categories"]` - a list of dicts with category info (category names and indexes).
- `annotation["images"]` - a list of dictionaries with a description of images; each dictionary must contain the fields:
  - `file_name` - the name of the image file.
  - `id` - the image id.
- `annotation["annotations"]` - a list of dictionaries with markup information. Each dictionary stores a description for one polygon from the dataset and must contain the following fields:
  - `image_id` - the index of the image on which the polygon is located.
  - `category_id` - the polygon’s category index.
  - `attributes` - a dict with some additional annotation information. In the `translation` subdict you can find the text translation for the line.
  - `segmentation` - the coordinates of the polygon, a flat list of numbers forming x and y coordinate pairs.
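A minimal sketch of walking the annotation file described above; the exact file path and the shape of the `translation` attribute are assumptions:

```python
import json

with open("annotation.json", encoding="utf-8") as f:
    coco = json.load(f)

# Build lookup tables from the "images" and "categories" dictionaries.
id2file = {img["id"]: img["file_name"] for img in coco["images"]}
id2cat = {cat["id"]: cat["name"] for cat in coco["categories"]}

for ann in coco["annotations"][:5]:
    # Per the card, the line's text lives under attributes["translation"].
    text = ann["attributes"].get("translation")
    print(id2file[ann["image_id"]], id2cat[ann["category_id"]], text)
```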
ai-forever/school_notebooks_EN
[ "task_categories:image-segmentation", "task_categories:object-detection", "source_datasets:original", "language:en", "license:mit", "optical-character-recognition", "text-detection", "ocr", "region:us" ]
2022-09-08T08:31:05+00:00
{"language": ["en"], "license": ["mit"], "source_datasets": ["original"], "task_categories": ["image-segmentation", "object-detection"], "task_ids": [], "tags": ["optical-character-recognition", "text-detection", "ocr"]}
2023-02-09T18:26:07+00:00
[]
[ "en" ]
TAGS #task_categories-image-segmentation #task_categories-object-detection #source_datasets-original #language-English #license-mit #optical-character-recognition #text-detection #ocr #region-us
# School Notebooks Dataset

The images of school notebooks with handwritten notes in English.

The dataset annotation contains end-to-end markup for training detection and OCR models, as well as an end-to-end model that reads text from pages.

## Annotation format

The annotation is in COCO format. The 'URL' should have the following dictionaries:

- 'annotation["categories"]' - a list of dicts with category info (category names and indexes).
- 'annotation["images"]' - a list of dictionaries with a description of images; each dictionary must contain the fields:
  - 'file_name' - the name of the image file.
  - 'id' - the image id.
- 'annotation["annotations"]' - a list of dictionaries with markup information. Each dictionary stores a description for one polygon from the dataset and must contain the following fields:
  - 'image_id' - the index of the image on which the polygon is located.
  - 'category_id' - the polygon’s category index.
  - 'attributes' - a dict with some additional annotation information. In the 'translation' subdict you can find the text translation for the line.
  - 'segmentation' - the coordinates of the polygon, a flat list of numbers forming x and y coordinate pairs.
[ "# School Notebooks Dataset\n\nThe images of school notebooks with handwritten notes in English.\n\nThe dataset annotation contain end-to-end markup for training detection and OCR models, as well as an end-to-end model for reading text from pages.", "## Annotation format\n\nThe annotation is in COCO format. The 'URL' should have the following dictionaries:\n\n- 'annotation[\"categories\"]' - a list of dicts with a categories info (categotiy names and indexes).\n- 'annotation[\"images\"]' - a list of dictionaries with a description of images, each dictionary must contain fields:\n - 'file_name' - name of the image file.\n - 'id' for image id.\n- 'annotation[\"annotations\"]' - a list of dictioraties with a murkup information. Each dictionary stores a description for one polygon from the dataset, and must contain the following fields:\n - 'image_id' - the index of the image on which the polygon is located.\n - 'category_id' - the polygon’s category index.\n - 'attributes' - dict with some additional annotation information. In the 'translation' subdict you can find text translation for the line.\n - 'segmentation' - the coordinates of the polygon, a list of numbers - which are coordinate pairs x and y." ]
[ "TAGS\n#task_categories-image-segmentation #task_categories-object-detection #source_datasets-original #language-English #license-mit #optical-character-recognition #text-detection #ocr #region-us \n", "# School Notebooks Dataset\n\nThe images of school notebooks with handwritten notes in English.\n\nThe dataset annotation contain end-to-end markup for training detection and OCR models, as well as an end-to-end model for reading text from pages.", "## Annotation format\n\nThe annotation is in COCO format. The 'URL' should have the following dictionaries:\n\n- 'annotation[\"categories\"]' - a list of dicts with a categories info (categotiy names and indexes).\n- 'annotation[\"images\"]' - a list of dictionaries with a description of images, each dictionary must contain fields:\n - 'file_name' - name of the image file.\n - 'id' for image id.\n- 'annotation[\"annotations\"]' - a list of dictioraties with a murkup information. Each dictionary stores a description for one polygon from the dataset, and must contain the following fields:\n - 'image_id' - the index of the image on which the polygon is located.\n - 'category_id' - the polygon’s category index.\n - 'attributes' - dict with some additional annotation information. In the 'translation' subdict you can find text translation for the line.\n - 'segmentation' - the coordinates of the polygon, a list of numbers - which are coordinate pairs x and y." ]
360875ac83db1a044fa95d969013eda19d8c2667
Bunny dataset
Anastasia1812/bunny
[ "region:us" ]
2022-09-08T08:41:27+00:00
{}
2022-09-08T08:56:50+00:00
[]
[]
TAGS #region-us
Bunny dataset
[]
[ "TAGS\n#region-us \n" ]
a10cd26104f054dc116a9dbc4a29c34b494eb9ae
# School Notebooks Dataset

The images of school notebooks with handwritten notes in Russian.

The dataset annotation contains end-to-end markup for training detection and OCR models, as well as an end-to-end model that reads text from pages.

## Annotation format

The annotation is in COCO format. The `annotation.json` should have the following dictionaries:

- `annotation["categories"]` - a list of dicts with category info (category names and indexes).
- `annotation["images"]` - a list of dictionaries with a description of images; each dictionary must contain the fields:
  - `file_name` - the name of the image file.
  - `id` - the image id.
- `annotation["annotations"]` - a list of dictionaries with markup information. Each dictionary stores a description for one polygon from the dataset and must contain the following fields:
  - `image_id` - the index of the image on which the polygon is located.
  - `category_id` - the polygon’s category index.
  - `attributes` - a dict with some additional annotation information. In the `translation` subdict you can find the text translation for the line.
  - `segmentation` - the coordinates of the polygon, a flat list of numbers forming x and y coordinate pairs.
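The RU set shares the COCO layout of the EN set, so the same parsing applies; as a complementary sketch, here is one way to derive axis-aligned boxes from the line polygons (e.g., for training a text-line detector). Whether `segmentation` is a flat list, as the card describes, or wrapped in an outer list as in standard COCO, is handled defensively:

```python
import json

with open("annotation.json", encoding="utf-8") as f:
    coco = json.load(f)

def poly_to_bbox(seg):
    # Accept either a flat [x1, y1, x2, y2, ...] list (as described above)
    # or the standard COCO [[x1, y1, ...]] nesting.
    if seg and isinstance(seg[0], list):
        seg = seg[0]
    xs, ys = seg[0::2], seg[1::2]
    return min(xs), min(ys), max(xs), max(ys)

boxes = [(a["image_id"], poly_to_bbox(a["segmentation"]))
         for a in coco["annotations"]]
print(boxes[:3])
```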
ai-forever/school_notebooks_RU
[ "task_categories:image-segmentation", "task_categories:object-detection", "source_datasets:original", "language:ru", "license:mit", "optical-character-recognition", "text-detection", "ocr", "region:us" ]
2022-09-08T09:06:32+00:00
{"language": ["ru"], "license": ["mit"], "source_datasets": ["original"], "task_categories": ["image-segmentation", "object-detection"], "task_ids": [], "tags": ["optical-character-recognition", "text-detection", "ocr"]}
2023-02-09T18:27:24+00:00
[]
[ "ru" ]
TAGS #task_categories-image-segmentation #task_categories-object-detection #source_datasets-original #language-Russian #license-mit #optical-character-recognition #text-detection #ocr #region-us
# School Notebooks Dataset

The images of school notebooks with handwritten notes in Russian.

The dataset annotation contains end-to-end markup for training detection and OCR models, as well as an end-to-end model that reads text from pages.

## Annotation format

The annotation is in COCO format. The 'URL' should have the following dictionaries:

- 'annotation["categories"]' - a list of dicts with category info (category names and indexes).
- 'annotation["images"]' - a list of dictionaries with a description of images; each dictionary must contain the fields:
  - 'file_name' - the name of the image file.
  - 'id' - the image id.
- 'annotation["annotations"]' - a list of dictionaries with markup information. Each dictionary stores a description for one polygon from the dataset and must contain the following fields:
  - 'image_id' - the index of the image on which the polygon is located.
  - 'category_id' - the polygon’s category index.
  - 'attributes' - a dict with some additional annotation information. In the 'translation' subdict you can find the text translation for the line.
  - 'segmentation' - the coordinates of the polygon, a flat list of numbers forming x and y coordinate pairs.
[ "# School Notebooks Dataset\n\nThe images of school notebooks with handwritten notes in Russian.\n\nThe dataset annotation contain end-to-end markup for training detection and OCR models, as well as an end-to-end model for reading text from pages.", "## Annotation format\n\nThe annotation is in COCO format. The 'URL' should have the following dictionaries:\n\n- 'annotation[\"categories\"]' - a list of dicts with a categories info (categotiy names and indexes).\n- 'annotation[\"images\"]' - a list of dictionaries with a description of images, each dictionary must contain fields:\n - 'file_name' - name of the image file.\n - 'id' for image id.\n- 'annotation[\"annotations\"]' - a list of dictioraties with a murkup information. Each dictionary stores a description for one polygon from the dataset, and must contain the following fields:\n - 'image_id' - the index of the image on which the polygon is located.\n - 'category_id' - the polygon’s category index.\n - 'attributes' - dict with some additional annotation information. In the 'translation' subdict you can find text translation for the line.\n - 'segmentation' - the coordinates of the polygon, a list of numbers - which are coordinate pairs x and y." ]
[ "TAGS\n#task_categories-image-segmentation #task_categories-object-detection #source_datasets-original #language-Russian #license-mit #optical-character-recognition #text-detection #ocr #region-us \n", "# School Notebooks Dataset\n\nThe images of school notebooks with handwritten notes in Russian.\n\nThe dataset annotation contain end-to-end markup for training detection and OCR models, as well as an end-to-end model for reading text from pages.", "## Annotation format\n\nThe annotation is in COCO format. The 'URL' should have the following dictionaries:\n\n- 'annotation[\"categories\"]' - a list of dicts with a categories info (categotiy names and indexes).\n- 'annotation[\"images\"]' - a list of dictionaries with a description of images, each dictionary must contain fields:\n - 'file_name' - name of the image file.\n - 'id' for image id.\n- 'annotation[\"annotations\"]' - a list of dictioraties with a murkup information. Each dictionary stores a description for one polygon from the dataset, and must contain the following fields:\n - 'image_id' - the index of the image on which the polygon is located.\n - 'category_id' - the polygon’s category index.\n - 'attributes' - dict with some additional annotation information. In the 'translation' subdict you can find text translation for the line.\n - 'segmentation' - the coordinates of the polygon, a list of numbers - which are coordinate pairs x and y." ]
c7186656e42f3b8660bf4a0e7768d54bb8d9429d
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: lewtun/sagemaker-distilbert-emotion-1 * Dataset: emotion * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-emotion-default-39ecfd-16096203
[ "autotrain", "evaluation", "region:us" ]
2022-09-08T09:09:45+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "lewtun/sagemaker-distilbert-emotion-1", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}}
2022-09-08T09:10:12+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Text Classification * Model: lewtun/sagemaker-distilbert-emotion-1 * Dataset: emotion * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: lewtun/sagemaker-distilbert-emotion-1\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: lewtun/sagemaker-distilbert-emotion-1\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
6d5678654a99a8fd5150bf7523ced793e92a0be6
# Dataset Card for the-reddit-climate-change-dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) ## Dataset Description - **Homepage:** [https://socialgrep.com/datasets](https://socialgrep.com/datasets/the-reddit-climate-change-dataset?utm_source=huggingface&utm_medium=link&utm_campaign=theredditclimatechangedataset) - **Reddit downloader used:** [https://socialgrep.com/exports](https://socialgrep.com/exports?utm_source=huggingface&utm_medium=link&utm_campaign=theredditclimatechangedataset) - **Point of Contact:** [Website](https://socialgrep.com/contact?utm_source=huggingface&utm_medium=link&utm_campaign=theredditclimatechangedataset) ### Dataset Summary All the mentions of climate change on Reddit before Sep 1 2022. ### Languages Mainly English. ## Dataset Structure ### Data Instances A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared. ### Data Fields - 'type': the type of the data point. Can be 'post' or 'comment'. - 'id': the base-36 Reddit ID of the data point. Unique when combined with type. - 'subreddit.id': the base-36 Reddit ID of the data point's host subreddit. Unique. - 'subreddit.name': the human-readable name of the data point's host subreddit. - 'subreddit.nsfw': a boolean marking the data point's host subreddit as NSFW or not. - 'created_utc': a UTC timestamp for the data point. - 'permalink': a reference link to the data point on Reddit. - 'score': score of the data point on Reddit. - 'domain': (Post only) the domain of the data point's link. - 'url': (Post only) the destination of the data point's link, if any. - 'selftext': (Post only) the self-text of the data point, if any. - 'title': (Post only) the title of the post data point. - 'body': (Comment only) the body of the comment data point. - 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis. ## Additional Information ### Licensing Information CC-BY v4.0
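A filtering sketch over the comments file; the CSV file name, the numeric range of the 'sentiment' score, and the chosen threshold are all assumptions:

```python
import pandas as pd

# Posts and comments ship as two separate files; this name is an assumption.
comments = pd.read_csv("the-reddit-climate-change-dataset-comments.csv")

# Keep strongly negative comments from non-NSFW subreddits
# (assuming sentiment is centered on 0, with negative values = negative tone).
negative = comments[(comments["sentiment"] < -0.5) & (~comments["subreddit.nsfw"])]
print(negative[["subreddit.name", "score", "body"]].head())
```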
SocialGrep/the-reddit-climate-change-dataset
[ "annotations_creators:lexyr", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:cc-by-4.0", "region:us" ]
2022-09-08T17:24:14+00:00
{"annotations_creators": ["lexyr"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"]}
2022-09-08T17:24:20+00:00
[]
[ "en" ]
TAGS #annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #region-us
# Dataset Card for the-reddit-climate-change-dataset ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Licensing Information ## Dataset Description - Homepage: URL - Reddit downloader used: URL - Point of Contact: Website ### Dataset Summary All the mentions of climate change on Reddit before Sep 1 2022. ### Languages Mainly English. ## Dataset Structure ### Data Instances A data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared. ### Data Fields - 'type': the type of the data point. Can be 'post' or 'comment'. - 'id': the base-36 Reddit ID of the data point. Unique when combined with type. - 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique. - 'URL': the human-readable name of the data point's host subreddit. - 'URL': a boolean marking the data point's host subreddit as NSFW or not. - 'created_utc': a UTC timestamp for the data point. - 'permalink': a reference link to the data point on Reddit. - 'score': score of the data point on Reddit. - 'domain': (Post only) the domain of the data point's link. - 'url': (Post only) the destination of the data point's link, if any. - 'selftext': (Post only) the self-text of the data point, if any. - 'title': (Post only) the title of the post data point. - 'body': (Comment only) the body of the comment data point. - 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis. ## Additional Information ### Licensing Information CC-BY v4.0
[ "# Dataset Card for the-reddit-climate-change-dataset", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Licensing Information", "## Dataset Description\n\n- Homepage: URL\n- Reddit downloader used: URL\n- Point of Contact: Website", "### Dataset Summary\n\nAll the mentions of climate change on Reddit before Sep 1 2022.", "### Languages\n\nMainly English.", "## Dataset Structure", "### Data Instances\n\nA data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.", "### Data Fields\n\n- 'type': the type of the data point. Can be 'post' or 'comment'.\n- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.\n- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.\n- 'URL': the human-readable name of the data point's host subreddit.\n- 'URL': a boolean marking the data point's host subreddit as NSFW or not.\n- 'created_utc': a UTC timestamp for the data point.\n- 'permalink': a reference link to the data point on Reddit.\n- 'score': score of the data point on Reddit.\n\n- 'domain': (Post only) the domain of the data point's link.\n- 'url': (Post only) the destination of the data point's link, if any.\n- 'selftext': (Post only) the self-text of the data point, if any.\n- 'title': (Post only) the title of the post data point.\n\n- 'body': (Comment only) the body of the comment data point.\n- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.", "## Additional Information", "### Licensing Information\n\nCC-BY v4.0" ]
[ "TAGS\n#annotations_creators-lexyr #language_creators-crowdsourced #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc-by-4.0 #region-us \n", "# Dataset Card for the-reddit-climate-change-dataset", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Licensing Information", "## Dataset Description\n\n- Homepage: URL\n- Reddit downloader used: URL\n- Point of Contact: Website", "### Dataset Summary\n\nAll the mentions of climate change on Reddit before Sep 1 2022.", "### Languages\n\nMainly English.", "## Dataset Structure", "### Data Instances\n\nA data point is a post or a comment. Due to the separate nature of the two, those exist in two different files - even though many fields are shared.", "### Data Fields\n\n- 'type': the type of the data point. Can be 'post' or 'comment'.\n- 'id': the base-36 Reddit ID of the data point. Unique when combined with type.\n- 'URL': the base-36 Reddit ID of the data point's host subreddit. Unique.\n- 'URL': the human-readable name of the data point's host subreddit.\n- 'URL': a boolean marking the data point's host subreddit as NSFW or not.\n- 'created_utc': a UTC timestamp for the data point.\n- 'permalink': a reference link to the data point on Reddit.\n- 'score': score of the data point on Reddit.\n\n- 'domain': (Post only) the domain of the data point's link.\n- 'url': (Post only) the destination of the data point's link, if any.\n- 'selftext': (Post only) the self-text of the data point, if any.\n- 'title': (Post only) the title of the post data point.\n\n- 'body': (Comment only) the body of the comment data point.\n- 'sentiment': (Comment only) the result of an in-house sentiment analysis pipeline. Used for exploratory analysis.", "## Additional Information", "### Licensing Information\n\nCC-BY v4.0" ]
17f24d0e1728d03561905934d6ba0368431d4e42
# Dataset Card for Airbnb Stock Price ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://kaggle.com/datasets/evangower/airbnb-stock-price - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset contains the historical stock price of Airbnb (ticker symbol ABNB), an American company that operates an online marketplace for lodging, primarily homestays for vacation rentals, and tourism activities. Based in San Francisco, California, the platform is accessible via website and mobile app. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset was shared by [@evangower](https://kaggle.com/evangower) ### Licensing Information The license for this dataset is cc0-1.0 ### Citation Information ```bibtex [More Information Needed] ``` ### Contributions [More Information Needed]
nateraw/airbnb-stock-price-new-new
[ "license:cc0-1.0", "region:us" ]
2022-09-08T17:48:04+00:00
{"license": ["cc0-1.0"], "kaggle_id": "evangower/airbnb-stock-price"}
2022-09-08T17:48:08+00:00
[]
[]
TAGS #license-cc0-1.0 #region-us
# Dataset Card for Airbnb Stock Price ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary This contains the historical stock price of Airbnb (ticker symbol ABNB) an American company that operates an online marketplace for lodging, primarily homestays for vacation rentals, and tourism activities. Based in San Francisco, California, the platform is accessible via website and mobile app. ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators This dataset was shared by @evangower ### Licensing Information The license for this dataset is cc0-1.0 ### Contributions
[ "# Dataset Card for Airbnb Stock Price", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nThis contains the historical stock price of Airbnb (ticker symbol ABNB) an American company that operates an online marketplace for lodging, primarily homestays for vacation rentals, and tourism activities. Based in San Francisco, California, the platform is accessible via website and mobile app.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThis dataset was shared by @evangower", "### Licensing Information\n\nThe license for this dataset is cc0-1.0", "### Contributions" ]
[ "TAGS\n#license-cc0-1.0 #region-us \n", "# Dataset Card for Airbnb Stock Price", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nThis contains the historical stock price of Airbnb (ticker symbol ABNB) an American company that operates an online marketplace for lodging, primarily homestays for vacation rentals, and tourism activities. Based in San Francisco, California, the platform is accessible via website and mobile app.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThis dataset was shared by @evangower", "### Licensing Information\n\nThe license for this dataset is cc0-1.0", "### Contributions" ]
2e1bafd99ce03bfe95c2473ecc422bde8dd74ef2
# Dataset Card for Airbnb Stock Price ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://kaggle.com/datasets/evangower/airbnb-stock-price - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset contains the historical stock price of Airbnb (ticker symbol ABNB), an American company that operates an online marketplace for lodging, primarily homestays for vacation rentals, and tourism activities. Based in San Francisco, California, the platform is accessible via website and mobile app. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset was shared by [@evangower](https://kaggle.com/evangower) ### Licensing Information The license for this dataset is cc0-1.0 ### Citation Information ```bibtex [More Information Needed] ``` ### Contributions [More Information Needed]
nateraw/airbnb-stock-price-new-new-new
[ "license:cc0-1.0", "region:us" ]
2022-09-08T17:52:57+00:00
{"license": ["cc0-1.0"], "converted_from": "kaggle", "kaggle_id": "evangower/airbnb-stock-price"}
2022-09-08T17:53:00+00:00
[]
[]
TAGS #license-cc0-1.0 #region-us
# Dataset Card for Airbnb Stock Price ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary This contains the historical stock price of Airbnb (ticker symbol ABNB) an American company that operates an online marketplace for lodging, primarily homestays for vacation rentals, and tourism activities. Based in San Francisco, California, the platform is accessible via website and mobile app. ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators This dataset was shared by @evangower ### Licensing Information The license for this dataset is cc0-1.0 ### Contributions
[ "# Dataset Card for Airbnb Stock Price", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nThis contains the historical stock price of Airbnb (ticker symbol ABNB) an American company that operates an online marketplace for lodging, primarily homestays for vacation rentals, and tourism activities. Based in San Francisco, California, the platform is accessible via website and mobile app.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThis dataset was shared by @evangower", "### Licensing Information\n\nThe license for this dataset is cc0-1.0", "### Contributions" ]
[ "TAGS\n#license-cc0-1.0 #region-us \n", "# Dataset Card for Airbnb Stock Price", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nThis contains the historical stock price of Airbnb (ticker symbol ABNB) an American company that operates an online marketplace for lodging, primarily homestays for vacation rentals, and tourism activities. Based in San Francisco, California, the platform is accessible via website and mobile app.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThis dataset was shared by @evangower", "### Licensing Information\n\nThe license for this dataset is cc0-1.0", "### Contributions" ]
c5c7d736a46f8e0b84448d4a4d7b722f257eaea9
# Dataset Card for Electrical half hourly raw and cleaned datasets for Great Britain from 2008-11-05 ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://zenodo.org/record/6606485 - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary **A journal paper published in Energy Strategy Reviews details the method to create the data: https://www.sciencedirect.com/science/article/pii/S2211467X21001280** 2021-09-09: Version 6.0.0 was created. It now includes data for the North Sea Link (NSL) interconnector from Great Britain to Norway (https://www.northsealink.com). The previous version (5.0.4) should not be used, as there was an error with interconnector data having a static value over the summer of 2021. 2021-05-05: Version 5.0.0 was created. Datetimes are now in ISO 8601 format (with a capital letter 'T' between the date and time) rather than, as previously, with a space (RFC 3339 format), and carry an offset to identify both UTC and local time. MW values are now all saved as integers rather than floats. Elexon data is, as always, from www.elexonportal.co.uk/fuelhh; National Grid data is from https://data.nationalgrideso.com/demand/historic-demand-data. Raw data is now added again for comparison of pre- and post-cleaning, to allow for training of additional cleaning methods. If using Microsoft Excel, the 'T' between the date and time can be removed using the =SUBSTITUTE() command, substituting "T" for a space " ". 2021-03-02: Version 4.0.0 was created. Due to a new interconnector (IFA2 - https://en.wikipedia.org/wiki/IFA-2) being commissioned in Q1 2021, there is an additional column with data from National Grid; this is called 'POWER_NGEM_IFA2_FLOW_MW' in the espeni dataset. In addition, National Grid has dropped the column name 'FRENCH_FLOW' that used to provide the value for the column 'POWER_NGEM_FRENCH_FLOW_MW' in previous espeni versions. However, this has been changed to 'IFA_FLOW' in National Grid's original data, which is now called 'POWER_NGEM_IFA_FLOW_MW' in the espeni dataset. Lastly, the IO14 columns have all been dropped by National Grid, and are unlikely to appear again in future. 2020-12-02: Version 3.0.0 was created. There was a problem with earlier versions' local time format, where the +01:00 value was not carried through into the data properly. This is now addressed: local time now has the format e.g. 2020-03-31 20:00:00+01:00 when in British Summer Time. 2020-10-03: Version 2.0.0 was created, as it looks like National Grid has made a significant change to the methodology underpinning the embedded wind calculations. The wind profile seems similar to previous values, but the values are increasingly greater than those published earlier; the more recent the data, the greater the embedded value. The 'new' values are from https://data.nationalgrideso.com/demand/daily-demand-update from 2013. Previously: raw and cleaned datasets for Great Britain's publicly available electrical data from Elexon (www.elexonportal.co.uk) and National Grid (https://demandforecast.nationalgrid.com/efs_demand_forecast/faces/DataExplorer). Updated versions with more recent data will be uploaded with a differing version number and DOI. All data is released in accordance with Elexon's disclaimer and reservation of rights: https://www.elexon.co.uk/using-this-website/disclaimer-and-reservation-of-rights/ This disclaimer is also felt to cover the data from National Grid, and the parsed data from the Energy Informatics Group at the University of Birmingham. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The class labels in the dataset are in English ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset was shared by Grant Wilson, Noah Godfrey ### Licensing Information The license for this dataset is https://creativecommons.org/licenses/by-nc/4.0/legalcode ### Citation Information ```bibtex @dataset{grant_wilson_2022_6606485, author = {Grant Wilson and Noah Godfrey}, title = {{Electrical half hourly raw and cleaned datasets for Great Britain from 2008-11-05}}, month = jun, year = 2022, note = {{Grant funding as part of Research Councils (UK) EP/L024756/1 - UK Energy Research Centre research programme Phase 3 Grant funding as part of Research Councils (UK) EP/V012053/1 - The Active Building Centre Research Programme (ABC RP)}}, publisher = {Zenodo}, version = {6.0.9}, doi = {10.5281/zenodo.6606485}, url = {https://doi.org/10.5281/zenodo.6606485} } ``` ### Contributions [More Information Needed]
nateraw/espeni-3
[ "license:unknown", "region:us" ]
2022-09-08T17:58:36+00:00
{"license": ["unknown"], "zenodo_id": "6606485", "converted_from": "zenodo"}
2022-09-08T17:58:52+00:00
[]
[]
TAGS #license-unknown #region-us
# Dataset Card for Electrical half hourly raw and cleaned datasets for Great Britain from 2008-11-05 ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary <p><strong>A journal paper published in Energy Strategy Reviews details the method to create the data.</strong></p> <p><strong>URL <p>&nbsp;</p> <p>2021-09-09: Version 6.0.0 was created. Now includes data for the North Sea Link (NSL) interconnector from Great Britain to Norway (URL). The previous version (5.0.4) should not be used - as there was an error with interconnector data having a static value over the summer 2021.</p> <p>&nbsp;</p> <p>2021-05-05: Version 5.0.0 was created. Datetimes now in ISO 8601 format (with capital letter &#39;T&#39; between the date and time) rather than previously with a space (to RFC 3339 format) and with an offset to identify both UTC and localtime. MW values now all saved as integers rather than floats. Elexon data as always from URL National Grid data from&nbsp;URL &nbsp; Raw data now added again for comparison of pre and post cleaning - to allow for training of additional cleaning methods. If using Microsoft Excel, the T between the date and time can be removed using the =SUBSTITUTE() command - and substitute &quot;T&quot; for a space &quot; &quot;</p> <p>_____________________________________________________________________________________________________</p> <p>2021-03-02: Version 4.0.0 was created. Due to a new interconnecter (IFA2 -&nbsp;URL being commissioned in Q1 2021, there is an additional column with data from National Grid - this is called &#39;POWER_NGEM_IFA2_FLOW_MW&#39; in the espeni dataset. In addition, National Grid has dropped&nbsp;the column name &#39;FRENCH_FLOW&#39; that used to provide&nbsp;the value for the column&nbsp;&#39;POWER_NGEM_FRENCH_FLOW_MW&#39; in previous espeni versions. However, this has been changed to &#39;IFA_FLOW&#39; in National Grid&#39;s original data, which is now called &#39;POWER_NGEM_IFA_FLOW_MW&#39; in the espeni dataset. Lastly, the IO14 columns have all been dropped by National Grid - and potentially unlikely to appear again in future.</p> <p>2020-12-02: Version 3.0.0 was created. There was a problem with earlier versions&nbsp;local time format - where the +01:00 value was not carried through into the data properly. Now addressed - therefore - local time now has the format e.g.&nbsp;2020-03-31 20:00:00+01:00 when in British Summer Time.</p> <p>2020-10-03: Version 2.0.0 was created as it looks like National Grid has&nbsp;had a significant change&nbsp;to the methodology underpinning the embedded wind calculations. The wind profile seems similar to previous values, but with an increasing value in comparison&nbsp;to the value published in earlier&nbsp;the greater the embedded value is. 
The &#39;new&#39; values are from&nbsp;URL from 2013.</p> <p>Previously: raw and cleaned datasets for Great Britain&#39;s&nbsp;publicly available electrical data from&nbsp;Elexon (URL) and National Grid (URL Updated versions with more recent data will be uploaded with a differing&nbsp;version number and doi</p> <p>All data is released in accordance with Elexon&#39;s disclaimer and reservation of rights.</p> <p>URL <p>This disclaimer is also felt to cover&nbsp;the data from National Grid, and the parsed data from the Energy Informatics Group at the University of Birmingham.</p> ### Supported Tasks and Leaderboards ### Languages The class labels in the dataset are in English ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators This dataset was shared by Grant Wilson, Noah Godfrey ### Licensing Information The license for this dataset is URL ### Contributions
[ "# Dataset Card for Electrical half hourly raw and cleaned datasets for Great Britain from 2008-11-05", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\n<p><strong>A journal paper published in Energy Strategy Reviews details the method to create the data.</strong></p>\n\n<p><strong>URL\n\n<p>&nbsp;</p>\n\n<p>2021-09-09: Version 6.0.0 was created. Now includes data for the North Sea Link (NSL) interconnector from Great Britain to Norway (URL). The previous version (5.0.4) should not be used - as there was an error with interconnector data having a static value over the summer 2021.</p>\n\n<p>&nbsp;</p>\n\n<p>2021-05-05: Version 5.0.0 was created. Datetimes now in ISO 8601 format (with capital letter &#39;T&#39; between the date and time) rather than previously with a space (to RFC 3339 format) and with an offset to identify both UTC and localtime. MW values now all saved as integers rather than floats. Elexon data as always from URL National Grid data from&nbsp;URL &nbsp; Raw data now added again for comparison of pre and post cleaning - to allow for training of additional cleaning methods. If using Microsoft Excel, the T between the date and time can be removed using the =SUBSTITUTE() command - and substitute &quot;T&quot; for a space &quot; &quot;</p>\n\n<p>_____________________________________________________________________________________________________</p>\n\n<p>2021-03-02: Version 4.0.0 was created. Due to a new interconnecter (IFA2 -&nbsp;URL being commissioned in Q1 2021, there is an additional column with data from National Grid - this is called &#39;POWER_NGEM_IFA2_FLOW_MW&#39; in the espeni dataset. In addition, National Grid has dropped&nbsp;the column name &#39;FRENCH_FLOW&#39; that used to provide&nbsp;the value for the column&nbsp;&#39;POWER_NGEM_FRENCH_FLOW_MW&#39; in previous espeni versions. However, this has been changed to &#39;IFA_FLOW&#39; in National Grid&#39;s original data, which is now called &#39;POWER_NGEM_IFA_FLOW_MW&#39; in the espeni dataset. Lastly, the IO14 columns have all been dropped by National Grid - and potentially unlikely to appear again in future.</p>\n\n<p>2020-12-02: Version 3.0.0 was created. There was a problem with earlier versions&nbsp;local time format - where the +01:00 value was not carried through into the data properly. Now addressed - therefore - local time now has the format e.g.&nbsp;2020-03-31 20:00:00+01:00 when in British Summer Time.</p>\n\n<p>2020-10-03: Version 2.0.0 was created as it looks like National Grid has&nbsp;had a significant change&nbsp;to the methodology underpinning the embedded wind calculations. The wind profile seems similar to previous values, but with an increasing value in comparison&nbsp;to the value published in earlier&nbsp;the greater the embedded value is. 
The &#39;new&#39; values are from&nbsp;URL from 2013.</p>\n\n<p>Previously: raw and cleaned datasets for Great Britain&#39;s&nbsp;publicly available electrical data from&nbsp;Elexon (URL) and National Grid (URL Updated versions with more recent data will be uploaded with a differing&nbsp;version number and doi</p>\n\n<p>All data is released in accordance with Elexon&#39;s disclaimer and reservation of rights.</p>\n\n<p>URL\n\n<p>This disclaimer is also felt to cover&nbsp;the data from National Grid, and the parsed data from the Energy Informatics Group at the University of Birmingham.</p>", "### Supported Tasks and Leaderboards", "### Languages\n\nThe class labels in the dataset are in English", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThis dataset was shared by Grant Wilson, Noah Godfrey", "### Licensing Information\n\nThe license for this dataset is URL", "### Contributions" ]
[ "TAGS\n#license-unknown #region-us \n", "# Dataset Card for Electrical half hourly raw and cleaned datasets for Great Britain from 2008-11-05", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\n<p><strong>A journal paper published in Energy Strategy Reviews details the method to create the data.</strong></p>\n\n<p><strong>URL\n\n<p>&nbsp;</p>\n\n<p>2021-09-09: Version 6.0.0 was created. Now includes data for the North Sea Link (NSL) interconnector from Great Britain to Norway (URL). The previous version (5.0.4) should not be used - as there was an error with interconnector data having a static value over the summer 2021.</p>\n\n<p>&nbsp;</p>\n\n<p>2021-05-05: Version 5.0.0 was created. Datetimes now in ISO 8601 format (with capital letter &#39;T&#39; between the date and time) rather than previously with a space (to RFC 3339 format) and with an offset to identify both UTC and localtime. MW values now all saved as integers rather than floats. Elexon data as always from URL National Grid data from&nbsp;URL &nbsp; Raw data now added again for comparison of pre and post cleaning - to allow for training of additional cleaning methods. If using Microsoft Excel, the T between the date and time can be removed using the =SUBSTITUTE() command - and substitute &quot;T&quot; for a space &quot; &quot;</p>\n\n<p>_____________________________________________________________________________________________________</p>\n\n<p>2021-03-02: Version 4.0.0 was created. Due to a new interconnecter (IFA2 -&nbsp;URL being commissioned in Q1 2021, there is an additional column with data from National Grid - this is called &#39;POWER_NGEM_IFA2_FLOW_MW&#39; in the espeni dataset. In addition, National Grid has dropped&nbsp;the column name &#39;FRENCH_FLOW&#39; that used to provide&nbsp;the value for the column&nbsp;&#39;POWER_NGEM_FRENCH_FLOW_MW&#39; in previous espeni versions. However, this has been changed to &#39;IFA_FLOW&#39; in National Grid&#39;s original data, which is now called &#39;POWER_NGEM_IFA_FLOW_MW&#39; in the espeni dataset. Lastly, the IO14 columns have all been dropped by National Grid - and potentially unlikely to appear again in future.</p>\n\n<p>2020-12-02: Version 3.0.0 was created. There was a problem with earlier versions&nbsp;local time format - where the +01:00 value was not carried through into the data properly. Now addressed - therefore - local time now has the format e.g.&nbsp;2020-03-31 20:00:00+01:00 when in British Summer Time.</p>\n\n<p>2020-10-03: Version 2.0.0 was created as it looks like National Grid has&nbsp;had a significant change&nbsp;to the methodology underpinning the embedded wind calculations. The wind profile seems similar to previous values, but with an increasing value in comparison&nbsp;to the value published in earlier&nbsp;the greater the embedded value is. 
The &#39;new&#39; values are from&nbsp;URL from 2013.</p>\n\n<p>Previously: raw and cleaned datasets for Great Britain&#39;s&nbsp;publicly available electrical data from&nbsp;Elexon (URL) and National Grid (URL Updated versions with more recent data will be uploaded with a differing&nbsp;version number and doi</p>\n\n<p>All data is released in accordance with Elexon&#39;s disclaimer and reservation of rights.</p>\n\n<p>URL\n\n<p>This disclaimer is also felt to cover&nbsp;the data from National Grid, and the parsed data from the Energy Informatics Group at the University of Birmingham.</p>", "### Supported Tasks and Leaderboards", "### Languages\n\nThe class labels in the dataset are in English", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThis dataset was shared by Grant Wilson, Noah Godfrey", "### Licensing Information\n\nThe license for this dataset is URL", "### Contributions" ]
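The version notes in the card above are concrete enough to sketch a loading step: pandas parses the ISO 8601 datetimes (capital 'T', explicit offset) directly, so the Excel =SUBSTITUTE() workaround is unnecessary. The file name and the `ELEXM_utc` column name are assumptions; `POWER_NGEM_IFA2_FLOW_MW` is taken from the card.

```python
import pandas as pd

# "espeni.csv" and the "ELEXM_utc" column name are assumptions; the
# ISO 8601 datetime format and the IFA2 column are documented in the card.
df = pd.read_csv("espeni.csv")
df["ELEXM_utc"] = pd.to_datetime(df["ELEXM_utc"], utc=True)

# Monthly mean flow on the IFA2 interconnector, in MW (stored as integers).
monthly_ifa2 = (
    df.set_index("ELEXM_utc")["POWER_NGEM_IFA2_FLOW_MW"].resample("MS").mean()
)
print(monthly_ifa2.head())
```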
f9846ec84537f7986056d138e0219648639dcdb8
annotations_creators: []
language: []
language_creators:
- other
license:
- afl-3.0
multilinguality: []
pretty_name: bunny images
size_categories:
- unknown
source_datasets:
- original
tags: []
task_categories:
- text-to-image
task_ids: []
Anastasia1812/bunnies
[ "region:us" ]
2022-09-08T18:23:33+00:00
{}
2022-09-08T18:31:08+00:00
[]
[]
TAGS #region-us
annotations_creators: [] language: [] language_creators: - other license: - afl-3.0 multilinguality: [] pretty_name: bunny images size_categories: - unknown source_datasets: - original tags: [] task_categories: - text-to-image task_ids: []
[]
[ "TAGS\n#region-us \n" ]
d0955128fa4c42ef9dd97fd022294a4474cf290e
# Dataset Card for Avocado Prices ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://kaggle.com/datasets/neuromusic/avocado-prices - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary ### Context It is a well-known fact that Millennials LOVE Avocado Toast. It's also a well-known fact that all Millennials live in their parents' basements. Clearly, they aren't buying homes because they are buying too much Avocado Toast! But maybe there's hope... if a Millennial could find a city with cheap avocados, they could live out the Millennial American Dream. ### Content This data was downloaded from the Hass Avocado Board website in May of 2018 & compiled into a single CSV. Here's how the [Hass Avocado Board describes the data on their website][1]: > The table below represents weekly 2018 retail scan data for National retail volume (units) and price. Retail scan data comes directly from retailers’ cash registers based on actual retail sales of Hass avocados. Starting in 2013, the table below reflects an expanded, multi-outlet retail data set. Multi-outlet reporting includes an aggregation of the following channels: grocery, mass, club, drug, dollar and military. The Average Price (of avocados) in the table reflects a per unit (per avocado) cost, even when multiple units (avocados) are sold in bags. The Product Lookup codes (PLU’s) in the table are only for Hass avocados. Other varieties of avocados (e.g. greenskins) are not included in this table. Some relevant columns in the dataset: - `Date` - The date of the observation - `AveragePrice` - the average price of a single avocado - `type` - conventional or organic - `year` - the year - `Region` - the city or region of the observation - `Total Volume` - Total number of avocados sold - `4046` - Total number of avocados with PLU 4046 sold - `4225` - Total number of avocados with PLU 4225 sold - `4770` - Total number of avocados with PLU 4770 sold ### Acknowledgements Many thanks to the Hass Avocado Board for sharing this data!! http://www.hassavocadoboard.com/retail/volume-and-price-data ### Inspiration In which cities can millennials have their avocado toast AND buy a home? Was the Avocadopocalypse of 2017 real? [1]: http://www.hassavocadoboard.com/retail/volume-and-price-data ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset was shared by [@neuromusic](https://kaggle.com/neuromusic) ### Licensing Information The license for this dataset is odbl ### Citation Information ```bibtex [More Information Needed] ``` ### Contributions [More Information Needed]
nateraw/avocado-prices
[ "license:odbl", "region:us" ]
2022-09-08T19:35:54+00:00
{"license": ["odbl"], "converted_from": "kaggle", "kaggle_id": "neuromusic/avocado-prices"}
2022-09-08T19:43:27+00:00
[]
[]
TAGS #license-odbl #region-us
# Dataset Card for Avocado Prices ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary ### Context It is a well known fact that Millenials LOVE Avocado Toast. It's also a well known fact that all Millenials live in their parents basements. Clearly, they aren't buying home because they are buying too much Avocado Toast! But maybe there's hope... if a Millenial could find a city with cheap avocados, they could live out the Millenial American Dream. ### Content This data was downloaded from the Hass Avocado Board website in May of 2018 & compiled into a single CSV. Here's how the [Hass Avocado Board describes the data on their website][1]: &gt; The table below represents weekly 2018 retail scan data for National retail volume (units) and price. Retail scan data comes directly from retailers’ cash registers based on actual retail sales of Hass avocados. Starting in 2013, the table below reflects an expanded, multi-outlet retail data set. Multi-outlet reporting includes an aggregation of the following channels: grocery, mass, club, drug, dollar and military. The Average Price (of avocados) in the table reflects a per unit (per avocado) cost, even when multiple units (avocados) are sold in bags. The Product Lookup codes (PLU’s) in the table are only for Hass avocados. Other varieties of avocados (e.g. greenskins) are not included in this table. Some relevant columns in the dataset: - 'Date' - The date of the observation - 'AveragePrice' - the average price of a single avocado - 'type' - conventional or organic - 'year' - the year - 'Region' - the city or region of the observation - 'Total Volume' - Total number of avocados sold - '4046' - Total number of avocados with PLU 4046 sold - '4225' - Total number of avocados with PLU 4225 sold - '4770' - Total number of avocados with PLU 4770 sold ### Acknowledgements Many thanks to the Hass Avocado Board for sharing this data!! URL ### Inspiration In which cities can millenials have their avocado toast AND buy a home? Was the Avocadopocalypse of 2017 real? [1]: URL ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators This dataset was shared by @neuromusic ### Licensing Information The license for this dataset is odbl ### Contributions
[ "# Dataset Card for Avocado Prices", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary", "### Context\n\nIt is a well known fact that Millenials LOVE Avocado Toast. It's also a well known fact that all Millenials live in their parents basements.\n\nClearly, they aren't buying home because they are buying too much Avocado Toast!\n\nBut maybe there's hope... if a Millenial could find a city with cheap avocados, they could live out the Millenial American Dream.", "### Content\n\nThis data was downloaded from the Hass Avocado Board website in May of 2018 & compiled into a single CSV. Here's how the [Hass Avocado Board describes the data on their website][1]:\n\n&gt; The table below represents weekly 2018 retail scan data for National retail volume (units) and price. Retail scan data comes directly from retailers’ cash registers based on actual retail sales of Hass avocados. Starting in 2013, the table below reflects an expanded, multi-outlet retail data set. Multi-outlet reporting includes an aggregation of the following channels: grocery, mass, club, drug, dollar and military. The Average Price (of avocados) in the table reflects a per unit (per avocado) cost, even when multiple units (avocados) are sold in bags. The Product Lookup codes (PLU’s) in the table are only for Hass avocados. Other varieties of avocados (e.g. greenskins) are not included in this table.\n\nSome relevant columns in the dataset:\n\n- 'Date' - The date of the observation\n- 'AveragePrice' - the average price of a single avocado\n- 'type' - conventional or organic\n- 'year' - the year\n- 'Region' - the city or region of the observation\n- 'Total Volume' - Total number of avocados sold\n- '4046' - Total number of avocados with PLU 4046 sold\n- '4225' - Total number of avocados with PLU 4225 sold\n- '4770' - Total number of avocados with PLU 4770 sold", "### Acknowledgements\n\nMany thanks to the Hass Avocado Board for sharing this data!!\n\nURL", "### Inspiration\n\nIn which cities can millenials have their avocado toast AND buy a home?\n\nWas the Avocadopocalypse of 2017 real?\n\n\n [1]: URL", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThis dataset was shared by @neuromusic", "### Licensing Information\n\nThe license for this dataset is odbl", "### Contributions" ]
[ "TAGS\n#license-odbl #region-us \n", "# Dataset Card for Avocado Prices", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary", "### Context\n\nIt is a well known fact that Millenials LOVE Avocado Toast. It's also a well known fact that all Millenials live in their parents basements.\n\nClearly, they aren't buying home because they are buying too much Avocado Toast!\n\nBut maybe there's hope... if a Millenial could find a city with cheap avocados, they could live out the Millenial American Dream.", "### Content\n\nThis data was downloaded from the Hass Avocado Board website in May of 2018 & compiled into a single CSV. Here's how the [Hass Avocado Board describes the data on their website][1]:\n\n&gt; The table below represents weekly 2018 retail scan data for National retail volume (units) and price. Retail scan data comes directly from retailers’ cash registers based on actual retail sales of Hass avocados. Starting in 2013, the table below reflects an expanded, multi-outlet retail data set. Multi-outlet reporting includes an aggregation of the following channels: grocery, mass, club, drug, dollar and military. The Average Price (of avocados) in the table reflects a per unit (per avocado) cost, even when multiple units (avocados) are sold in bags. The Product Lookup codes (PLU’s) in the table are only for Hass avocados. Other varieties of avocados (e.g. greenskins) are not included in this table.\n\nSome relevant columns in the dataset:\n\n- 'Date' - The date of the observation\n- 'AveragePrice' - the average price of a single avocado\n- 'type' - conventional or organic\n- 'year' - the year\n- 'Region' - the city or region of the observation\n- 'Total Volume' - Total number of avocados sold\n- '4046' - Total number of avocados with PLU 4046 sold\n- '4225' - Total number of avocados with PLU 4225 sold\n- '4770' - Total number of avocados with PLU 4770 sold", "### Acknowledgements\n\nMany thanks to the Hass Avocado Board for sharing this data!!\n\nURL", "### Inspiration\n\nIn which cities can millenials have their avocado toast AND buy a home?\n\nWas the Avocadopocalypse of 2017 real?\n\n\n [1]: URL", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThis dataset was shared by @neuromusic", "### Licensing Information\n\nThe license for this dataset is odbl", "### Contributions" ]
9ee569ca22bab4e5b7addf77abb150463c4030c1
# Dataset Card for Midjourney User Prompts & Generated Images (250k)

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://kaggle.com/datasets/succinctlyai/midjourney-texttoimage
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

General Context
===
[Midjourney](https://midjourney.com) is an independent research lab whose broad mission is to "explore new mediums of thought". In 2022, they launched a text-to-image service that, given a natural language prompt, produces visual depictions that are faithful to the description. Their service is accessible via a public [Discord server](https://discord.com/invite/midjourney), where users interact with a [Midjourney bot](https://midjourney.gitbook.io/docs/#create-your-first-image). When issued a query in natural language, the bot returns four low-resolution images and offers further options like upscaling or re-generating a variation of the original images.

This dataset was obtained by scraping messages from the public Discord server over a period of four weeks (June 20, 2022 - July 17, 2022). The authors have no affiliation with Midjourney and are releasing this data with the sole purpose of enabling research on text-to-image model prompting (see the Sample Use Case section below).

Midjourney's Discord Server
---
Here is what the interaction with the Midjourney bot looks like on Discord:

1. Issuing an initial prompt:
![Screenshot showing how to issue an initial prompt](https://drive.google.com/uc?export=view&id=1k6BuaJNWThCr1x2Ezojx3fAmDIyeZhbp "Result of issuing an initial prompt")

2. Upscaling the bottom-left image:
![Screenshot showing how to request upscaling an image](https://drive.google.com/uc?export=view&id=15Y65Fe0eVKVPK5YOul0ZndLuqo4Lg4xk "Result of upscaling an image")

3. Requesting variations of the bottom-left image:
![Screenshot showing how to request a variation of a generated image](https://drive.google.com/uc?export=view&id=1-9kw69PgM5eIM5n1dir4lQqGCn_hJfOA "Result of requesting a variation of an image")

Dataset Format
===
The dataset was produced by scraping ten public Discord channels in the "general" category (i.e., with no dedicated topic) over four weeks. Filenames follow the pattern `channel-name_yyyy_mm_dd.json`. The `"messages"` field in each JSON file contains a list of [Message](https://discord.com/developers/docs/resources/channel#message-object) objects, one per user query. 
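For illustration, here is a minimal sketch of reading one such file and pulling out each message's prompt and image link. The filename is invented for the example; `content` and `attachments` are the standard fields of the Discord Message object linked above, and the actual extraction utilities live in the companion notebook mentioned below.

```python
import json

# Hypothetical filename following the channel-name_yyyy_mm_dd.json pattern.
with open("general-1_2022_06_20.json") as f:
    data = json.load(f)

# Each Message object carries the user prompt in "content" and any
# generated images under "attachments".
for msg in data["messages"]:
    prompt = msg.get("content", "")
    image_urls = [a["url"] for a in msg.get("attachments", [])]
    if prompt and image_urls:
        print(prompt, "->", image_urls[0])
```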
A message includes information such as the user-issued prompt, a link to the generated image, and other metadata. See [the companion notebook](https://www.kaggle.com/succinctlyai/midjourney-prompt-analysis) with utilities for extracting such information.

| User Prompt | Generated Image URL |
| --- | --- |
| anatomical heart fill with deers, neon, pastel, artstation | https://cdn.discordapp.com/attachments/985204969722486814/989673529102463016/f14d5cb4-aa4d-4060-b017-5ee6c1db42d6_Ko_anatomical_heart_fill_with_deers_neon_pastel_artstation.png |
| anatomical heart fill with jumping running deers, neon, pastel, artstation | https://cdn.discordapp.com/attachments/985204969722486814/989675045439815721/1d7541f2-b659-4a74-86a3-ae211918723c_Ko_anatomical_heart_fill_with_jumping_running_deers_neon_pastel_artstation.png |
| https://s.mj.run/UlkFmVAKfaE cat with many eyes floating in colorful glowing swirling whisps, occult inspired, emerging from the void, shallow depth of field | https://cdn.discordapp.com/attachments/982990243621908480/988957623229501470/6116dc5f-64bb-4afb-ba5f-95128645c247_MissTwistedRose_cat_with_many_eyes_floating_in_colorful_glowing_swirling_whisps_occult_inspired_emerging_from_the_vo.png |

Dataset Stats
===
The dataset contains:
- **268k** messages from 10 public Discord channels collected over 28 days.
- **248k** user-generated prompts and their associated generated images, out of which:
    + 60% are requests for new images (initial or variation requests for a previously-generated image), and
    + 40% are requests for upscaling previously-generated images.

Prompt Analysis
===
Here are the most prominent phrases among the user-generated text prompts:
![word cloud](https://drive.google.com/uc?export=view&id=1J432wrecf2zibDFU5sT3BXFxqmt3PJ-P)

Prompt lengths span from 1 to 60 whitespace-separated tokens, with the mode around 15 tokens:
![prompt lengths](https://drive.google.com/uc?export=view&id=1fFObFvcWwOEGJ3k47G4fzIHZXmxS3RiW)

See [the companion notebook](https://www.kaggle.com/succinctlyai/midjourney-prompt-analysis) for an in-depth analysis of how users control various aspects of the generated images (lighting, resolution, photographic elements, artistic style, etc.).

Sample Use Case
===
One way of leveraging this dataset is to help address the [prompt engineering](https://www.wired.com/story/dalle-art-curation-artificial-intelligence/) problem: artists that use text-to-image models in their work spend a significant amount of time carefully crafting their text prompts. We built an additional model for prompt autocompletion by learning from the queries issued by Midjourney users. [This notebook](https://www.kaggle.com/code/succinctlyai/midjourney-text-prompts-huggingface) shows how to extract the natural language prompts from the Discord messages and create a HuggingFace dataset to be used for training. The processed dataset can be found at [succinctly/midjourney-prompts](https://huggingface.co/datasets/succinctly/midjourney-prompts), and the prompt generator (a GPT-2 model fine-tuned on prompts) is located at [succinctly/text2image-prompt-generator](https://huggingface.co/succinctly/text2image-prompt-generator). 
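As a quick illustration, here is a minimal sketch of querying the prompt generator with the `transformers` library; the model name is the one listed above, while the sampling settings are illustrative choices rather than recommended values.

```python
from transformers import pipeline

# Load the GPT-2 model fine-tuned on Midjourney prompts (listed above).
generator = pipeline("text-generation", model="succinctly/text2image-prompt-generator")

# Autocomplete a short seed phrase into full Midjourney-style prompts.
outputs = generator("anatomical heart", max_length=40, num_return_sequences=3, do_sample=True)
for out in outputs:
    print(out["generated_text"])
```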
Here is how our model can help brainstorm creative prompts and speed up prompt engineering: ![prompt autocomplete model](https://drive.google.com/uc?export=view&id=1JqZ-CaWNpQ4iO0Qcd3b8u_QnBp-Q0PKu) Authors === This project was a collaboration between [Iulia Turc](https://twitter.com/IuliaTurc) and [Gaurav Nemade](https://twitter.com/gaurav_nemade15). We recently left Google Research to work on something new. Feel free to Tweet at us, or follow our journey at [succinctly.ai](https://succinctly.ai). Interesting Finds === Here are some of the generated images that drew our attention: | User Prompt | Generated Image | | --- | --- | | https://s.mj.run/JlwNbH Historic Ensemble of the Potala Palace Lhasa, japanese style painting,trending on artstation, temple, architecture, fiction, sci-fi, underwater city, Atlantis , cyberpunk style, 8k revolution, Aokigahara fall background , dramatic lighting, epic, photorealistic, in his lowest existential moment with high detail, trending on artstation,cinematic light, volumetric shading ,high radiosity , high quality, form shadow, rim lights , concept art of architecture, 3D,hyper deatiled,very high quality,8k,Maxon cinema,visionary,imaginary,realistic,as trending on the imagination of Gustave Doré idea,perspective view,ornate light --w 1920 --h 1024 | ![palace](https://drive.google.com/uc?export=view&id=1xl2Gr1TSWCh0p_8o_wJnQIsO1qxW02Z_) | | a dark night with fog in a metropolis of tomorrow by hugh ferriss:, epic composition, maximum detail, Westworld, Elysium space station, space craft shuttle, star trek enterprise interior, moody, peaceful, hyper detailed, neon lighting, populated, minimalist design, monochromatic, rule of thirds, photorealistic, alien world, concept art, sci-fi, artstation, photorealistic, arch viz , volumetric light moody cinematic epic, 3d render, octane render, trending on artstation, in the style of dylan cole + syd mead + by zaha hadid, zaha hadid architecture + reaction-diffusion + poly-symmetric + parametric modelling, open plan, minimalist design 4k --ar 3:1 | ![metropolis](https://drive.google.com/uc?export=view&id=16A-VtlbSZCaUFiA6CZQzevPgBGyBiXWI) | | https://s.mj.run/qKj8n0 fantasy art, hyperdetailed, panoramic view, foreground is a crowd of ancient Aztec robots are doing street dance battle , main part is middleground is majestic elegant Gundam mecha robot design with black power armor and unsettling ancient Aztec plumes and decorations scary looking with two magical neon swords combat fighting::2 , background is at night with nebula eruption, Rembrandt lighting, global illumination, high details, hyper quality, unreal negine, octane render, arnold render, vray render, photorealistic, 8k --ar 3:1 --no dof,blur,bokeh | ![ancient](https://drive.google.com/uc?export=view&id=1a3jI3eiQwLbulaSS2-l1iGJ6-kokMMvc) | | https://s.mj.run/zMIhrKBDBww in side a Amethyst geode cave, 8K symmetrical portrait, trending in artstation, epic, fantasy, Klimt, Monet, clean brush stroke, realistic highly detailed, wide angle view, 8k post-processing highly detailed, moody lighting rendered by octane engine, artstation,cinematic lighting, intricate details, 8k detail post processing, --no face --w 512 --h 256 | ![cave](https://drive.google.com/uc?export=view&id=1gUx-3drfCBBFha8Hoal4Ly4efDXSrxlB) | | https://s.mj.run/GTuMoq whimsically designed gothic, interior of a baroque cathedral in fire with moths and birds flying, rain inside, with angels, beautiful woman dressed with lace victorian and plague mask, moody light, 8K photgraphy trending on 
shotdeck, cinema lighting, simon stålenhag, hyper realistic octane render, octane render, 4k post processing is very detailed, moody lighting, Maya+V-Ray +metal art+ extremely detailed, beautiful, unreal engine, lovecraft, Big Bang cosmology in LSD+IPAK,4K, beatiful art by Lêon François Comerre, ashley wood, craig mullins, ,outer space view, William-Adolphe Bouguereau, Rosetti --w 1040 --h 2080 | ![gothic](https://drive.google.com/uc?export=view&id=1nmsTEdPEbvDq9SLnyjjw3Pb8Eb-C1WaP) | ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset was shared by [@succinctlyai](https://kaggle.com/succinctlyai) ### Licensing Information The license for this dataset is cc0-1.0 ### Citation Information ```bibtex [More Information Needed] ``` ### Contributions [More Information Needed]
nateraw/midjourney-texttoimage
[ "license:cc0-1.0", "region:us" ]
2022-09-08T19:49:52+00:00
{"license": ["cc0-1.0"], "converted_from": "kaggle", "kaggle_id": "succinctlyai/midjourney-texttoimage"}
2022-09-08T20:14:37+00:00
[]
[]
TAGS #license-cc0-1.0 #region-us
Dataset Card for Midjourney User Prompts & Generated Images (250k)
==================================================================

Table of Contents
-----------------

* Table of Contents
* Dataset Description
	+ Dataset Summary
	+ Supported Tasks and Leaderboards
	+ Languages
* Dataset Structure
	+ Data Instances
	+ Data Fields
	+ Data Splits
* Dataset Creation
	+ Curation Rationale
	+ Source Data
	+ Annotations
	+ Personal and Sensitive Information
* Considerations for Using the Data
	+ Social Impact of Dataset
	+ Discussion of Biases
	+ Other Known Limitations
* Additional Information
	+ Dataset Curators
	+ Licensing Information
	+ Citation Information
	+ Contributions

Dataset Description
-------------------

* Homepage: URL
* Repository:
* Paper:
* Leaderboard:
* Point of Contact:

### Dataset Summary

General Context
===============

Midjourney is an independent research lab whose broad mission is to "explore new mediums of thought". In 2022, they launched a text-to-image service that, given a natural language prompt, produces visual depictions that are faithful to the description. Their service is accessible via a public Discord server, where users interact with a Midjourney bot. When issued a query in natural language, the bot returns four low-resolution images and offers further options like upscaling or re-generating a variation of the original images.

This dataset was obtained by scraping messages from the public Discord server over a period of four weeks (June 20, 2022 - July 17, 2022). The authors have no affiliation with Midjourney and are releasing this data with the sole purpose of enabling research on text-to-image model prompting (see the Sample Use Case section below).

Midjourney's Discord Server
---------------------------

Here is what the interaction with the Midjourney bot looks like on Discord:

1. Issuing an initial prompt:
!Screenshot showing how to issue an initial prompt
2. Upscaling the bottom-left image:
!Screenshot showing how to request upscaling an image
3. Requesting variations of the bottom-left image:
!Screenshot showing how to request a variation of a generated image

Dataset Format
==============

The dataset was produced by scraping ten public Discord channels in the "general" category (i.e., with no dedicated topic) over four weeks. Filenames follow the pattern 'channel-name\_yyyy\_mm\_dd.json'. The '"messages"' field in each JSON file contains a list of Message objects, one per user query. A message includes information such as the user-issued prompt, a link to the generated image, and other metadata. See the companion notebook with utilities for extracting such information.



Dataset Stats
=============

The dataset contains:

* 268k messages from 10 public Discord channels collected over 28 days.
* 248k user-generated prompts and their associated generated images, out of which:
	+ 60% are requests for new images (initial or variation requests for a previously-generated image), and
	+ 40% are requests for upscaling previously-generated images.

Prompt Analysis
===============

Here are the most prominent phrases among the user-generated text prompts:
!word cloud

Prompt lengths span from 1 to 60 whitespace-separated tokens, with the mode around 15 tokens:
!prompt lengths

See the companion notebook for an in-depth analysis of how users control various aspects of the generated images (lighting, resolution, photographic elements, artistic style, etc.). 
Sample Use Case =============== One way of leveraging this dataset is to help address the prompt engineering problem: artists that use text-to-image models in their work spend a significant amount of time carefully crafting their text prompts. We built an additional model for prompt autocompletion by learning from the queries issued by Midjourney users. This notebook shows how to extract the natural language prompts from the Discord messages and create a HuggingFace dataset to be used for training. The processed dataset can be found at succinctly/midjourney-prompts, and the prompt generator (a GPT-2 model fine-tuned on prompts) is located at succinctly/text2image-prompt-generator. Here is how our model can help brainstorm creative prompts and speed up prompt engineering: !prompt autocomplete model Authors ======= This project was a collaboration between Iulia Turc and Gaurav Nemade. We recently left Google Research to work on something new. Feel free to Tweet at us, or follow our journey at URL. Interesting Finds ================= Here are some of the generated images that drew our attention: ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances ### Data Fields ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators This dataset was shared by @succinctlyai ### Licensing Information The license for this dataset is cc0-1.0 ### Contributions
[ "### Dataset Summary\n\n\nGeneral Context\n===============\n\n\nMidjourney is an independent research lab whose broad mission is to \"explore new mediums of thought\". In 2022, they launched a text-to-image service that, given a natural language prompt, produces visual depictions that are faithful to the description. Their service is accessible via a public Discord server, where users interact with a Midjourney bot. When issued a query in natural language, the bot returns four low-resolution images and offers further options like upscaling or re-generating a variation of the original images.\n\n\nThis dataset was obtained by scraping messages from the public Discord server over a period of four weeks (June 20, 2002 - July 17, 2022). The authors have no affiliation with Midjourney and are releasing this data with the sole purpose of enabling research on text-to-image model prompting (see the Sample Use Case section below).\n\n\nMidjourney's Discord Server\n---------------------------\n\n\nHere is what the interaction with the Midjourney bot looks like on Discord:\n\n\n1. Issuing an initial prompt:\n!Screenshot showing how to issue an initial prompt\n2. Upscaling the bottom-left image:\n!Screenshot showing how to request upscaling an image\n3. Requesting variations of the bottom-left image:\n!Screenshot showing how to request a variation of a generated image\n\n\nDataset Format\n==============\n\n\nThe dataset was produced by scraping ten public Discord channels in the \"general\" category (i.e., with no dedicated topic) over four weeks. Filenames follow the pattern 'channel-name\\_yyyy\\_mm\\_dd.json'. The '\"messages\"' field in each JSON file contains a list of Message objects, one per user query. A message includes information such as the user-issued prompt, a link to the generated image, and other metadata. See the companion notebook with utilities for extracting such information.\n\n\n\nDataset Stats\n=============\n\n\nThe dataset contains:\n\n\n* 268k messages from 10 public Discord channel collected over 28 days.\n* 248k user-generated prompts and their associated generated images, out of which:\n\t+ 60% are requests for new images (initial or variation requests for a previously-generated image), and\n\t+ 40% are requests for upscaling previously-generated images.\n\n\nPrompt Analysis\n===============\n\n\nHere are the most prominent phrases among the user-generated text prompts:\n!word cloud\n\n\nPrompt lengths span from 1 to 60 whitespace-separated tokens, with the mode around 15 tokens:\n!prompt lengths\n\n\nSee the the companion notebook for an in-depth analysis of how users control various aspects of the generated images (lighting, resolution, photographic elements, artistic style, etc.).\n\n\nSample Use Case\n===============\n\n\nOne way of leveraging this dataset is to help address the prompt engineering problem: artists that use text-to-image models in their work spend a significant amount of time carefully crafting their text prompts. We built an additional model for prompt autocompletion by learning from the queries issued by Midjourney users. This notebook shows how to extract the natural language prompts from the Discord messages and create a HuggingFace dataset to be used for training. 
The processed dataset can be found at succinctly/midjourney-prompts, and the prompt generator (a GPT-2 model fine-tuned on prompts) is located at succinctly/text2image-prompt-generator.\n\n\nHere is how our model can help brainstorm creative prompts and speed up prompt engineering:\n!prompt autocomplete model\n\n\nAuthors\n=======\n\n\nThis project was a collaboration between Iulia Turc and Gaurav Nemade. We recently left Google Research to work on something new. Feel free to Tweet at us, or follow our journey at URL.\n\n\nInteresting Finds\n=================\n\n\nHere are some of the generated images that drew our attention:", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields", "### Data Splits\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThis dataset was shared by @succinctlyai", "### Licensing Information\n\n\nThe license for this dataset is cc0-1.0", "### Contributions" ]
[ "TAGS\n#license-cc0-1.0 #region-us \n", "### Dataset Summary\n\n\nGeneral Context\n===============\n\n\nMidjourney is an independent research lab whose broad mission is to \"explore new mediums of thought\". In 2022, they launched a text-to-image service that, given a natural language prompt, produces visual depictions that are faithful to the description. Their service is accessible via a public Discord server, where users interact with a Midjourney bot. When issued a query in natural language, the bot returns four low-resolution images and offers further options like upscaling or re-generating a variation of the original images.\n\n\nThis dataset was obtained by scraping messages from the public Discord server over a period of four weeks (June 20, 2002 - July 17, 2022). The authors have no affiliation with Midjourney and are releasing this data with the sole purpose of enabling research on text-to-image model prompting (see the Sample Use Case section below).\n\n\nMidjourney's Discord Server\n---------------------------\n\n\nHere is what the interaction with the Midjourney bot looks like on Discord:\n\n\n1. Issuing an initial prompt:\n!Screenshot showing how to issue an initial prompt\n2. Upscaling the bottom-left image:\n!Screenshot showing how to request upscaling an image\n3. Requesting variations of the bottom-left image:\n!Screenshot showing how to request a variation of a generated image\n\n\nDataset Format\n==============\n\n\nThe dataset was produced by scraping ten public Discord channels in the \"general\" category (i.e., with no dedicated topic) over four weeks. Filenames follow the pattern 'channel-name\\_yyyy\\_mm\\_dd.json'. The '\"messages\"' field in each JSON file contains a list of Message objects, one per user query. A message includes information such as the user-issued prompt, a link to the generated image, and other metadata. See the companion notebook with utilities for extracting such information.\n\n\n\nDataset Stats\n=============\n\n\nThe dataset contains:\n\n\n* 268k messages from 10 public Discord channel collected over 28 days.\n* 248k user-generated prompts and their associated generated images, out of which:\n\t+ 60% are requests for new images (initial or variation requests for a previously-generated image), and\n\t+ 40% are requests for upscaling previously-generated images.\n\n\nPrompt Analysis\n===============\n\n\nHere are the most prominent phrases among the user-generated text prompts:\n!word cloud\n\n\nPrompt lengths span from 1 to 60 whitespace-separated tokens, with the mode around 15 tokens:\n!prompt lengths\n\n\nSee the the companion notebook for an in-depth analysis of how users control various aspects of the generated images (lighting, resolution, photographic elements, artistic style, etc.).\n\n\nSample Use Case\n===============\n\n\nOne way of leveraging this dataset is to help address the prompt engineering problem: artists that use text-to-image models in their work spend a significant amount of time carefully crafting their text prompts. We built an additional model for prompt autocompletion by learning from the queries issued by Midjourney users. This notebook shows how to extract the natural language prompts from the Discord messages and create a HuggingFace dataset to be used for training. 
The processed dataset can be found at succinctly/midjourney-prompts, and the prompt generator (a GPT-2 model fine-tuned on prompts) is located at succinctly/text2image-prompt-generator.\n\n\nHere is how our model can help brainstorm creative prompts and speed up prompt engineering:\n!prompt autocomplete model\n\n\nAuthors\n=======\n\n\nThis project was a collaboration between Iulia Turc and Gaurav Nemade. We recently left Google Research to work on something new. Feel free to Tweet at us, or follow our journey at URL.\n\n\nInteresting Finds\n=================\n\n\nHere are some of the generated images that drew our attention:", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields", "### Data Splits\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThis dataset was shared by @succinctlyai", "### Licensing Information\n\n\nThe license for this dataset is cc0-1.0", "### Contributions" ]
1cea5d99551c5817ca98c404c39b8846f04a3a12
# spanish-tweets

## A big corpus of tweets for pretraining embeddings and language models

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage**: https://github.com/pysentimiento/robertuito
- **Paper**: [RoBERTuito: a pre-trained language model for social media text in Spanish](https://aclanthology.org/2022.lrec-1.785/)
- **Point of Contact:** jmperez (at) dc.uba.ar

### Dataset Summary

A big dataset of (mostly) Spanish tweets for pre-training language models (or other representations).

### Supported Tasks and Leaderboards

Language Modeling

### Languages

Mostly Spanish, but some Portuguese, English, and other languages.

## Dataset Structure

### Data Fields

- *tweet_id*: id of the tweet
- *user_id*: id of the user
- *text*: text from the tweet

## Dataset Creation

The full process of data collection is described in the paper. Here we roughly outline the main points:

- A Spritzer collection uploaded to Archive.org dating from May 2019 was downloaded
- From this, we only kept tweets with language metadata equal to Spanish, and marked the users who posted these messages.
- Then, the tweet timeline from each of these marked users was downloaded.


This corpus consists of 622M tweets from around 432K users.

Please note that we did not filter tweets from other languages, so you might find English, Portuguese, Catalan and other languages in the dataset (around 7–8% of the tweets are not in Spanish)


### Citation Information

```
@inproceedings{perez-etal-2022-robertuito,
    title = "{R}o{BERT}uito: a pre-trained language model for social media text in {S}panish",
    author = "P{\'e}rez, Juan Manuel  and
      Furman, Dami{\'a}n Ariel  and
      Alonso Alemany, Laura  and
      Luque, Franco M.",
    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.lrec-1.785",
    pages = "7235--7243",
    abstract = "Since BERT appeared, Transformer language models and transfer learning have become state-of-the-art for natural language processing tasks. Recently, some works geared towards pre-training specially-crafted models for particular domains, such as scientific papers, medical documents, user-generated texts, among others. These domain-specific models have been shown to improve performance significantly in most tasks; however, for languages other than English, such models are not widely available. In this work, we present RoBERTuito, a pre-trained language model for user-generated text in Spanish, trained on over 500 million tweets. Experiments on a benchmark of tasks involving user-generated text showed that RoBERTuito outperformed other pre-trained language models in Spanish. 
In addition to this, our model has some cross-lingual abilities, achieving top results for English-Spanish tasks of the Linguistic Code-Switching Evaluation benchmark (LinCE) and also competitive performance against monolingual models in English Twitter tasks. To facilitate further research, we make RoBERTuito publicly available at the HuggingFace model hub together with the dataset used to pre-train it.", } ```
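For reference, here is a minimal sketch of loading the corpus with the `datasets` library. Streaming is an illustrative choice to avoid materializing the full corpus (tens of gigabytes) locally; the field names are the ones documented above.

```python
from datasets import load_dataset

# Stream the training split instead of downloading it in full.
ds = load_dataset("pysentimiento/spanish-tweets", split="train", streaming=True)

# Inspect one example; fields are tweet_id, user_id, and text.
for example in ds:
    print(example["tweet_id"], example["text"])
    break
```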
pysentimiento/spanish-tweets
[ "language:es", "region:us" ]
2022-09-08T20:02:38+00:00
{"language": "es", "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "tweet_id", "dtype": "string"}, {"name": "user_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 82649695458, "num_examples": 597433111}, {"name": "test", "num_bytes": 892219251, "num_examples": 6224733}], "download_size": 51737237106, "dataset_size": 83541914709}}
2023-07-13T14:44:41+00:00
[]
[ "es" ]
TAGS #language-Spanish #region-us
# spanish-tweets

## A big corpus of tweets for pretraining embeddings and language models

## Table of Contents
- Table of Contents
- Dataset Description
  - Dataset Summary
  - Supported Tasks and Leaderboards
  - Languages
- Dataset Creation
  - Curation Rationale
  - Source Data
- Additional Information
  - Dataset Curators
  - Licensing Information
  - Citation Information
  - Contributions

## Dataset Description

- Homepage: URL
- Paper: RoBERTuito: a pre-trained language model for social media text in Spanish
- Point of Contact: jmperez (at) URL

### Dataset Summary

A big dataset of (mostly) Spanish tweets for pre-training language models (or other representations).

### Supported Tasks and Leaderboards

Language Modeling

### Languages

Mostly Spanish, but some Portuguese, English, and other languages.

## Dataset Structure

### Data Fields

- *tweet_id*: id of the tweet
- *user_id*: id of the user
- *text*: text from the tweet

## Dataset Creation

The full process of data collection is described in the paper. Here we roughly outline the main points:

- A Spritzer collection uploaded to URL dating from May 2019 was downloaded
- From this, we only kept tweets with language metadata equal to Spanish, and marked the users who posted these messages.
- Then, the tweet timeline from each of these marked users was downloaded.


This corpus consists of 622M tweets from around 432K users.

Please note that we did not filter tweets from other languages, so you might find English, Portuguese, Catalan and other languages in the dataset (around 7–8% of the tweets are not in Spanish)
[ "# spanish-tweets", "## A big corpus of tweets for pretraining embeddings and language models", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Creation\n - Curation Rationale\n - Source Data\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Paper: RoBERTuito: a pre-trained language model for social media text in Spanish\n- Point of Contact: jmperez (at) URL", "### Dataset Summary\n\nA big dataset of (mostly) Spanish tweets for pre-training language models (or other representations).", "### Supported Tasks and Leaderboards\n\nLanguage Modeling", "### Languages\n\nMostly Spanish, but some Portuguese, English, and other languages.", "## Dataset Structure", "### Data Fields\n\n- *tweet_id*: id of the tweet\n- *user_id*: id of the user\n- *text*: text from the tweet", "## Dataset Creation\n\nThe full process of data collection is described in the paper. Here we roughly outline the main points:\n\n- A Spritzer collection uploaded to URL dating from May 2019 was downloaded\n- From this, we only kept tweets with language metadata equal to Spanish, and mark the users who posted these messages.\n- Then, the tweetline from each of these marked users was downloaded.\n\n\nThis corpus consists of 622M tweets from around 432K users.\n\nPlease note that we did not filter tweets from other languages, so you might find English, Portuguese, Catalan and other languages in the dataset (around 7/8% of the tweets are not in Spanish)" ]
[ "TAGS\n#language-Spanish #region-us \n", "# spanish-tweets", "## A big corpus of tweets for pretraining embeddings and language models", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Creation\n - Curation Rationale\n - Source Data\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Paper: RoBERTuito: a pre-trained language model for social media text in Spanish\n- Point of Contact: jmperez (at) URL", "### Dataset Summary\n\nA big dataset of (mostly) Spanish tweets for pre-training language models (or other representations).", "### Supported Tasks and Leaderboards\n\nLanguage Modeling", "### Languages\n\nMostly Spanish, but some Portuguese, English, and other languages.", "## Dataset Structure", "### Data Fields\n\n- *tweet_id*: id of the tweet\n- *user_id*: id of the user\n- *text*: text from the tweet", "## Dataset Creation\n\nThe full process of data collection is described in the paper. Here we roughly outline the main points:\n\n- A Spritzer collection uploaded to URL dating from May 2019 was downloaded\n- From this, we only kept tweets with language metadata equal to Spanish, and mark the users who posted these messages.\n- Then, the tweetline from each of these marked users was downloaded.\n\n\nThis corpus consists of 622M tweets from around 432K users.\n\nPlease note that we did not filter tweets from other languages, so you might find English, Portuguese, Catalan and other languages in the dataset (around 7/8% of the tweets are not in Spanish)" ]
e0c29cfa541e8a082ce6ee1c9bec75d37333a98d
# Dataset Card for Midjourney User Prompts & Generated Images (250k)

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://kaggle.com/datasets/succinctlyai/midjourney-texttoimage
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

General Context
===
[Midjourney](https://midjourney.com) is an independent research lab whose broad mission is to "explore new mediums of thought". In 2022, they launched a text-to-image service that, given a natural language prompt, produces visual depictions that are faithful to the description. Their service is accessible via a public [Discord server](https://discord.com/invite/midjourney), where users interact with a [Midjourney bot](https://midjourney.gitbook.io/docs/#create-your-first-image). When issued a query in natural language, the bot returns four low-resolution images and offers further options like upscaling or re-generating a variation of the original images.

This dataset was obtained by scraping messages from the public Discord server over a period of four weeks (June 20, 2022 - July 17, 2022). The authors have no affiliation with Midjourney and are releasing this data with the sole purpose of enabling research on text-to-image model prompting (see the Sample Use Case section below).

Midjourney's Discord Server
---
Here is what the interaction with the Midjourney bot looks like on Discord:

1. Issuing an initial prompt:
![Screenshot showing how to issue an initial prompt](https://drive.google.com/uc?export=view&id=1k6BuaJNWThCr1x2Ezojx3fAmDIyeZhbp "Result of issuing an initial prompt")

2. Upscaling the bottom-left image:
![Screenshot showing how to request upscaling an image](https://drive.google.com/uc?export=view&id=15Y65Fe0eVKVPK5YOul0ZndLuqo4Lg4xk "Result of upscaling an image")

3. Requesting variations of the bottom-left image:
![Screenshot showing how to request a variation of a generated image](https://drive.google.com/uc?export=view&id=1-9kw69PgM5eIM5n1dir4lQqGCn_hJfOA "Result of requesting a variation of an image")

Dataset Format
===
The dataset was produced by scraping ten public Discord channels in the "general" category (i.e., with no dedicated topic) over four weeks. Filenames follow the pattern `channel-name_yyyy_mm_dd.json`. The `"messages"` field in each JSON file contains a list of [Message](https://discord.com/developers/docs/resources/channel#message-object) objects, one per user query. 
A message includes information such as the user-issued prompt, a link to the generated image, and other metadata. See [the companion notebook](https://www.kaggle.com/succinctlyai/midjourney-prompt-analysis) with utilities for extracting such information.

| User Prompt | Generated Image URL |
| --- | --- |
| anatomical heart fill with deers, neon, pastel, artstation | https://cdn.discordapp.com/attachments/985204969722486814/989673529102463016/f14d5cb4-aa4d-4060-b017-5ee6c1db42d6_Ko_anatomical_heart_fill_with_deers_neon_pastel_artstation.png |
| anatomical heart fill with jumping running deers, neon, pastel, artstation | https://cdn.discordapp.com/attachments/985204969722486814/989675045439815721/1d7541f2-b659-4a74-86a3-ae211918723c_Ko_anatomical_heart_fill_with_jumping_running_deers_neon_pastel_artstation.png |
| https://s.mj.run/UlkFmVAKfaE cat with many eyes floating in colorful glowing swirling whisps, occult inspired, emerging from the void, shallow depth of field | https://cdn.discordapp.com/attachments/982990243621908480/988957623229501470/6116dc5f-64bb-4afb-ba5f-95128645c247_MissTwistedRose_cat_with_many_eyes_floating_in_colorful_glowing_swirling_whisps_occult_inspired_emerging_from_the_vo.png |

Dataset Stats
===
The dataset contains:
- **268k** messages from 10 public Discord channels collected over 28 days.
- **248k** user-generated prompts and their associated generated images, out of which:
    + 60% are requests for new images (initial or variation requests for a previously-generated image), and
    + 40% are requests for upscaling previously-generated images.

Prompt Analysis
===
Here are the most prominent phrases among the user-generated text prompts:
![word cloud](https://drive.google.com/uc?export=view&id=1J432wrecf2zibDFU5sT3BXFxqmt3PJ-P)

Prompt lengths span from 1 to 60 whitespace-separated tokens, with the mode around 15 tokens:
![prompt lengths](https://drive.google.com/uc?export=view&id=1fFObFvcWwOEGJ3k47G4fzIHZXmxS3RiW)

See [the companion notebook](https://www.kaggle.com/succinctlyai/midjourney-prompt-analysis) for an in-depth analysis of how users control various aspects of the generated images (lighting, resolution, photographic elements, artistic style, etc.).

Sample Use Case
===
One way of leveraging this dataset is to help address the [prompt engineering](https://www.wired.com/story/dalle-art-curation-artificial-intelligence/) problem: artists that use text-to-image models in their work spend a significant amount of time carefully crafting their text prompts. We built an additional model for prompt autocompletion by learning from the queries issued by Midjourney users. [This notebook](https://www.kaggle.com/code/succinctlyai/midjourney-text-prompts-huggingface) shows how to extract the natural language prompts from the Discord messages and create a HuggingFace dataset to be used for training. The processed dataset can be found at [succinctly/midjourney-prompts](https://huggingface.co/datasets/succinctly/midjourney-prompts), and the prompt generator (a GPT-2 model fine-tuned on prompts) is located at [succinctly/text2image-prompt-generator](https://huggingface.co/succinctly/text2image-prompt-generator). 
Here is how our model can help brainstorm creative prompts and speed up prompt engineering: ![prompt autocomplete model](https://drive.google.com/uc?export=view&id=1JqZ-CaWNpQ4iO0Qcd3b8u_QnBp-Q0PKu) Authors === This project was a collaboration between [Iulia Turc](https://twitter.com/IuliaTurc) and [Gaurav Nemade](https://twitter.com/gaurav_nemade15). We recently left Google Research to work on something new. Feel free to Tweet at us, or follow our journey at [succinctly.ai](https://succinctly.ai). Interesting Finds === Here are some of the generated images that drew our attention: | User Prompt | Generated Image | | --- | --- | | https://s.mj.run/JlwNbH Historic Ensemble of the Potala Palace Lhasa, japanese style painting,trending on artstation, temple, architecture, fiction, sci-fi, underwater city, Atlantis , cyberpunk style, 8k revolution, Aokigahara fall background , dramatic lighting, epic, photorealistic, in his lowest existential moment with high detail, trending on artstation,cinematic light, volumetric shading ,high radiosity , high quality, form shadow, rim lights , concept art of architecture, 3D,hyper deatiled,very high quality,8k,Maxon cinema,visionary,imaginary,realistic,as trending on the imagination of Gustave Doré idea,perspective view,ornate light --w 1920 --h 1024 | ![palace](https://drive.google.com/uc?export=view&id=1xl2Gr1TSWCh0p_8o_wJnQIsO1qxW02Z_) | | a dark night with fog in a metropolis of tomorrow by hugh ferriss:, epic composition, maximum detail, Westworld, Elysium space station, space craft shuttle, star trek enterprise interior, moody, peaceful, hyper detailed, neon lighting, populated, minimalist design, monochromatic, rule of thirds, photorealistic, alien world, concept art, sci-fi, artstation, photorealistic, arch viz , volumetric light moody cinematic epic, 3d render, octane render, trending on artstation, in the style of dylan cole + syd mead + by zaha hadid, zaha hadid architecture + reaction-diffusion + poly-symmetric + parametric modelling, open plan, minimalist design 4k --ar 3:1 | ![metropolis](https://drive.google.com/uc?export=view&id=16A-VtlbSZCaUFiA6CZQzevPgBGyBiXWI) | | https://s.mj.run/qKj8n0 fantasy art, hyperdetailed, panoramic view, foreground is a crowd of ancient Aztec robots are doing street dance battle , main part is middleground is majestic elegant Gundam mecha robot design with black power armor and unsettling ancient Aztec plumes and decorations scary looking with two magical neon swords combat fighting::2 , background is at night with nebula eruption, Rembrandt lighting, global illumination, high details, hyper quality, unreal negine, octane render, arnold render, vray render, photorealistic, 8k --ar 3:1 --no dof,blur,bokeh | ![ancient](https://drive.google.com/uc?export=view&id=1a3jI3eiQwLbulaSS2-l1iGJ6-kokMMvc) | | https://s.mj.run/zMIhrKBDBww in side a Amethyst geode cave, 8K symmetrical portrait, trending in artstation, epic, fantasy, Klimt, Monet, clean brush stroke, realistic highly detailed, wide angle view, 8k post-processing highly detailed, moody lighting rendered by octane engine, artstation,cinematic lighting, intricate details, 8k detail post processing, --no face --w 512 --h 256 | ![cave](https://drive.google.com/uc?export=view&id=1gUx-3drfCBBFha8Hoal4Ly4efDXSrxlB) | | https://s.mj.run/GTuMoq whimsically designed gothic, interior of a baroque cathedral in fire with moths and birds flying, rain inside, with angels, beautiful woman dressed with lace victorian and plague mask, moody light, 8K photgraphy trending on 
shotdeck, cinema lighting, simon stålenhag, hyper realistic octane render, octane render, 4k post processing is very detailed, moody lighting, Maya+V-Ray +metal art+ extremely detailed, beautiful, unreal engine, lovecraft, Big Bang cosmology in LSD+IPAK,4K, beatiful art by Lêon François Comerre, ashley wood, craig mullins, ,outer space view, William-Adolphe Bouguereau, Rosetti --w 1040 --h 2080 | ![gothic](https://drive.google.com/uc?export=view&id=1nmsTEdPEbvDq9SLnyjjw3Pb8Eb-C1WaP) | ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset was shared by [@succinctlyai](https://kaggle.com/succinctlyai) ### Licensing Information The license for this dataset is cc0-1.0 ### Citation Information ```bibtex [More Information Needed] ``` ### Contributions [More Information Needed]
nateraw/midjourney-texttoimage-new
[ "license:cc0-1.0", "region:us" ]
2022-09-08T20:17:45+00:00
{"license": ["cc0-1.0"], "converted_from": "kaggle", "kaggle_id": "succinctlyai/midjourney-texttoimage"}
2022-09-08T20:22:05+00:00
[]
[]
TAGS #license-cc0-1.0 #region-us
Dataset Card for Midjourney User Prompts & Generated Images (250k)
==================================================================

Table of Contents
-----------------

* Table of Contents
* Dataset Description
	+ Dataset Summary
	+ Supported Tasks and Leaderboards
	+ Languages
* Dataset Structure
	+ Data Instances
	+ Data Fields
	+ Data Splits
* Dataset Creation
	+ Curation Rationale
	+ Source Data
	+ Annotations
	+ Personal and Sensitive Information
* Considerations for Using the Data
	+ Social Impact of Dataset
	+ Discussion of Biases
	+ Other Known Limitations
* Additional Information
	+ Dataset Curators
	+ Licensing Information
	+ Citation Information
	+ Contributions

Dataset Description
-------------------

* Homepage: URL
* Repository:
* Paper:
* Leaderboard:
* Point of Contact:

### Dataset Summary

General Context
===============

Midjourney is an independent research lab whose broad mission is to "explore new mediums of thought". In 2022, they launched a text-to-image service that, given a natural language prompt, produces visual depictions that are faithful to the description. Their service is accessible via a public Discord server, where users interact with a Midjourney bot. When issued a query in natural language, the bot returns four low-resolution images and offers further options like upscaling or re-generating a variation of the original images.

This dataset was obtained by scraping messages from the public Discord server over a period of four weeks (June 20, 2022 - July 17, 2022). The authors have no affiliation with Midjourney and are releasing this data with the sole purpose of enabling research on text-to-image model prompting (see the Sample Use Case section below).

Midjourney's Discord Server
---------------------------

Here is what the interaction with the Midjourney bot looks like on Discord:

1. Issuing an initial prompt:
!Screenshot showing how to issue an initial prompt
2. Upscaling the bottom-left image:
!Screenshot showing how to request upscaling an image
3. Requesting variations of the bottom-left image:
!Screenshot showing how to request a variation of a generated image

Dataset Format
==============

The dataset was produced by scraping ten public Discord channels in the "general" category (i.e., with no dedicated topic) over four weeks. Filenames follow the pattern 'channel-name\_yyyy\_mm\_dd.json'. The '"messages"' field in each JSON file contains a list of Message objects, one per user query. A message includes information such as the user-issued prompt, a link to the generated image, and other metadata. See the companion notebook with utilities for extracting such information.



Dataset Stats
=============

The dataset contains:

* 268k messages from 10 public Discord channels collected over 28 days.
* 248k user-generated prompts and their associated generated images, out of which:
	+ 60% are requests for new images (initial or variation requests for a previously-generated image), and
	+ 40% are requests for upscaling previously-generated images.

Prompt Analysis
===============

Here are the most prominent phrases among the user-generated text prompts:
!word cloud

Prompt lengths span from 1 to 60 whitespace-separated tokens, with the mode around 15 tokens:
!prompt lengths

See the companion notebook for an in-depth analysis of how users control various aspects of the generated images (lighting, resolution, photographic elements, artistic style, etc.). 
Sample Use Case =============== One way of leveraging this dataset is to help address the prompt engineering problem: artists that use text-to-image models in their work spend a significant amount of time carefully crafting their text prompts. We built an additional model for prompt autocompletion by learning from the queries issued by Midjourney users. This notebook shows how to extract the natural language prompts from the Discord messages and create a HuggingFace dataset to be used for training. The processed dataset can be found at succinctly/midjourney-prompts, and the prompt generator (a GPT-2 model fine-tuned on prompts) is located at succinctly/text2image-prompt-generator. Here is how our model can help brainstorm creative prompts and speed up prompt engineering: !prompt autocomplete model Authors ======= This project was a collaboration between Iulia Turc and Gaurav Nemade. We recently left Google Research to work on something new. Feel free to Tweet at us, or follow our journey at URL. Interesting Finds ================= Here are some of the generated images that drew our attention: ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances ### Data Fields ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators This dataset was shared by @succinctlyai ### Licensing Information The license for this dataset is cc0-1.0 ### Contributions
[ "### Dataset Summary\n\n\nGeneral Context\n===============\n\n\nMidjourney is an independent research lab whose broad mission is to \"explore new mediums of thought\". In 2022, they launched a text-to-image service that, given a natural language prompt, produces visual depictions that are faithful to the description. Their service is accessible via a public Discord server, where users interact with a Midjourney bot. When issued a query in natural language, the bot returns four low-resolution images and offers further options like upscaling or re-generating a variation of the original images.\n\n\nThis dataset was obtained by scraping messages from the public Discord server over a period of four weeks (June 20, 2002 - July 17, 2022). The authors have no affiliation with Midjourney and are releasing this data with the sole purpose of enabling research on text-to-image model prompting (see the Sample Use Case section below).\n\n\nMidjourney's Discord Server\n---------------------------\n\n\nHere is what the interaction with the Midjourney bot looks like on Discord:\n\n\n1. Issuing an initial prompt:\n!Screenshot showing how to issue an initial prompt\n2. Upscaling the bottom-left image:\n!Screenshot showing how to request upscaling an image\n3. Requesting variations of the bottom-left image:\n!Screenshot showing how to request a variation of a generated image\n\n\nDataset Format\n==============\n\n\nThe dataset was produced by scraping ten public Discord channels in the \"general\" category (i.e., with no dedicated topic) over four weeks. Filenames follow the pattern 'channel-name\\_yyyy\\_mm\\_dd.json'. The '\"messages\"' field in each JSON file contains a list of Message objects, one per user query. A message includes information such as the user-issued prompt, a link to the generated image, and other metadata. See the companion notebook with utilities for extracting such information.\n\n\n\nDataset Stats\n=============\n\n\nThe dataset contains:\n\n\n* 268k messages from 10 public Discord channel collected over 28 days.\n* 248k user-generated prompts and their associated generated images, out of which:\n\t+ 60% are requests for new images (initial or variation requests for a previously-generated image), and\n\t+ 40% are requests for upscaling previously-generated images.\n\n\nPrompt Analysis\n===============\n\n\nHere are the most prominent phrases among the user-generated text prompts:\n!word cloud\n\n\nPrompt lengths span from 1 to 60 whitespace-separated tokens, with the mode around 15 tokens:\n!prompt lengths\n\n\nSee the the companion notebook for an in-depth analysis of how users control various aspects of the generated images (lighting, resolution, photographic elements, artistic style, etc.).\n\n\nSample Use Case\n===============\n\n\nOne way of leveraging this dataset is to help address the prompt engineering problem: artists that use text-to-image models in their work spend a significant amount of time carefully crafting their text prompts. We built an additional model for prompt autocompletion by learning from the queries issued by Midjourney users. This notebook shows how to extract the natural language prompts from the Discord messages and create a HuggingFace dataset to be used for training. 
The processed dataset can be found at succinctly/midjourney-prompts, and the prompt generator (a GPT-2 model fine-tuned on prompts) is located at succinctly/text2image-prompt-generator.\n\n\nHere is how our model can help brainstorm creative prompts and speed up prompt engineering:\n!prompt autocomplete model\n\n\nAuthors\n=======\n\n\nThis project was a collaboration between Iulia Turc and Gaurav Nemade. We recently left Google Research to work on something new. Feel free to Tweet at us, or follow our journey at URL.\n\n\nInteresting Finds\n=================\n\n\nHere are some of the generated images that drew our attention:", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields", "### Data Splits\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThis dataset was shared by @succinctlyai", "### Licensing Information\n\n\nThe license for this dataset is cc0-1.0", "### Contributions" ]
[ "TAGS\n#license-cc0-1.0 #region-us \n", "### Dataset Summary\n\n\nGeneral Context\n===============\n\n\nMidjourney is an independent research lab whose broad mission is to \"explore new mediums of thought\". In 2022, they launched a text-to-image service that, given a natural language prompt, produces visual depictions that are faithful to the description. Their service is accessible via a public Discord server, where users interact with a Midjourney bot. When issued a query in natural language, the bot returns four low-resolution images and offers further options like upscaling or re-generating a variation of the original images.\n\n\nThis dataset was obtained by scraping messages from the public Discord server over a period of four weeks (June 20, 2002 - July 17, 2022). The authors have no affiliation with Midjourney and are releasing this data with the sole purpose of enabling research on text-to-image model prompting (see the Sample Use Case section below).\n\n\nMidjourney's Discord Server\n---------------------------\n\n\nHere is what the interaction with the Midjourney bot looks like on Discord:\n\n\n1. Issuing an initial prompt:\n!Screenshot showing how to issue an initial prompt\n2. Upscaling the bottom-left image:\n!Screenshot showing how to request upscaling an image\n3. Requesting variations of the bottom-left image:\n!Screenshot showing how to request a variation of a generated image\n\n\nDataset Format\n==============\n\n\nThe dataset was produced by scraping ten public Discord channels in the \"general\" category (i.e., with no dedicated topic) over four weeks. Filenames follow the pattern 'channel-name\\_yyyy\\_mm\\_dd.json'. The '\"messages\"' field in each JSON file contains a list of Message objects, one per user query. A message includes information such as the user-issued prompt, a link to the generated image, and other metadata. See the companion notebook with utilities for extracting such information.\n\n\n\nDataset Stats\n=============\n\n\nThe dataset contains:\n\n\n* 268k messages from 10 public Discord channel collected over 28 days.\n* 248k user-generated prompts and their associated generated images, out of which:\n\t+ 60% are requests for new images (initial or variation requests for a previously-generated image), and\n\t+ 40% are requests for upscaling previously-generated images.\n\n\nPrompt Analysis\n===============\n\n\nHere are the most prominent phrases among the user-generated text prompts:\n!word cloud\n\n\nPrompt lengths span from 1 to 60 whitespace-separated tokens, with the mode around 15 tokens:\n!prompt lengths\n\n\nSee the the companion notebook for an in-depth analysis of how users control various aspects of the generated images (lighting, resolution, photographic elements, artistic style, etc.).\n\n\nSample Use Case\n===============\n\n\nOne way of leveraging this dataset is to help address the prompt engineering problem: artists that use text-to-image models in their work spend a significant amount of time carefully crafting their text prompts. We built an additional model for prompt autocompletion by learning from the queries issued by Midjourney users. This notebook shows how to extract the natural language prompts from the Discord messages and create a HuggingFace dataset to be used for training. 
The processed dataset can be found at succinctly/midjourney-prompts, and the prompt generator (a GPT-2 model fine-tuned on prompts) is located at succinctly/text2image-prompt-generator.\n\n\nHere is how our model can help brainstorm creative prompts and speed up prompt engineering:\n!prompt autocomplete model\n\n\nAuthors\n=======\n\n\nThis project was a collaboration between Iulia Turc and Gaurav Nemade. We recently left Google Research to work on something new. Feel free to Tweet at us, or follow our journey at URL.\n\n\nInteresting Finds\n=================\n\n\nHere are some of the generated images that drew our attention:", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "### Data Fields", "### Data Splits\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nThis dataset was shared by @succinctlyai", "### Licensing Information\n\n\nThe license for this dataset is cc0-1.0", "### Contributions" ]
6d108e64c8f43f95c0893b67ca7a5bb2bb9904b3
# Dataset Card for Prescription-based prediction ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://kaggle.com/datasets/roamresearch/prescriptionbasedprediction - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This is the dataset used in the Roam blog post [Prescription-based prediction](http://roamanalytics.com/2016/09/13/prescription-based-prediction/). It is derived from a variety of US open health datasets, but the bulk of the data points come from the [Medicare Part D](https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/Medicare-Provider-Charge-Data/Part-D-Prescriber.html) dataset and the [National Provider Identifier](https://npiregistry.cms.hhs.gov) dataset. The prescription vector for each doctor tells a rich story about that doctor's attributes, including specialty, gender, age, and region. There are 239,930 doctors in the dataset. The file is in JSONL format (one JSON record per line): <pre> { 'provider_variables': { 'brand_name_rx_count': int, 'gender': 'M' or 'F', 'generic_rx_count': int, 'region': 'South' or 'MidWest' or 'Northeast' or 'West', 'settlement_type': 'non-urban' or 'urban' 'specialty': str 'years_practicing': int }, 'npi': str, 'cms_prescription_counts': { `drug_name`: int, `drug_name`: int, ... } } </pre> The brand/generic classifications behind `brand_name_rx_count` and `generic_rx_count` are defined heuristically. For more details, see [the blog post](http://roamanalytics.com/2016/09/13/prescription-based-prediction/) or go directly to [the associated code](https://github.com/roaminsight/roamresearch/tree/master/BlogPosts/Prescription_based_prediction). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset was shared by [@roamresearch](https://kaggle.com/roamresearch) ### Licensing Information The license for this dataset is cc-by-nc-sa-4.0 ### Citation Information ```bibtex [More Information Needed] ``` ### Contributions [More Information Needed]
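Because the record layout is spelled out in the summary above, a minimal reading sketch is straightforward. The field names below are copied from that schema; the filename is a placeholder for wherever the JSONL download lands.

```python
import json
from collections import Counter

specialties = Counter()
brand_total = generic_total = 0

# Placeholder path; substitute the actual location of the JSONL download.
with open("prescription_based_prediction.jsonl", encoding="utf-8") as f:
    for line in f:  # one JSON record per line
        record = json.loads(line)
        provider = record["provider_variables"]
        specialties[provider["specialty"]] += 1
        brand_total += provider["brand_name_rx_count"]
        generic_total += provider["generic_rx_count"]

print(specialties.most_common(5))
print(f"brand: {brand_total:,}  generic: {generic_total:,}")
```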
nateraw/prescriptionbasedprediction
[ "license:cc-by-nc-sa-4.0", "region:us" ]
2022-09-08T20:40:40+00:00
{"license": ["cc-by-nc-sa-4.0"], "converted_from": "kaggle", "kaggle_id": "roamresearch/prescriptionbasedprediction"}
2022-09-08T20:40:53+00:00
[]
[]
TAGS #license-cc-by-nc-sa-4.0 #region-us
# Dataset Card for Prescription-based prediction ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary This is the dataset used in the Roam blog post Prescription-based prediction. It is derived from a variety of US open health datasets, but the bulk of the data points come from the Medicare Part D dataset and the National Provider Identifier dataset. The prescription vector for each doctor tells a rich story about that doctor's attributes, including specialty, gender, age, and region. There are 239,930 doctors in the dataset. The file is in JSONL format (one JSON record per line): <pre> { 'provider_variables': { 'brand_name_rx_count': int, 'gender': 'M' or 'F', 'generic_rx_count': int, 'region': 'South' or 'MidWest' or 'Northeast' or 'West', 'settlement_type': 'non-urban' or 'urban' 'specialty': str 'years_practicing': int }, 'npi': str, 'cms_prescription_counts': { 'drug_name': int, 'drug_name': int, ... } } </pre> The brand/generic classifications behind 'brand_name_rx_count' and 'generic_rx_count' are defined heuristically. For more details, see the blog post or go directly to the associated code. ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators This dataset was shared by @roamresearch ### Licensing Information The license for this dataset is cc-by-nc-sa-4.0 ### Contributions
[ "# Dataset Card for Prescription-based prediction", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nThis is the dataset used in the Roam blog post Prescription-based prediction. It is derived from a variety of US open health datasets, but the bulk of the data points come from the Medicare Part D dataset and the National Provider Identifier dataset.\n\nThe prescription vector for each doctor tells a rich story about that doctor's attributes, including specialty, gender, age, and region. There are 239,930 doctors in the dataset.\n\nThe file is in JSONL format (one JSON record per line):\n\n<pre>\n{\n 'provider_variables': \n {\n 'brand_name_rx_count': int,\n 'gender': 'M' or 'F',\n 'generic_rx_count': int,\n 'region': 'South' or 'MidWest' or 'Northeast' or 'West',\n 'settlement_type': 'non-urban' or 'urban'\n 'specialty': str\n 'years_practicing': int\n },\n 'npi': str,\n 'cms_prescription_counts':\n {\n 'drug_name': int, \n 'drug_name': int, \n ...\n }\n}\n</pre>\n\nThe brand/generic classifications behind 'brand_name_rx_count' and 'generic_rx_count' are defined heuristically.\nFor more details, see the blog post or go directly to the associated code.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThis dataset was shared by @roamresearch", "### Licensing Information\n\nThe license for this dataset is cc-by-nc-sa-4.0", "### Contributions" ]
[ "TAGS\n#license-cc-by-nc-sa-4.0 #region-us \n", "# Dataset Card for Prescription-based prediction", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nThis is the dataset used in the Roam blog post Prescription-based prediction. It is derived from a variety of US open health datasets, but the bulk of the data points come from the Medicare Part D dataset and the National Provider Identifier dataset.\n\nThe prescription vector for each doctor tells a rich story about that doctor's attributes, including specialty, gender, age, and region. There are 239,930 doctors in the dataset.\n\nThe file is in JSONL format (one JSON record per line):\n\n<pre>\n{\n 'provider_variables': \n {\n 'brand_name_rx_count': int,\n 'gender': 'M' or 'F',\n 'generic_rx_count': int,\n 'region': 'South' or 'MidWest' or 'Northeast' or 'West',\n 'settlement_type': 'non-urban' or 'urban'\n 'specialty': str\n 'years_practicing': int\n },\n 'npi': str,\n 'cms_prescription_counts':\n {\n 'drug_name': int, \n 'drug_name': int, \n ...\n }\n}\n</pre>\n\nThe brand/generic classifications behind 'brand_name_rx_count' and 'generic_rx_count' are defined heuristically.\nFor more details, see the blog post or go directly to the associated code.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThis dataset was shared by @roamresearch", "### Licensing Information\n\nThe license for this dataset is cc-by-nc-sa-4.0", "### Contributions" ]
6bba8e2773773739878a9e5ab1d8e10b8733260f
# Dataset Card for World Happiness Report

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://kaggle.com/datasets/unsdsn/world-happiness
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

### Context 

The World Happiness Report is a landmark survey of the state of global happiness. The first report was published in 2012, the second in 2013, the third in 2015, and the fourth in the 2016 Update. The World Happiness Report 2017, which ranks 155 countries by their happiness levels, was released at the United Nations at an event celebrating International Day of Happiness on March 20th. The report continues to gain global recognition as governments, organizations and civil society increasingly use happiness indicators to inform their policy-making decisions. Leading experts across fields – economics, psychology, survey analysis, national statistics, health, public policy and more – describe how measurements of well-being can be used effectively to assess the progress of nations. The reports review the state of happiness in the world today and show how the new science of happiness explains personal and national variations in happiness.

### Content

The happiness scores and rankings use data from the Gallup World Poll. The scores are based on answers to the main life evaluation question asked in the poll. This question, known as the Cantril ladder, asks respondents to think of a ladder with the best possible life for them being a 10 and the worst possible life being a 0 and to rate their own current lives on that scale. The scores are from nationally representative samples for the years 2013-2016 and use the Gallup weights to make the estimates representative. The columns following the happiness score estimate the extent to which each of six factors – economic production, social support, life expectancy, freedom, absence of corruption, and generosity – contribute to making life evaluations higher in each country than they are in Dystopia, a hypothetical country that has values equal to the world’s lowest national averages for each of the six factors. They have no impact on the total score reported for each country, but they do explain why some countries rank higher than others.

### Inspiration

What countries or regions rank the highest in overall happiness and each of the six factors contributing to happiness? How did country ranks or scores change between the 2015 and 2016 as well as the 2016 and 2017 reports? 
Did any country experience a significant increase or decrease in happiness?

**What is Dystopia?**

Dystopia is an imaginary country that has the world’s least-happy people. The purpose in establishing Dystopia is to have a benchmark against which all countries can be favorably compared (no country performs more poorly than Dystopia) in terms of each of the six key variables, thus allowing each sub-bar to be of positive width. The lowest scores observed for the six key variables, therefore, characterize Dystopia. Since life would be very unpleasant in a country with the world’s lowest incomes, lowest life expectancy, lowest generosity, most corruption, least freedom and least social support, it is referred to as “Dystopia,” in contrast to Utopia.

**What are the residuals?**

The residuals, or unexplained components, differ for each country, reflecting the extent to which the six variables either over- or under-explain average 2014-2016 life evaluations. These residuals have an average value of approximately zero over the whole set of countries. Figure 2.2 shows the average residual for each country when the equation in Table 2.1 is applied to average 2014- 2016 data for the six variables in that country. We combine these residuals with the estimate for life evaluations in Dystopia so that the combined bar will always have positive values. As can be seen in Figure 2.2, although some life evaluation residuals are quite large, occasionally exceeding one point on the scale from 0 to 10, they are always much smaller than the calculated value in Dystopia, where the average life is rated at 1.85 on the 0 to 10 scale.

**What do the columns succeeding the Happiness Score(like Family, Generosity, etc.) describe?**

The following columns: GDP per Capita, Family, Life Expectancy, Freedom, Generosity, Trust Government Corruption describe the extent to which these factors contribute in evaluating the happiness in each country. The Dystopia Residual metric actually is the Dystopia Happiness Score(1.85) + the Residual value or the unexplained value for each country as stated in the previous answer. If you add all these factors up, you get the happiness score so it might be unreliable to model them to predict Happiness Scores.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

This dataset was shared by [@unsdsn](https://kaggle.com/unsdsn)

### Licensing Information

The license for this dataset is cc0-1.0

### Citation Information

```bibtex
[More Information Needed]
```

### Contributions

[More Information Needed]
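To make the "add all these factors up" remark concrete, the sketch below reconstructs the happiness score from the factor columns and compares it to the reported value. The column spellings are an assumption taken from the 2017 CSV in this dataset (the 2015 and 2016 files name them differently), and the file path is a placeholder.

```python
import pandas as pd

# Assumed column spellings from the 2017 release; earlier years differ.
FACTORS = [
    "Economy..GDP.per.Capita.",
    "Family",
    "Health..Life.Expectancy.",
    "Freedom",
    "Generosity",
    "Trust..Government.Corruption.",
    "Dystopia.Residual",
]

df = pd.read_csv("2017.csv")  # placeholder path to the yearly file
reconstructed = df[FACTORS].sum(axis=1)

# The six factors plus the Dystopia Residual should sum back to the
# reported score, up to small rounding differences.
print((reconstructed - df["Happiness.Score"]).abs().max())
```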
nateraw/world-happiness
[ "license:cc0-1.0", "region:us" ]
2022-09-08T20:51:07+00:00
{"license": ["cc0-1.0"], "converted_from": "kaggle", "kaggle_id": "unsdsn/world-happiness"}
2022-09-08T20:51:15+00:00
[]
[]
TAGS #license-cc0-1.0 #region-us
# Dataset Card for World Happiness Report

## Table of Contents
- Table of Contents
- Dataset Description
  - Dataset Summary
  - Supported Tasks and Leaderboards
  - Languages
- Dataset Structure
  - Data Instances
  - Data Fields
  - Data Splits
- Dataset Creation
  - Curation Rationale
  - Source Data
  - Annotations
  - Personal and Sensitive Information
- Considerations for Using the Data
  - Social Impact of Dataset
  - Discussion of Biases
  - Other Known Limitations
- Additional Information
  - Dataset Curators
  - Licensing Information
  - Citation Information
  - Contributions

## Dataset Description

- Homepage: URL
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:

### Dataset Summary

### Context 

The World Happiness Report is a landmark survey of the state of global happiness. The first report was published in 2012, the second in 2013, the third in 2015, and the fourth in the 2016 Update. The World Happiness Report 2017, which ranks 155 countries by their happiness levels, was released at the United Nations at an event celebrating International Day of Happiness on March 20th. The report continues to gain global recognition as governments, organizations and civil society increasingly use happiness indicators to inform their policy-making decisions. Leading experts across fields – economics, psychology, survey analysis, national statistics, health, public policy and more – describe how measurements of well-being can be used effectively to assess the progress of nations. The reports review the state of happiness in the world today and show how the new science of happiness explains personal and national variations in happiness.

### Content

The happiness scores and rankings use data from the Gallup World Poll. The scores are based on answers to the main life evaluation question asked in the poll. This question, known as the Cantril ladder, asks respondents to think of a ladder with the best possible life for them being a 10 and the worst possible life being a 0 and to rate their own current lives on that scale. The scores are from nationally representative samples for the years 2013-2016 and use the Gallup weights to make the estimates representative. The columns following the happiness score estimate the extent to which each of six factors – economic production, social support, life expectancy, freedom, absence of corruption, and generosity – contribute to making life evaluations higher in each country than they are in Dystopia, a hypothetical country that has values equal to the world’s lowest national averages for each of the six factors. They have no impact on the total score reported for each country, but they do explain why some countries rank higher than others.

### Inspiration

What countries or regions rank the highest in overall happiness and each of the six factors contributing to happiness? How did country ranks or scores change between the 2015 and 2016 as well as the 2016 and 2017 reports? Did any country experience a significant increase or decrease in happiness?

What is Dystopia?

Dystopia is an imaginary country that has the world’s least-happy people. The purpose in establishing Dystopia is to have a benchmark against which all countries can be favorably compared (no country performs more poorly than Dystopia) in terms of each of the six key variables, thus allowing each sub-bar to be of positive width. The lowest scores observed for the six key variables, therefore, characterize Dystopia. 
Since life would be very unpleasant in a country with the world’s lowest incomes, lowest life expectancy, lowest generosity, most corruption, least freedom and least social support, it is referred to as “Dystopia,” in contrast to Utopia.

What are the residuals?

The residuals, or unexplained components, differ for each country, reflecting the extent to which the six variables either over- or under-explain average 2014-2016 life evaluations. These residuals have an average value of approximately zero over the whole set of countries. Figure 2.2 shows the average residual for each country when the equation in Table 2.1 is applied to average 2014- 2016 data for the six variables in that country. We combine these residuals with the estimate for life evaluations in Dystopia so that the combined bar will always have positive values. As can be seen in Figure 2.2, although some life evaluation residuals are quite large, occasionally exceeding one point on the scale from 0 to 10, they are always much smaller than the calculated value in Dystopia, where the average life is rated at 1.85 on the 0 to 10 scale.

What do the columns succeeding the Happiness Score(like Family, Generosity, etc.) describe?

The following columns: GDP per Capita, Family, Life Expectancy, Freedom, Generosity, Trust Government Corruption describe the extent to which these factors contribute in evaluating the happiness in each country. 
The Dystopia Residual metric actually is the Dystopia Happiness Score(1.85) + the Residual value or the unexplained value for each country as stated in the previous answer.

If you add all these factors up, you get the happiness score so it might be unreliable to model them to predict Happiness Scores.

### Supported Tasks and Leaderboards

### Languages

## Dataset Structure

### Data Instances

### Data Fields

### Data Splits

## Dataset Creation

### Curation Rationale

### Source Data

#### Initial Data Collection and Normalization

#### Who are the source language producers?

### Annotations

#### Annotation process

#### Who are the annotators?

### Personal and Sensitive Information

## Considerations for Using the Data

### Social Impact of Dataset

### Discussion of Biases

### Other Known Limitations

## Additional Information

### Dataset Curators

This dataset was shared by @unsdsn

### Licensing Information

The license for this dataset is cc0-1.0

### Contributions
[ "# Dataset Card for World Happiness Report", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary", "### Context \n\nThe World Happiness Report is a landmark survey of the state of global happiness. The first report was published in 2012, the second in 2013, the third in 2015, and the fourth in the 2016 Update. The World Happiness 2017, which ranks 155 countries by their happiness levels, was released at the United Nations at an event celebrating International Day of Happiness on March 20th. The report continues to gain global recognition as governments, organizations and civil society increasingly use happiness indicators to inform their policy-making decisions. Leading experts across fields – economics, psychology, survey analysis, national statistics, health, public policy and more – describe how measurements of well-being can be used effectively to assess the progress of nations. The reports review the state of happiness in the world today and show how the new science of happiness explains personal and national variations in happiness.", "### Content\n\nThe happiness scores and rankings use data from the Gallup World Poll. The scores are based on answers to the main life evaluation question asked in the poll. This question, known as the Cantril ladder, asks respondents to think of a ladder with the best possible life for them being a 10 and the worst possible life being a 0 and to rate their own current lives on that scale. The scores are from nationally representative samples for the years 2013-2016 and use the Gallup weights to make the estimates representative. The columns following the happiness score estimate the extent to which each of six factors – economic production, social support, life expectancy, freedom, absence of corruption, and generosity – contribute to making life evaluations higher in each country than they are in Dystopia, a hypothetical country that has values equal to the world’s lowest national averages for each of the six factors. They have no impact on the total score reported for each country, but they do explain why some countries rank higher than others.", "### Inspiration\n\nWhat countries or regions rank the highest in overall happiness and each of the six factors contributing to happiness? How did country ranks or scores change between the 2015 and 2016 as well as the 2016 and 2017 reports? Did any country experience a significant increase or decrease in happiness?\n\nWhat is Dystopia?\n\nDystopia is an imaginary country that has the world’s least-happy people. The purpose in establishing Dystopia is to have a benchmark against which all countries can be favorably compared (no country performs more poorly than Dystopia) in terms of each of the six key variables, thus allowing each sub-bar to be of positive width. The lowest scores observed for the six key variables, therefore, characterize Dystopia. 
Since life would be very unpleasant in a country with the world’s lowest incomes, lowest life expectancy, lowest generosity, most corruption, least freedom and least social support, it is referred to as “Dystopia,” in contrast to Utopia.\n\nWhat are the residuals?\n\nThe residuals, or unexplained components, differ for each country, reflecting the extent to which the six variables either over- or under-explain average 2014-2016 life evaluations. These residuals have an average value of approximately zero over the whole set of countries. Figure 2.2 shows the average residual for each country when the equation in Table 2.1 is applied to average 2014- 2016 data for the six variables in that country. We combine these residuals with the estimate for life evaluations in Dystopia so that the combined bar will always have positive values. As can be seen in Figure 2.2, although some life evaluation residuals are quite large, occasionally exceeding one point on the scale from 0 to 10, they are always much smaller than the calculated value in Dystopia, where the average life is rated at 1.85 on the 0 to 10 scale.\n\nWhat do the columns succeeding the Happiness Score(like Family, Generosity, etc.) describe?\n\nThe following columns: GDP per Capita, Family, Life Expectancy, Freedom, Generosity, Trust Government Corruption describe the extent to which these factors contribute in evaluating the happiness in each country. \nThe Dystopia Residual metric actually is the Dystopia Happiness Score(1.85) + the Residual value or the unexplained value for each country as stated in the previous answer.\n\nIf you add all these factors up, you get the happiness score so it might be unreliable to model them to predict Happiness Scores.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThis dataset was shared by @roamresearch", "### Licensing Information\n\nThe license for this dataset is cc0-1.0", "### Contributions" ]
[ "TAGS\n#license-cc0-1.0 #region-us \n", "# Dataset Card for World Happiness Report", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary", "### Context \n\nThe World Happiness Report is a landmark survey of the state of global happiness. The first report was published in 2012, the second in 2013, the third in 2015, and the fourth in the 2016 Update. The World Happiness 2017, which ranks 155 countries by their happiness levels, was released at the United Nations at an event celebrating International Day of Happiness on March 20th. The report continues to gain global recognition as governments, organizations and civil society increasingly use happiness indicators to inform their policy-making decisions. Leading experts across fields – economics, psychology, survey analysis, national statistics, health, public policy and more – describe how measurements of well-being can be used effectively to assess the progress of nations. The reports review the state of happiness in the world today and show how the new science of happiness explains personal and national variations in happiness.", "### Content\n\nThe happiness scores and rankings use data from the Gallup World Poll. The scores are based on answers to the main life evaluation question asked in the poll. This question, known as the Cantril ladder, asks respondents to think of a ladder with the best possible life for them being a 10 and the worst possible life being a 0 and to rate their own current lives on that scale. The scores are from nationally representative samples for the years 2013-2016 and use the Gallup weights to make the estimates representative. The columns following the happiness score estimate the extent to which each of six factors – economic production, social support, life expectancy, freedom, absence of corruption, and generosity – contribute to making life evaluations higher in each country than they are in Dystopia, a hypothetical country that has values equal to the world’s lowest national averages for each of the six factors. They have no impact on the total score reported for each country, but they do explain why some countries rank higher than others.", "### Inspiration\n\nWhat countries or regions rank the highest in overall happiness and each of the six factors contributing to happiness? How did country ranks or scores change between the 2015 and 2016 as well as the 2016 and 2017 reports? Did any country experience a significant increase or decrease in happiness?\n\nWhat is Dystopia?\n\nDystopia is an imaginary country that has the world’s least-happy people. The purpose in establishing Dystopia is to have a benchmark against which all countries can be favorably compared (no country performs more poorly than Dystopia) in terms of each of the six key variables, thus allowing each sub-bar to be of positive width. 
The lowest scores observed for the six key variables, therefore, characterize Dystopia. Since life would be very unpleasant in a country with the world’s lowest incomes, lowest life expectancy, lowest generosity, most corruption, least freedom and least social support, it is referred to as “Dystopia,” in contrast to Utopia.\n\nWhat are the residuals?\n\nThe residuals, or unexplained components, differ for each country, reflecting the extent to which the six variables either over- or under-explain average 2014-2016 life evaluations. These residuals have an average value of approximately zero over the whole set of countries. Figure 2.2 shows the average residual for each country when the equation in Table 2.1 is applied to average 2014- 2016 data for the six variables in that country. We combine these residuals with the estimate for life evaluations in Dystopia so that the combined bar will always have positive values. As can be seen in Figure 2.2, although some life evaluation residuals are quite large, occasionally exceeding one point on the scale from 0 to 10, they are always much smaller than the calculated value in Dystopia, where the average life is rated at 1.85 on the 0 to 10 scale.\n\nWhat do the columns succeeding the Happiness Score(like Family, Generosity, etc.) describe?\n\nThe following columns: GDP per Capita, Family, Life Expectancy, Freedom, Generosity, Trust Government Corruption describe the extent to which these factors contribute in evaluating the happiness in each country. \nThe Dystopia Residual metric actually is the Dystopia Happiness Score(1.85) + the Residual value or the unexplained value for each country as stated in the previous answer.\n\nIf you add all these factors up, you get the happiness score so it might be unreliable to model them to predict Happiness Scores.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThis dataset was shared by @unsdsn", "### Licensing Information\n\nThe license for this dataset is cc0-1.0", "### Contributions" ]
c614e40ca0c9a5b6ba8553754158652a1156f694
stable-diffusion-discord-prompts

All messages from dreambot in all dream-[1-50] channels of the stable-diffusion Discord.

Source: https://github.com/bartman081523/stable-diffusion-discord-prompts
neuralworm/stable-diffusion-discord-prompts
[ "region:us" ]
2022-09-09T02:32:22+00:00
{}
2022-09-15T02:52:04+00:00
[]
[]
TAGS #region-us
stable-diffusion-discord-prompts

All messages from dreambot in all dream-[1-50] channels of the stable-diffusion Discord.

Source: URL
[]
[ "TAGS\n#region-us \n" ]
00d53922bad2faab09916b1b83c6be5bf6bd9e96
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-base-16384-book-summary * Dataset: launch/gov_report * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-cd8e90-16116209
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T02:37:13+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-book-summary", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T08:47:59+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-base-16384-book-summary * Dataset: launch/gov_report * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @nonchalant-nagavalli for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-book-summary\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-book-summary\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
0d656ce2d05249f8bc06a3048a577ce1cb9eb4b7
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: Blaise-g/longt5_tglobal_large_sumpubmed * Dataset: launch/gov_report * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-cd8e90-16116210
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T02:37:17+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "Blaise-g/longt5_tglobal_large_sumpubmed", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T03:44:55+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: Blaise-g/longt5_tglobal_large_sumpubmed * Dataset: launch/gov_report * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @nonchalant-nagavalli for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Blaise-g/longt5_tglobal_large_sumpubmed\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Blaise-g/longt5_tglobal_large_sumpubmed\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
2554e99bf5d02a551aebe4b0d2fb9276e7ebc8c5
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13 * Dataset: launch/gov_report * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-cd8e90-16116211
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T02:37:21+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T20:07:42+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13 * Dataset: launch/gov_report * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @nonchalant-nagavalli for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
02b9c6352eba657cc3bade52d89764a539b711f9
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2 * Dataset: launch/gov_report * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-cd8e90-16116212
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T02:37:29+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T03:54:17+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2 * Dataset: launch/gov_report * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @nonchalant-nagavalli for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
0e900883ed246d6237128ebd68ff98e0e1caf78f
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP * Dataset: launch/gov_report * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-cd8e90-16116213
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T02:37:33+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T17:13:00+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP * Dataset: launch/gov_report * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @nonchalant-nagavalli for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
2bf032fc8926b7e424852caef15844242b4888fc
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP11
* Dataset: launch/gov_report
* Config: plain_text
* Split: validation

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-cd8e90-16116214
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T02:37:38+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP11", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T18:49:27+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by AutoTrain for the following task and dataset:

* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP11
* Dataset: launch/gov_report
* Config: plain_text
* Split: validation

To run new evaluation jobs, visit Hugging Face's automatic model evaluator.

## Contributions

Thanks to @nonchalant-nagavalli for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP11\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP11\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
a757cd2381b43a4b03146acdfe34722d8968ba78
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Summarization
* Model: Blaise-g/longt5_tglobal_large_baseline_sumpubmed_nolenpen
* Dataset: launch/gov_report
* Config: plain_text
* Split: validation

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-cd8e90-16116215
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T02:37:44+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "Blaise-g/longt5_tglobal_large_baseline_sumpubmed_nolenpen", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T04:26:50+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by AutoTrain for the following task and dataset:

* Task: Summarization
* Model: Blaise-g/longt5_tglobal_large_baseline_sumpubmed_nolenpen
* Dataset: launch/gov_report
* Config: plain_text
* Split: validation

To run new evaluation jobs, visit Hugging Face's automatic model evaluator.

## Contributions

Thanks to @nonchalant-nagavalli for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Blaise-g/longt5_tglobal_large_baseline_sumpubmed_nolenpen\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Blaise-g/longt5_tglobal_large_baseline_sumpubmed_nolenpen\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
3c6e630b83d5ad560f90b0cee9027ec8f754a59e
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Summarization
* Model: Blaise-g/longt5_tglobal_large_scitldr
* Dataset: launch/gov_report
* Config: plain_text
* Split: validation

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-cd8e90-16116216
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T02:37:50+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "Blaise-g/longt5_tglobal_large_scitldr", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T04:44:46+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by AutoTrain for the following task and dataset:

* Task: Summarization
* Model: Blaise-g/longt5_tglobal_large_scitldr
* Dataset: launch/gov_report
* Config: plain_text
* Split: validation

To run new evaluation jobs, visit Hugging Face's automatic model evaluator.

## Contributions

Thanks to @nonchalant-nagavalli for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Blaise-g/longt5_tglobal_large_scitldr\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Blaise-g/longt5_tglobal_large_scitldr\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
72bd968d199b079c4a66863ab4844def3e05042c
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Summarization
* Model: Blaise-g/longt5_tglobal_large_explanatory_baseline_scitldr
* Dataset: launch/gov_report
* Config: plain_text
* Split: validation

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-cd8e90-16116217
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T02:37:56+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "Blaise-g/longt5_tglobal_large_explanatory_baseline_scitldr", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T03:13:14+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by AutoTrain for the following task and dataset:

* Task: Summarization
* Model: Blaise-g/longt5_tglobal_large_explanatory_baseline_scitldr
* Dataset: launch/gov_report
* Config: plain_text
* Split: validation

To run new evaluation jobs, visit Hugging Face's automatic model evaluator.

## Contributions

Thanks to @nonchalant-nagavalli for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Blaise-g/longt5_tglobal_large_explanatory_baseline_scitldr\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Blaise-g/longt5_tglobal_large_explanatory_baseline_scitldr\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
3ba67d037d51a119698f136ecf0592d88a5ac6e8
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Summarization
* Model: facebook/bart-large-cnn
* Dataset: launch/gov_report
* Config: plain_text
* Split: validation

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-7b7f8a-16126218
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T03:17:42+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "facebook/bart-large-cnn", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T03:23:02+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by AutoTrain for the following task and dataset:

* Task: Summarization
* Model: facebook/bart-large-cnn
* Dataset: launch/gov_report
* Config: plain_text
* Split: validation

To run new evaluation jobs, visit Hugging Face's automatic model evaluator.

## Contributions

Thanks to @nonchalant-nagavalli for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: facebook/bart-large-cnn\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: facebook/bart-large-cnn\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
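Unlike the long-input models in this batch, facebook/bart-large-cnn has a 1024-token encoder, so a local reproduction has to truncate each government report before summarizing it. A hedged sketch with the `transformers` summarization pipeline; the generation settings are illustrative assumptions rather than the evaluator's recorded configuration:

```python
# Sketch: summarize a few gov_report documents with BART.
# max_length/min_length mirror bart-large-cnn's defaults but are assumptions
# here; the AutoTrain evaluator's actual generation parameters are not shown.
from datasets import load_dataset
from transformers import pipeline

ds = load_dataset("launch/gov_report", "plain_text", split="validation")
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

for doc in ds["document"][:3]:
    # BART's encoder caps out at 1024 tokens, so long reports get truncated.
    out = summarizer(doc, truncation=True, max_length=142, min_length=56)
    print(out[0]["summary_text"])
```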
079bc3a029f12d1565725a76f2d83fd93be783a4
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Summarization
* Model: google/bigbird-pegasus-large-arxiv
* Dataset: launch/gov_report
* Config: plain_text
* Split: validation

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-7b7f8a-16126219
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T03:17:47+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "google/bigbird-pegasus-large-arxiv", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T03:51:42+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by AutoTrain for the following task and dataset:

* Task: Summarization
* Model: google/bigbird-pegasus-large-arxiv
* Dataset: launch/gov_report
* Config: plain_text
* Split: validation

To run new evaluation jobs, visit Hugging Face's automatic model evaluator.

## Contributions

Thanks to @nonchalant-nagavalli for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/bigbird-pegasus-large-arxiv\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/bigbird-pegasus-large-arxiv\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
19a6f6c5483163f19b0ddc4e922da5abc3b52e14
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Summarization
* Model: google/bigbird-pegasus-large-bigpatent
* Dataset: launch/gov_report
* Config: plain_text
* Split: validation

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-7b7f8a-16126220
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T03:17:52+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "google/bigbird-pegasus-large-bigpatent", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T03:50:59+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by AutoTrain for the following task and dataset:

* Task: Summarization
* Model: google/bigbird-pegasus-large-bigpatent
* Dataset: launch/gov_report
* Config: plain_text
* Split: validation

To run new evaluation jobs, visit Hugging Face's automatic model evaluator.

## Contributions

Thanks to @nonchalant-nagavalli for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/bigbird-pegasus-large-bigpatent\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/bigbird-pegasus-large-bigpatent\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
d74ce7aa783f47c3bb17f0259d7fee1f6a89d0e9
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Summarization
* Model: google/bigbird-pegasus-large-pubmed
* Dataset: launch/gov_report
* Config: plain_text
* Split: validation

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-7b7f8a-16126221
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T03:17:58+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "google/bigbird-pegasus-large-pubmed", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T03:51:31+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by AutoTrain for the following task and dataset:

* Task: Summarization
* Model: google/bigbird-pegasus-large-pubmed
* Dataset: launch/gov_report
* Config: plain_text
* Split: validation

To run new evaluation jobs, visit Hugging Face's automatic model evaluator.

## Contributions

Thanks to @nonchalant-nagavalli for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/bigbird-pegasus-large-pubmed\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/bigbird-pegasus-large-pubmed\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
ab7cb615535d508799f224e10906d556ab4cfcb0
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Summarization
* Model: pszemraj/bigbird-pegasus-large-K-booksum
* Dataset: launch/gov_report
* Config: plain_text
* Split: validation

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-7b7f8a-16126222
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T03:18:04+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "pszemraj/bigbird-pegasus-large-K-booksum", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T16:51:54+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by AutoTrain for the following task and dataset:

* Task: Summarization
* Model: pszemraj/bigbird-pegasus-large-K-booksum
* Dataset: launch/gov_report
* Config: plain_text
* Split: validation

To run new evaluation jobs, visit Hugging Face's automatic model evaluator.

## Contributions

Thanks to @nonchalant-nagavalli for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/bigbird-pegasus-large-K-booksum\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/bigbird-pegasus-large-K-booksum\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
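Every metadata block in these records requests `bertscore` as the metric. A minimal sketch of computing it with the `evaluate` library, assuming its default English scoring model; the strings below are placeholders, not evaluator outputs:

```python
# Sketch: score generated summaries against references with BERTScore.
# `lang="en"` selects the library's default English scoring model.
import evaluate

bertscore = evaluate.load("bertscore")

predictions = ["The report reviews federal spending on broadband programs."]
references = ["This report examines federal broadband programs and spending."]

results = bertscore.compute(predictions=predictions,
                            references=references,
                            lang="en")
print(results["precision"], results["recall"], results["f1"])
```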
3e25f0d8068ff5f9a904d9afce7c4a6e9744fe10
# Dataset Card for 100 Richest People In World

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://kaggle.com/datasets/tarundalal/100-richest-people-in-world
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

This dataset contains the list of the top 100 richest people in the world.

Column information:

- Name - the person's name
- NetWorth - the person's net worth
- Age - the person's age
- Country - the country the person belongs to
- Source - the information source
- Industry - the person's domain of expertise

### Join our Community

<a href="https://discord.com/invite/kxZYxdTKp6">
<img src="https://discord.com/api/guilds/939520548726272010/widget.png?style=banner1"></a>

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

[More Information Needed]

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

This dataset was shared by [@tarundalal](https://kaggle.com/tarundalal)

### Licensing Information

The license for this dataset is cc0-1.0

### Citation Information

```bibtex
[More Information Needed]
```

### Contributions

[More Information Needed]
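Given the column schema described above, a first inspection of the table could look like the sketch below; the repository id is the one this record is published under, while the `train` split name is an assumption about how the Kaggle conversion was uploaded:

```python
# Sketch: inspect the converted Kaggle table with the `datasets` library.
# The `train` split name is an assumption about the conversion layout.
from datasets import load_dataset

ds = load_dataset("nateraw/100-richest-people-in-world", split="train")

df = ds.to_pandas()
print(df.columns.tolist())  # expected: Name, NetWorth, Age, Country, Source, Industry
print(df.head())
print(df["Country"].value_counts().head())  # which countries dominate the list
```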
nateraw/100-richest-people-in-world
[ "license:cc0-1.0", "region:us" ]
2022-09-09T04:10:55+00:00
{"license": ["cc0-1.0"], "converted_from": "kaggle", "kaggle_id": "tarundalal/100-richest-people-in-world"}
2022-09-09T04:10:59+00:00
[]
[]
TAGS #license-cc0-1.0 #region-us
# Dataset Card for 100 Richest People In World

## Table of Contents
- Table of Contents
- Dataset Description
  - Dataset Summary
  - Supported Tasks and Leaderboards
  - Languages
- Dataset Structure
  - Data Instances
  - Data Fields
  - Data Splits
- Dataset Creation
  - Curation Rationale
  - Source Data
  - Annotations
  - Personal and Sensitive Information
- Considerations for Using the Data
  - Social Impact of Dataset
  - Discussion of Biases
  - Other Known Limitations
- Additional Information
  - Dataset Curators
  - Licensing Information
  - Citation Information
  - Contributions

## Dataset Description

- Homepage: URL
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:

### Dataset Summary

This dataset contains the list of the top 100 richest people in the world.

Column information:

- Name - the person's name
- NetWorth - the person's net worth
- Age - the person's age
- Country - the country the person belongs to
- Source - the information source
- Industry - the person's domain of expertise

### Join our Community

<a href="URL
<img src="URL

### Supported Tasks and Leaderboards

### Languages

## Dataset Structure

### Data Instances

### Data Fields

### Data Splits

## Dataset Creation

### Curation Rationale

### Source Data

#### Initial Data Collection and Normalization

#### Who are the source language producers?

### Annotations

#### Annotation process

#### Who are the annotators?

### Personal and Sensitive Information

## Considerations for Using the Data

### Social Impact of Dataset

### Discussion of Biases

### Other Known Limitations

## Additional Information

### Dataset Curators

This dataset was shared by @tarundalal

### Licensing Information

The license for this dataset is cc0-1.0

### Contributions
[ "# Dataset Card for 100 Richest People In World", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nThis dataset contains the list of Top 100 Richest People in the World\n\nColumn Information:- \n\n- Name - Person Name\n- NetWorth - His/Her Networth\n- Age - Person Age\n- Country - The country person belongs to\n- Source - Information Source\n- Industry - Expertise Domain", "### Join our Community\n<a href=\"URL\n<img src=\"URL", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThis dataset was shared by @tarundalal", "### Licensing Information\n\nThe license for this dataset is cc0-1.0", "### Contributions" ]
[ "TAGS\n#license-cc0-1.0 #region-us \n", "# Dataset Card for 100 Richest People In World", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nThis dataset contains the list of Top 100 Richest People in the World\n\nColumn Information:- \n\n- Name - Person Name\n- NetWorth - His/Her Networth\n- Age - Person Age\n- Country - The country person belongs to\n- Source - Information Source\n- Industry - Expertise Domain", "### Join our Community\n<a href=\"URL\n<img src=\"URL", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThis dataset was shared by @tarundalal", "### Licensing Information\n\nThe license for this dataset is cc0-1.0", "### Contributions" ]
168fbd6f0754738d7166d357c6b02790752fc251
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-book-summary
* Dataset: launch/gov_report
* Config: plain_text
* Split: test

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-2fa37c-16136223
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T05:27:25+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-book-summary", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T11:39:24+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by AutoTrain for the following task and dataset:

* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-book-summary
* Dataset: launch/gov_report
* Config: plain_text
* Split: test

To run new evaluation jobs, visit Hugging Face's automatic model evaluator.

## Contributions

Thanks to @nonchalant-nagavalli for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-book-summary\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-book-summary\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
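The `16384` in this model's name is its intended input length in tokens, which is what lets it read an entire government report without chunking. A sketch of direct generation with `transformers`; the generation parameters are illustrative assumptions:

```python
# Sketch: summarize one full-length report with a 16k-token LongT5 model.
# num_beams and max_new_tokens are illustrative, not the evaluator's settings.
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "pszemraj/long-t5-tglobal-base-16384-book-summary"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

doc = load_dataset("launch/gov_report", "plain_text", split="test")[0]["document"]

inputs = tokenizer(doc, truncation=True, max_length=16384, return_tensors="pt")
ids = model.generate(**inputs, max_new_tokens=512, num_beams=4)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```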
eab963274de7e0edf0109b653d681cd6c6c7008a
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Summarization
* Model: Blaise-g/longt5_tglobal_large_sumpubmed
* Dataset: launch/gov_report
* Config: plain_text
* Split: test

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-2fa37c-16136224
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T05:27:29+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "Blaise-g/longt5_tglobal_large_sumpubmed", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T06:42:30+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by AutoTrain for the following task and dataset:

* Task: Summarization
* Model: Blaise-g/longt5_tglobal_large_sumpubmed
* Dataset: launch/gov_report
* Config: plain_text
* Split: test

To run new evaluation jobs, visit Hugging Face's automatic model evaluator.

## Contributions

Thanks to @nonchalant-nagavalli for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Blaise-g/longt5_tglobal_large_sumpubmed\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Blaise-g/longt5_tglobal_large_sumpubmed\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
2382ecc2f9a282294489185d349b258db8d0d58c
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Summarization
* Model: Blaise-g/longt5_tglobal_large_scitldr
* Dataset: launch/gov_report
* Config: plain_text
* Split: test

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-2fa37c-16136225
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T05:27:35+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "Blaise-g/longt5_tglobal_large_scitldr", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T07:34:26+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by AutoTrain for the following task and dataset:

* Task: Summarization
* Model: Blaise-g/longt5_tglobal_large_scitldr
* Dataset: launch/gov_report
* Config: plain_text
* Split: test

To run new evaluation jobs, visit Hugging Face's automatic model evaluator.

## Contributions

Thanks to @nonchalant-nagavalli for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Blaise-g/longt5_tglobal_large_scitldr\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Blaise-g/longt5_tglobal_large_scitldr\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
eb6c99edc51cb573d18449706847d102403dc990
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13
* Dataset: launch/gov_report
* Config: plain_text
* Split: test

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-2fa37c-16136226
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T05:27:41+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T22:50:32+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by AutoTrain for the following task and dataset:

* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13
* Dataset: launch/gov_report
* Config: plain_text
* Split: test

To run new evaluation jobs, visit Hugging Face's automatic model evaluator.

## Contributions

Thanks to @nonchalant-nagavalli for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
5195b0a556f0b34cb4d57881fdb7f75d8d717119
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2
* Dataset: launch/gov_report
* Config: plain_text
* Split: test

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-2fa37c-16136227
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T05:27:47+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T06:42:27+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by AutoTrain for the following task and dataset:

* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2
* Dataset: launch/gov_report
* Config: plain_text
* Split: test

To run new evaluation jobs, visit Hugging Face's automatic model evaluator.

## Contributions

Thanks to @nonchalant-nagavalli for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
7dcd04b3c24b999f3cdfe7648b37253672e9ce85
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP
* Dataset: launch/gov_report
* Config: plain_text
* Split: test

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-2fa37c-16136228
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T05:27:52+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T20:20:32+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by AutoTrain for the following task and dataset:

* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP
* Dataset: launch/gov_report
* Config: plain_text
* Split: test

To run new evaluation jobs, visit Hugging Face's automatic model evaluator.

## Contributions

Thanks to @nonchalant-nagavalli for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
f8602deb08d2439c83c66316ab0653bb427f758d
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP11
* Dataset: launch/gov_report
* Config: plain_text
* Split: test

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-2fa37c-16136229
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T05:27:59+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP11", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T21:59:16+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by AutoTrain for the following task and dataset:

* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP11
* Dataset: launch/gov_report
* Config: plain_text
* Split: test

To run new evaluation jobs, visit Hugging Face's automatic model evaluator.

## Contributions

Thanks to @nonchalant-nagavalli for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP11\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP11\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
f166e325789a8af88e96df52ae986e9b1b001ef8
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Summarization
* Model: Blaise-g/longt5_tglobal_large_explanatory_baseline_scitldr
* Dataset: launch/gov_report
* Config: plain_text
* Split: test

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-2fa37c-16136230
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T05:28:05+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "Blaise-g/longt5_tglobal_large_explanatory_baseline_scitldr", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T06:03:19+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by AutoTrain for the following task and dataset:

* Task: Summarization
* Model: Blaise-g/longt5_tglobal_large_explanatory_baseline_scitldr
* Dataset: launch/gov_report
* Config: plain_text
* Split: test

To run new evaluation jobs, visit Hugging Face's automatic model evaluator.

## Contributions

Thanks to @nonchalant-nagavalli for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Blaise-g/longt5_tglobal_large_explanatory_baseline_scitldr\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Blaise-g/longt5_tglobal_large_explanatory_baseline_scitldr\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
320a0e9a51c3bbfd7241c69021671c6bce556011
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Summarization
* Model: google/bigbird-pegasus-large-arxiv
* Dataset: launch/gov_report
* Config: plain_text
* Split: test

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-1abd3a-16146231
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T05:28:12+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "google/bigbird-pegasus-large-arxiv", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T06:01:53+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by AutoTrain for the following task and dataset:

* Task: Summarization
* Model: google/bigbird-pegasus-large-arxiv
* Dataset: launch/gov_report
* Config: plain_text
* Split: test

To run new evaluation jobs, visit Hugging Face's automatic model evaluator.

## Contributions

Thanks to @nonchalant-nagavalli for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/bigbird-pegasus-large-arxiv\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/bigbird-pegasus-large-arxiv\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
11c3beb3ad0180fe5e34012b25a913f2bea08d6a
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Summarization
* Model: google/bigbird-pegasus-large-bigpatent
* Dataset: launch/gov_report
* Config: plain_text
* Split: test

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-1abd3a-16146232
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T06:02:16+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "google/bigbird-pegasus-large-bigpatent", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T06:35:19+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: google/bigbird-pegasus-large-bigpatent * Dataset: launch/gov_report * Config: plain_text * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @nonchalant-nagavalli for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/bigbird-pegasus-large-bigpatent\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/bigbird-pegasus-large-bigpatent\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
e1be3a1fe4bac74e9cfc091131e267f71c9d3e8c
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: google/bigbird-pegasus-large-pubmed * Dataset: launch/gov_report * Config: plain_text * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-1abd3a-16146233
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T06:04:22+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "google/bigbird-pegasus-large-pubmed", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T06:37:56+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: google/bigbird-pegasus-large-pubmed * Dataset: launch/gov_report * Config: plain_text * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @nonchalant-nagavalli for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/bigbird-pegasus-large-pubmed\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: google/bigbird-pegasus-large-pubmed\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
53f9acad369028aa1cc20fd839f32076f85287c4
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: pszemraj/bigbird-pegasus-large-K-booksum * Dataset: launch/gov_report * Config: plain_text * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-1abd3a-16146234
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T06:35:43+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "pszemraj/bigbird-pegasus-large-K-booksum", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T20:23:07+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: pszemraj/bigbird-pegasus-large-K-booksum * Dataset: launch/gov_report * Config: plain_text * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @nonchalant-nagavalli for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/bigbird-pegasus-large-K-booksum\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/bigbird-pegasus-large-K-booksum\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
0c6e30c26ef7cda27ea3e5100abc8d6c3c71b9ab
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-cnn * Dataset: launch/gov_report * Config: plain_text * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
autoevaluate/autoeval-staging-eval-launch__gov_report-plain_text-1abd3a-16146235
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T06:38:18+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["launch/gov_report"], "eval_info": {"task": "summarization", "model": "facebook/bart-large-cnn", "metrics": ["bertscore"], "dataset_name": "launch/gov_report", "dataset_config": "plain_text", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}}
2022-09-09T06:44:04+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-cnn * Dataset: launch/gov_report * Config: plain_text * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @nonchalant-nagavalli for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: facebook/bart-large-cnn\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: facebook/bart-large-cnn\n* Dataset: launch/gov_report\n* Config: plain_text\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model." ]
21fd72693c7a977f5a13203816c20c528e39b5ac
# Dataset Card for xP3 ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** https://github.com/bigscience-workshop/xmtf - **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786) - **Point of Contact:** [Niklas Muennighoff](mailto:[email protected]) ### Dataset Summary > xP3 (Crosslingual Public Pool of Prompts) is a collection of prompts & datasets across 46 languages & 16 NLP tasks. It is used for the training of BLOOMZ and mT0, multilingual language models capable of following human instructions in dozens of languages zero-shot. - **Creation:** The dataset can be recreated using instructions available [here](https://github.com/bigscience-workshop/xmtf#create-xp3). We provide this version to save processing time and ease reproducibility. - **Languages:** 46 (Can be extended by [recreating with more splits](https://github.com/bigscience-workshop/xmtf#create-xp3)) - **xP3 Dataset Family:** <table> <tr> <th>Name</th> <th>Explanation</th> <th>Example models</th> </tr> <tr> <td><a href=https://huggingface.co/datasets/Muennighoff/xP3x>xP3x</a></td> <td>Mixture of 17 tasks in 277 languages with English prompts</td> <td>WIP - Join us at Project Aya @<a href=https://cohere.for.ai/>C4AI</a> to help!</td> </tr> <tr> <td><a href=https://huggingface.co/datasets/bigscience/xP3>xP3</a></td> <td>Mixture of 13 training tasks in 46 languages with English prompts</td> <td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a> & <a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td> </tr> <tr> <td><a href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a></td> <td>Mixture of 13 training tasks in 46 languages with prompts in 20 languages (machine-translated from English)</td> <td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a> & <a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td> </tr> <tr> <td><a href=https://huggingface.co/datasets/bigscience/xP3all>xP3all</a></td> <td>xP3 + evaluation datasets adding an additional 3 tasks for a total of 16 tasks in 46 languages with English prompts</td> <td></td> </tr> <tr> <td><a href=https://huggingface.co/datasets/bigscience/xP3megds>xP3megds</a></td> <td><a href=https://github.com/bigscience-workshop/Megatron-DeepSpeed>Megatron-DeepSpeed</a> processed version of xP3</td> <td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td> </tr> <tr> <td><a href=https://huggingface.co/datasets/Muennighoff/P3>P3</a></td> <td>Re-preprocessed version of the English-only <a href=https://huggingface.co/datasets/bigscience/P3>P3</a> with 8 training tasks</td> <td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a> & <a 
href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td> </tr> </table> ## Dataset Structure ### Data Instances An example of "train" looks as follows: ```json { "inputs": "Sentence 1: Fue académico en literatura metafísica, teología y ciencias clásicas.\nSentence 2: Fue académico en literatura metafísica, teología y ciencia clásica.\nQuestion: Can we rewrite Sentence 1 to Sentence 2? Yes or No?", "targets": "Yes" } ``` ### Data Fields The data fields are the same among all splits: - `inputs`: the natural language input fed to the model - `targets`: the natural language target that the model has to generate ### Data Splits The below table summarizes sizes per language (computed from the `merged_{lang}.jsonl` files). Due to languages like `tw` only being single sentence translation samples from Flores, their byte percentage is significantly lower than their sample percentage. |Language|Kilobytes|%|Samples|%| |--------|------:|-:|---:|-:| |tw|106288|0.11|265071|0.34| |bm|107056|0.11|265180|0.34| |ak|108096|0.11|265071|0.34| |eu|108112|0.11|269973|0.34| |ca|110608|0.12|271191|0.34| |fon|113072|0.12|265063|0.34| |st|114080|0.12|265063|0.34| |ki|115040|0.12|265180|0.34| |tum|116032|0.12|265063|0.34| |wo|122560|0.13|365063|0.46| |ln|126304|0.13|365060|0.46| |as|156256|0.16|265063|0.34| |or|161472|0.17|265063|0.34| |kn|165456|0.17|265063|0.34| |ml|175040|0.18|265864|0.34| |rn|192992|0.2|318189|0.4| |nso|229712|0.24|915051|1.16| |tn|235536|0.25|915054|1.16| |lg|235936|0.25|915021|1.16| |rw|249360|0.26|915043|1.16| |ts|250256|0.26|915044|1.16| |sn|252496|0.27|865056|1.1| |xh|254672|0.27|915058|1.16| |zu|263712|0.28|915061|1.16| |ny|272128|0.29|915063|1.16| |ig|325232|0.34|950097|1.2| |yo|352784|0.37|918416|1.16| |ne|393680|0.41|315754|0.4| |pa|523248|0.55|339210|0.43| |gu|560688|0.59|347499|0.44| |sw|560896|0.59|1114455|1.41| |mr|666240|0.7|417269|0.53| |bn|832720|0.88|428843|0.54| |ta|924496|0.97|410633|0.52| |te|1332912|1.4|573364|0.73| |ur|1918272|2.02|855756|1.08| |vi|3101408|3.27|1667306|2.11| |code|4330752|4.56|2707724|3.43| |hi|4393696|4.63|1543441|1.96| |zh|4589904|4.83|3560556|4.51| |id|4606288|4.85|2627392|3.33| |ar|4677264|4.93|2148955|2.72| |fr|5546688|5.84|5055942|6.41| |pt|6129584|6.46|3562772|4.52| |es|7571808|7.98|5151349|6.53| |en|37261104|39.25|31495184|39.93| |total|94941936|100.0|78883588|100.0| ## Dataset Creation ### Source Data #### Training datasets - Code Miscellaneous - [CodeComplex](https://huggingface.co/datasets/codeparrot/codecomplex) - [Docstring Corpus](https://huggingface.co/datasets/teven/code_docstring_corpus) - [GreatCode](https://huggingface.co/datasets/great_code) - [State Changes](https://huggingface.co/datasets/Fraser/python-state-changes) - Closed-book QA - [Hotpot QA](https://huggingface.co/datasets/hotpot_qa) - [Trivia QA](https://huggingface.co/datasets/trivia_qa) - [Web Questions](https://huggingface.co/datasets/web_questions) - [Wiki QA](https://huggingface.co/datasets/wiki_qa) - Extractive QA - [Adversarial QA](https://huggingface.co/datasets/adversarial_qa) - [CMRC2018](https://huggingface.co/datasets/cmrc2018) - [DRCD](https://huggingface.co/datasets/clue) - [DuoRC](https://huggingface.co/datasets/duorc) - [MLQA](https://huggingface.co/datasets/mlqa) - [Quoref](https://huggingface.co/datasets/quoref) - [ReCoRD](https://huggingface.co/datasets/super_glue) - 
[ROPES](https://huggingface.co/datasets/ropes) - [SQuAD v2](https://huggingface.co/datasets/squad_v2) - [xQuAD](https://huggingface.co/datasets/xquad) - TyDI QA - [Primary](https://huggingface.co/datasets/khalidalt/tydiqa-primary) - [Goldp](https://huggingface.co/datasets/khalidalt/tydiqa-goldp) - Multiple-Choice QA - [ARC](https://huggingface.co/datasets/ai2_arc) - [C3](https://huggingface.co/datasets/c3) - [CoS-E](https://huggingface.co/datasets/cos_e) - [Cosmos](https://huggingface.co/datasets/cosmos) - [DREAM](https://huggingface.co/datasets/dream) - [MultiRC](https://huggingface.co/datasets/super_glue) - [OpenBookQA](https://huggingface.co/datasets/openbookqa) - [PiQA](https://huggingface.co/datasets/piqa) - [QUAIL](https://huggingface.co/datasets/quail) - [QuaRel](https://huggingface.co/datasets/quarel) - [QuaRTz](https://huggingface.co/datasets/quartz) - [QASC](https://huggingface.co/datasets/qasc) - [RACE](https://huggingface.co/datasets/race) - [SciQ](https://huggingface.co/datasets/sciq) - [Social IQA](https://huggingface.co/datasets/social_i_qa) - [Wiki Hop](https://huggingface.co/datasets/wiki_hop) - [WiQA](https://huggingface.co/datasets/wiqa) - Paraphrase Identification - [MRPC](https://huggingface.co/datasets/super_glue) - [PAWS](https://huggingface.co/datasets/paws) - [PAWS-X](https://huggingface.co/datasets/paws-x) - [QQP](https://huggingface.co/datasets/qqp) - Program Synthesis - [APPS](https://huggingface.co/datasets/codeparrot/apps) - [CodeContests](https://huggingface.co/datasets/teven/code_contests) - [JupyterCodePairs](https://huggingface.co/datasets/codeparrot/github-jupyter-text-code-pairs) - [MBPP](https://huggingface.co/datasets/Muennighoff/mbpp) - [NeuralCodeSearch](https://huggingface.co/datasets/neural_code_search) - [XLCoST](https://huggingface.co/datasets/codeparrot/xlcost-text-to-code) - Structure-to-text - [Common Gen](https://huggingface.co/datasets/common_gen) - [Wiki Bio](https://huggingface.co/datasets/wiki_bio) - Sentiment - [Amazon](https://huggingface.co/datasets/amazon_polarity) - [App Reviews](https://huggingface.co/datasets/app_reviews) - [IMDB](https://huggingface.co/datasets/imdb) - [Rotten Tomatoes](https://huggingface.co/datasets/rotten_tomatoes) - [Yelp](https://huggingface.co/datasets/yelp_review_full) - Simplification - [BiSECT](https://huggingface.co/datasets/GEM/BiSECT) - Summarization - [CNN Daily Mail](https://huggingface.co/datasets/cnn_dailymail) - [Gigaword](https://huggingface.co/datasets/gigaword) - [MultiNews](https://huggingface.co/datasets/multi_news) - [SamSum](https://huggingface.co/datasets/samsum) - [Wiki-Lingua](https://huggingface.co/datasets/GEM/wiki_lingua) - [XLSum](https://huggingface.co/datasets/GEM/xlsum) - [XSum](https://huggingface.co/datasets/xsum) - Topic Classification - [AG News](https://huggingface.co/datasets/ag_news) - [DBPedia](https://huggingface.co/datasets/dbpedia_14) - [TNEWS](https://huggingface.co/datasets/clue) - [TREC](https://huggingface.co/datasets/trec) - 
[CSL](https://huggingface.co/datasets/clue) - Translation - [Flores-200](https://huggingface.co/datasets/Muennighoff/flores200) - [Tatoeba](https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt) - Word Sense disambiguation - [WiC](https://huggingface.co/datasets/super_glue) - [XL-WiC](https://huggingface.co/datasets/pasinit/xlwic) #### Evaluation datasets (included in [xP3all](https://huggingface.co/datasets/bigscience/xP3all) except for NLI & HumanEval) - Natural Language Inference (NLI) - [ANLI](https://huggingface.co/datasets/anli) - [CB](https://huggingface.co/datasets/super_glue) - [RTE](https://huggingface.co/datasets/super_glue) - [XNLI](https://huggingface.co/datasets/xnli) - Coreference Resolution - [Winogrande](https://huggingface.co/datasets/winogrande) - [XWinograd](https://huggingface.co/datasets/Muennighoff/xwinograd) - Program Synthesis - [HumanEval](https://huggingface.co/datasets/openai_humaneval) - Sentence Completion - [COPA](https://huggingface.co/datasets/super_glue) - [Story Cloze](https://huggingface.co/datasets/story_cloze) - [XCOPA](https://huggingface.co/datasets/xcopa) - [XStoryCloze](https://huggingface.co/datasets/Muennighoff/xstory_cloze) ## Additional Information ### Licensing Information The dataset is released under Apache 2.0. ### Citation Information ```bibtex @misc{muennighoff2022crosslingual, title={Crosslingual Generalization through Multitask Finetuning}, author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel}, year={2022}, eprint={2211.01786}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to the contributors of [promptsource](https://github.com/bigscience-workshop/promptsource/graphs/contributors) for adding many prompts used in this dataset.
bigscience/xP3megds
[ "task_categories:other", "annotations_creators:expert-generated", "annotations_creators:crowdsourced", "multilinguality:multilingual", "size_categories:100M<n<1B", "language:ak", "language:ar", "language:as", "language:bm", "language:bn", "language:ca", "language:code", "language:en", "language:es", "language:eu", "language:fon", "language:fr", "language:gu", "language:hi", "language:id", "language:ig", "language:ki", "language:kn", "language:lg", "language:ln", "language:ml", "language:mr", "language:ne", "language:nso", "language:ny", "language:or", "language:pa", "language:pt", "language:rn", "language:rw", "language:sn", "language:st", "language:sw", "language:ta", "language:te", "language:tn", "language:ts", "language:tum", "language:tw", "language:ur", "language:vi", "language:wo", "language:xh", "language:yo", "language:zh", "language:zu", "license:apache-2.0", "arxiv:2211.01786", "region:us" ]
2022-09-09T07:15:42+00:00
{"annotations_creators": ["expert-generated", "crowdsourced"], "language": ["ak", "ar", "as", "bm", "bn", "ca", "code", "en", "es", "eu", "fon", "fr", "gu", "hi", "id", "ig", "ki", "kn", "lg", "ln", "ml", "mr", "ne", "nso", "ny", "or", "pa", "pt", "rn", "rw", "sn", "st", "sw", "ta", "te", "tn", "ts", "tum", "tw", "ur", "vi", "wo", "xh", "yo", "zh", "zu"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": ["100M<n<1B"], "task_categories": ["other"], "pretty_name": "xP3", "programming_language": ["C", "C++", "C#", "Go", "Java", "JavaScript", "Lua", "PHP", "Python", "Ruby", "Rust", "Scala", "TypeScript"]}
2023-05-30T14:52:11+00:00
[ "2211.01786" ]
[ "ak", "ar", "as", "bm", "bn", "ca", "code", "en", "es", "eu", "fon", "fr", "gu", "hi", "id", "ig", "ki", "kn", "lg", "ln", "ml", "mr", "ne", "nso", "ny", "or", "pa", "pt", "rn", "rw", "sn", "st", "sw", "ta", "te", "tn", "ts", "tum", "tw", "ur", "vi", "wo", "xh", "yo", "zh", "zu" ]
TAGS #task_categories-other #annotations_creators-expert-generated #annotations_creators-crowdsourced #multilinguality-multilingual #size_categories-100M<n<1B #language-Akan #language-Arabic #language-Assamese #language-Bambara #language-Bengali #language-Catalan #language-code #language-English #language-Spanish #language-Basque #language-Fon #language-French #language-Gujarati #language-Hindi #language-Indonesian #language-Igbo #language-Kikuyu #language-Kannada #language-Ganda #language-Lingala #language-Malayalam #language-Marathi #language-Nepali (macrolanguage) #language-Pedi #language-Nyanja #language-Oriya (macrolanguage) #language-Panjabi #language-Portuguese #language-Rundi #language-Kinyarwanda #language-Shona #language-Southern Sotho #language-Swahili (macrolanguage) #language-Tamil #language-Telugu #language-Tswana #language-Tsonga #language-Tumbuka #language-Twi #language-Urdu #language-Vietnamese #language-Wolof #language-Xhosa #language-Yoruba #language-Chinese #language-Zulu #license-apache-2.0 #arxiv-2211.01786 #region-us
Dataset Card for xP3 ==================== Table of Contents ----------------- * Table of Contents * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations * Additional Information + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Repository: URL * Paper: Crosslingual Generalization through Multitask Finetuning * Point of Contact: Niklas Muennighoff ### Dataset Summary > > xP3 (Crosslingual Public Pool of Prompts) is a collection of prompts & datasets across 46 languages & 16 NLP tasks. It is used for the training of BLOOMZ and mT0, multilingual language models capable of following human instructions in dozens of languages zero-shot. > > > * Creation: The dataset can be recreated using instructions available here. We provide this version to save processing time and ease reproducibility. * Languages: 46 (Can be extended by recreating with more splits) * xP3 Dataset Family: Dataset Structure ----------------- ### Data Instances An example of "train" looks as follows: ### Data Fields The data fields are the same among all splits: * 'inputs': the natural language input fed to the model * 'targets': the natural language target that the model has to generate ### Data Splits The below table summarizes sizes per language (computed from the 'merged\_{lang}.jsonl' files). Due to languages like 'tw' only being single sentence translation samples from Flores, their byte percentage is significantly lower than their sample percentage. Dataset Creation ---------------- ### Source Data #### Training datasets * Code Miscellaneous + CodeComplex + Docstring Corpus + GreatCode + State Changes * Closed-book QA + Hotpot QA + Trivia QA + Web Questions + Wiki QA * Extractive QA + Adversarial QA + CMRC2018 + DRCD + DuoRC + MLQA + Quoref + ReCoRD + ROPES + SQuAD v2 + xQuAD + TyDI QA - Primary - Goldp * Multiple-Choice QA + ARC + C3 + CoS-E + Cosmos + DREAM + MultiRC + OpenBookQA + PiQA + QUAIL + QuaRel + QuaRTz + QASC + RACE + SciQ + Social IQA + Wiki Hop + WiQA * Paraphrase Identification + MRPC + PAWS + PAWS-X + QQP * Program Synthesis + APPS + CodeContests + JupyterCodePairs + MBPP + NeuralCodeSearch + XLCoST * Structure-to-text + Common Gen + Wiki Bio * Sentiment + Amazon + App Reviews + IMDB + Rotten Tomatoes + Yelp * Simplification + BiSECT * Summarization + CNN Daily Mail + Gigaword + MultiNews + SamSum + Wiki-Lingua + XLSum + XSum * Topic Classification + AG News + DBPedia + TNEWS + TREC + CSL * Translation + Flores-200 + Tatoeba * Word Sense disambiguation + WiC + XL-WiC #### Evaluation datasets (included in xP3all except for NLI & HumanEval) * Natural Language Inference (NLI) + ANLI + CB + RTE + XNLI * Coreference Resolution + Winogrande + XWinograd * Program Synthesis + HumanEval * Sentence Completion + COPA + Story Cloze + XCOPA + XStoryCloze Additional Information ---------------------- ### Licensing Information The dataset is released under Apache 2.0. ### Contributions Thanks to the contributors of promptsource for adding many prompts used in this dataset.
[ "### Dataset Summary\n\n\n\n> \n> xP3 (Crosslingual Public Pool of Prompts) is a collection of prompts & datasets across 46 of languages & 16 NLP tasks. It is used for the training of BLOOMZ and mT0, multilingual language models capable of following human instructions in dozens of languages zero-shot.\n> \n> \n> \n\n\n* Creation: The dataset can be recreated using instructions available here. We provide this version to save processing time and ease reproducibility.\n* Languages: 46 (Can be extended by recreating with more splits)\n* xP3 Dataset Family:\n\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of \"train\" looks as follows:", "### Data Fields\n\n\nThe data fields are the same among all splits:\n\n\n* 'inputs': the natural language input fed to the model\n* 'targets': the natural language target that the model has to generate", "### Data Splits\n\n\nThe below table summarizes sizes per language (computed from the 'merged\\_{lang}.jsonl' files). Due to languages like 'tw' only being single sentence translation samples from Flores, their byte percentage is significantly lower than their sample percentage.\n\n\n\nDataset Creation\n----------------", "### Source Data", "#### Training datasets\n\n\n* Code Miscellaneous\n\t+ CodeComplex\n\t+ Docstring Corpus\n\t+ GreatCode\n\t+ State Changes\n* Closed-book QA\n\t+ Hotpot QA\n\t+ Trivia QA\n\t+ Web Questions\n\t+ Wiki QA\n* Extractive QA\n\t+ Adversarial QA\n\t+ CMRC2018\n\t+ DRCD\n\t+ DuoRC\n\t+ MLQA\n\t+ Quoref\n\t+ ReCoRD\n\t+ ROPES\n\t+ SQuAD v2\n\t+ xQuAD\n\t+ TyDI QA\n\t\t- Primary\n\t\t- Goldp\n* Multiple-Choice QA\n\t+ ARC\n\t+ C3\n\t+ CoS-E\n\t+ Cosmos\n\t+ DREAM\n\t+ MultiRC\n\t+ OpenBookQA\n\t+ PiQA\n\t+ QUAIL\n\t+ QuaRel\n\t+ QuaRTz\n\t+ QASC\n\t+ RACE\n\t+ SciQ\n\t+ Social IQA\n\t+ Wiki Hop\n\t+ WiQA\n* Paraphrase Identification\n\t+ MRPC\n\t+ PAWS\n\t+ PAWS-X\n\t+ QQP\n* Program Synthesis\n\t+ APPS\n\t+ CodeContests\n\t+ JupyterCodePairs\n\t+ MBPP\n\t+ NeuralCodeSearch\n\t+ XLCoST\n* Structure-to-text\n\t+ Common Gen\n\t+ Wiki Bio\n* Sentiment\n\t+ Amazon\n\t+ App Reviews\n\t+ IMDB\n\t+ Rotten Tomatoes\n\t+ Yelp\n* Simplification\n\t+ BiSECT\n* Summarization\n\t+ CNN Daily Mail\n\t+ Gigaword\n\t+ MultiNews\n\t+ SamSum\n\t+ Wiki-Lingua\n\t+ XLSum\n\t+ XSum\n* Topic Classification\n\t+ AG News\n\t+ DBPedia\n\t+ TNEWS\n\t+ TREC\n\t+ CSL\n* Translation\n\t+ Flores-200\n\t+ Tatoeba\n* Word Sense disambiguation\n\t+ WiC\n\t+ XL-WiC", "#### Evaluation datasets (included in xP3all except for NLI & HumanEval)\n\n\n* Natural Language Inference (NLI)\n\t+ ANLI\n\t+ CB\n\t+ RTE\n\t+ XNLI\n* Coreference Resolution\n\t+ Winogrande\n\t+ XWinograd\n* Program Synthesis\n\t+ HumanEval\n* Sentence Completion\n\t+ COPA\n\t+ Story Cloze\n\t+ XCOPA\n\t+ XStoryCloze\n\n\nAdditional Information\n----------------------", "### Licensing Information\n\n\nThe dataset is released under Apache 2.0.", "### Contributions\n\n\nThanks to the contributors of promptsource for adding many prompts used in this dataset." ]
[ "TAGS\n#task_categories-other #annotations_creators-expert-generated #annotations_creators-crowdsourced #multilinguality-multilingual #size_categories-100M<n<1B #language-Akan #language-Arabic #language-Assamese #language-Bambara #language-Bengali #language-Catalan #language-code #language-English #language-Spanish #language-Basque #language-Fon #language-French #language-Gujarati #language-Hindi #language-Indonesian #language-Igbo #language-Kikuyu #language-Kannada #language-Ganda #language-Lingala #language-Malayalam #language-Marathi #language-Nepali (macrolanguage) #language-Pedi #language-Nyanja #language-Oriya (macrolanguage) #language-Panjabi #language-Portuguese #language-Rundi #language-Kinyarwanda #language-Shona #language-Southern Sotho #language-Swahili (macrolanguage) #language-Tamil #language-Telugu #language-Tswana #language-Tsonga #language-Tumbuka #language-Twi #language-Urdu #language-Vietnamese #language-Wolof #language-Xhosa #language-Yoruba #language-Chinese #language-Zulu #license-apache-2.0 #arxiv-2211.01786 #region-us \n", "### Dataset Summary\n\n\n\n> \n> xP3 (Crosslingual Public Pool of Prompts) is a collection of prompts & datasets across 46 of languages & 16 NLP tasks. It is used for the training of BLOOMZ and mT0, multilingual language models capable of following human instructions in dozens of languages zero-shot.\n> \n> \n> \n\n\n* Creation: The dataset can be recreated using instructions available here. We provide this version to save processing time and ease reproducibility.\n* Languages: 46 (Can be extended by recreating with more splits)\n* xP3 Dataset Family:\n\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example of \"train\" looks as follows:", "### Data Fields\n\n\nThe data fields are the same among all splits:\n\n\n* 'inputs': the natural language input fed to the model\n* 'targets': the natural language target that the model has to generate", "### Data Splits\n\n\nThe below table summarizes sizes per language (computed from the 'merged\\_{lang}.jsonl' files). 
Due to languages like 'tw' only being single sentence translation samples from Flores, their byte percentage is significantly lower than their sample percentage.\n\n\n\nDataset Creation\n----------------", "### Source Data", "#### Training datasets\n\n\n* Code Miscellaneous\n\t+ CodeComplex\n\t+ Docstring Corpus\n\t+ GreatCode\n\t+ State Changes\n* Closed-book QA\n\t+ Hotpot QA\n\t+ Trivia QA\n\t+ Web Questions\n\t+ Wiki QA\n* Extractive QA\n\t+ Adversarial QA\n\t+ CMRC2018\n\t+ DRCD\n\t+ DuoRC\n\t+ MLQA\n\t+ Quoref\n\t+ ReCoRD\n\t+ ROPES\n\t+ SQuAD v2\n\t+ xQuAD\n\t+ TyDI QA\n\t\t- Primary\n\t\t- Goldp\n* Multiple-Choice QA\n\t+ ARC\n\t+ C3\n\t+ CoS-E\n\t+ Cosmos\n\t+ DREAM\n\t+ MultiRC\n\t+ OpenBookQA\n\t+ PiQA\n\t+ QUAIL\n\t+ QuaRel\n\t+ QuaRTz\n\t+ QASC\n\t+ RACE\n\t+ SciQ\n\t+ Social IQA\n\t+ Wiki Hop\n\t+ WiQA\n* Paraphrase Identification\n\t+ MRPC\n\t+ PAWS\n\t+ PAWS-X\n\t+ QQP\n* Program Synthesis\n\t+ APPS\n\t+ CodeContests\n\t+ JupyterCodePairs\n\t+ MBPP\n\t+ NeuralCodeSearch\n\t+ XLCoST\n* Structure-to-text\n\t+ Common Gen\n\t+ Wiki Bio\n* Sentiment\n\t+ Amazon\n\t+ App Reviews\n\t+ IMDB\n\t+ Rotten Tomatoes\n\t+ Yelp\n* Simplification\n\t+ BiSECT\n* Summarization\n\t+ CNN Daily Mail\n\t+ Gigaword\n\t+ MultiNews\n\t+ SamSum\n\t+ Wiki-Lingua\n\t+ XLSum\n\t+ XSum\n* Topic Classification\n\t+ AG News\n\t+ DBPedia\n\t+ TNEWS\n\t+ TREC\n\t+ CSL\n* Translation\n\t+ Flores-200\n\t+ Tatoeba\n* Word Sense disambiguation\n\t+ WiC\n\t+ XL-WiC", "#### Evaluation datasets (included in xP3all except for NLI & HumanEval)\n\n\n* Natural Language Inference (NLI)\n\t+ ANLI\n\t+ CB\n\t+ RTE\n\t+ XNLI\n* Coreference Resolution\n\t+ Winogrande\n\t+ XWinograd\n* Program Synthesis\n\t+ HumanEval\n* Sentence Completion\n\t+ COPA\n\t+ Story Cloze\n\t+ XCOPA\n\t+ XStoryCloze\n\n\nAdditional Information\n----------------------", "### Licensing Information\n\n\nThe dataset is released under Apache 2.0.", "### Contributions\n\n\nThanks to the contributors of promptsource for adding many prompts used in this dataset." ]
55cecad455f7df12b6c7c1c8c206aacc9f764e3e
# Dataset Card for COVID News Articles (2020 - 2022) ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://kaggle.com/datasets/timmayer/covid-news-articles-2020-2022 - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The dataset encapsulates approximately half a million news articles collected over a period of 2 years during the Coronavirus pandemic onset and surge. It consists of 3 columns - **title**, **content** and **category**. **title** refers to the headline of the news article. **content** refers to the article in itself and **category** denotes the overall context of the news article at a high level. This dataset can be used to pre-train large language models (LLMs) and demonstrate NLP downstream tasks like binary/multi-class text classification. The dataset can be used to study the difference in behaviors of language models when there is a shift in data. For example, the classic transformers-based BERT model was trained before the COVID era. By training a masked language model (MLM) using this dataset, we can try to differentiate the behaviors of the original BERT model vs the newly trained models. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators This dataset was shared by [@timmayer](https://kaggle.com/timmayer) ### Licensing Information The license for this dataset is cc0-1.0 ### Citation Information ```bibtex [More Information Needed] ``` ### Contributions [More Information Needed]
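Since the summary above proposes training a masked language model on this corpus to probe distribution shift, here is a hedged sketch of that setup. The `content` column name comes from the card; the local CSV filename is hypothetical, and the BERT checkpoint and masking rate are the standard defaults rather than something this card prescribes.

```python
# Hedged sketch of the MLM setup described above. The 'content' column is
# documented in this card; covid_news_articles.csv is a hypothetical local
# export of the Kaggle dataset.
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
ds = load_dataset("csv", data_files="covid_news_articles.csv", split="train")

def tokenize(batch):
    return tokenizer(batch["content"], truncation=True, max_length=512)

tokenized = ds.map(tokenize, batched=True, remove_columns=ds.column_names)

# Standard BERT objective: randomly mask 15% of tokens. Feed `tokenized` and
# `collator` to a Trainer with a masked-LM head to compare pre- and
# post-COVID model behavior.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.15)
```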
osanseviero/covid_news
[ "license:cc0-1.0", "region:us" ]
2022-09-09T13:52:52+00:00
{"license": ["cc0-1.0"], "converted_from": "kaggle", "kaggle_id": "timmayer/covid-news-articles-2020-2022"}
2022-09-09T13:53:32+00:00
[]
[]
TAGS #license-cc0-1.0 #region-us
# Dataset Card for COVID News Articles (2020 - 2022) ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary The dataset encapsulates approximately half a million news articles collected over a period of 2 years during the Coronavirus pandemic onset and surge. It consists of 3 columns - title, content and category. title refers to the headline of the news article. content refers to the article in itself and category denotes the overall context of the news article at a high level. This dataset can be used to pre-train large language models (LLMs) and demonstrate NLP downstream tasks like binary/multi-class text classification. The dataset can be used to study the difference in behaviors of language models when there is a shift in data. For example, the classic transformers-based BERT model was trained before the COVID era. By training a masked language model (MLM) using this dataset, we can try to differentiate the behaviors of the original BERT model vs the newly trained models. ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators This dataset was shared by @timmayer ### Licensing Information The license for this dataset is cc0-1.0 ### Contributions
[ "# Dataset Card for COVID News Articles (2020 - 2022)", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nThe dataset encapsulates approximately half a million news articles collected over a period of 2 years during the Coronavirus pandemic onset and surge. It consists of 3 columns - title, content and category. title refers to the headline of the news article. content refers to the article in itself and category denotes the overall context of the news article at a high level. The dataset encapsulates approximately half a million news articles collected over a period of 2 years during the Coronavirus pandemic onset and surge. It consists of 3 columns - title, content and category. title refers to the headline of the news article. content refers to the article in itself and category denotes the overall context of the news article at a high level. \n\nThis dataset can be used to pre-train large language models (LLMs) and demonstrate NLP downstream tasks like binary/multi-class text classification. The dataset can be used to study the difference in behaviors of language models when there is a shift in data. For e.g., the classic transformers based BERT model was trained before the COVID era. By training a masked language model (MLM) using this dataset, we can try to differentiate the behaviors of the original BERT model vs the newly trained models.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThis dataset was shared by @timmayer", "### Licensing Information\n\nThe license for this dataset is cc0-1.0", "### Contributions" ]
[ "TAGS\n#license-cc0-1.0 #region-us \n", "# Dataset Card for COVID News Articles (2020 - 2022)", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nThe dataset encapsulates approximately half a million news articles collected over a period of 2 years during the Coronavirus pandemic onset and surge. It consists of 3 columns - title, content and category. title refers to the headline of the news article. content refers to the article in itself and category denotes the overall context of the news article at a high level. The dataset encapsulates approximately half a million news articles collected over a period of 2 years during the Coronavirus pandemic onset and surge. It consists of 3 columns - title, content and category. title refers to the headline of the news article. content refers to the article in itself and category denotes the overall context of the news article at a high level. \n\nThis dataset can be used to pre-train large language models (LLMs) and demonstrate NLP downstream tasks like binary/multi-class text classification. The dataset can be used to study the difference in behaviors of language models when there is a shift in data. For e.g., the classic transformers based BERT model was trained before the COVID era. By training a masked language model (MLM) using this dataset, we can try to differentiate the behaviors of the original BERT model vs the newly trained models.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThis dataset was shared by @timmayer", "### Licensing Information\n\nThe license for this dataset is cc0-1.0", "### Contributions" ]
233147fe574a16a3ef05d3a71163f0b18080f438
# Dataset Card for LibriVox Indonesia 1.0 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia - **Repository:** https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia - **Point of Contact:** [Cahya Wirawan](mailto:[email protected]) ### Dataset Summary The LibriVox Indonesia dataset consists of MP3 audio and a corresponding text file we generated from the public domain audiobooks [LibriVox](https://librivox.org/). We collected only languages in Indonesia for this dataset. The original LibriVox audiobooks or sound files' duration varies from a few minutes to a few hours. Each audio file in the speech dataset now lasts from a few seconds to a maximum of 20 seconds. We converted the audiobooks to speech datasets using the forced alignment software we developed. It supports multiple languages, including low-resource languages such as Acehnese, Balinese, or Minangkabau. We can also use it for other languages without additional work to train the model. The dataset currently consists of 8 hours of audio in 7 languages from Indonesia. We will add more languages or audio files as we collect them. ### Languages ``` Acehnese, Balinese, Buginese, Indonesian, Minangkabau, Javanese, Sundanese ``` ## Dataset Structure ### Data Instances A typical data point comprises the `path` to the audio file and its `sentence`. Additional fields include `reader` and `language`. ```python { 'path': 'librivox-indonesia/sundanese/universal-declaration-of-human-rights/human_rights_un_sun_brc_0000.mp3', 'language': 'sun', 'reader': '3174', 'sentence': 'pernyataan umum ngeunaan hak hak asasi manusa sakabeh manusa', 'audio': { 'path': 'librivox-indonesia/sundanese/universal-declaration-of-human-rights/human_rights_un_sun_brc_0000.mp3', 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32), 'sampling_rate': 44100 }, } ``` ### Data Fields `path` (`string`): The path to the audio file `language` (`string`): The language of the audio file `reader` (`string`): The reader Id in LibriVox `sentence` (`string`): The sentence the user read from the book. `audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. 
Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. ### Data Splits The speech material has only a train split. ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/) ### Citation Information ``` ```
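Below is a minimal sketch of the access pattern recommended in the Data Fields section above. It assumes the repository named in this card loads with its default configuration; if a per-language configuration is required, pass one of the language codes from the metadata (e.g. `"sun"`) as the second argument to `load_dataset`.

```python
# Minimal sketch, assuming the default configuration covers all languages.
from datasets import load_dataset

ds = load_dataset("indonesian-nlp/librivox-indonesia", split="train")

sample = ds[0]            # query the sample index first ...
audio = sample["audio"]   # ... so only this one file is decoded and resampled
print(sample["language"], sample["reader"], sample["sentence"])
print(audio["sampling_rate"], len(audio["array"]))
```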
cahya/librivox-indonesia
[ "task_categories:automatic-speech-recognition", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:multilingual", "size_categories:1K<n<10K", "source_datasets:librivox", "language:ace", "language:ban", "language:bug", "language:id", "language:min", "language:jav", "language:sun", "license:cc", "region:us" ]
2022-09-09T14:21:18+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["ace", "ban", "bug", "id", "min", "jav", "sun"], "license": "cc", "multilinguality": ["multilingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["librivox"], "task_categories": ["automatic-speech-recognition"], "task_ids": [], "pretty_name": "LibriVox Indonesia 1.0"}
2024-02-01T21:01:52+00:00
[]
[ "ace", "ban", "bug", "id", "min", "jav", "sun" ]
TAGS #task_categories-automatic-speech-recognition #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-multilingual #size_categories-1K<n<10K #source_datasets-librivox #language-Achinese #language-Balinese #language-Buginese #language-Indonesian #language-Minangkabau #language-Javanese #language-Sundanese #license-cc #region-us
# Dataset Card for LibriVox Indonesia 1.0 ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Point of Contact: Cahya Wirawan ### Dataset Summary The LibriVox Indonesia dataset consists of MP3 audio and a corresponding text file we generated from the public domain audiobooks LibriVox. We collected only languages in Indonesia for this dataset. The original LibriVox audiobooks or sound files' duration varies from a few minutes to a few hours. Each audio file in the speech dataset now lasts from a few seconds to a maximum of 20 seconds. We converted the audiobooks to speech datasets using the forced alignment software we developed. It supports multiple languages, including low-resource languages such as Acehnese, Balinese, or Minangkabau. We can also use it for other languages without additional work to train the model. The dataset currently consists of 8 hours of audio in 7 languages from Indonesia. We will add more languages or audio files as we collect them. ### Languages ## Dataset Structure ### Data Instances A typical data point comprises the 'path' to the audio file and its 'sentence'. Additional fields include 'reader' and 'language'. ### Data Fields 'path' ('string'): The path to the audio file 'language' ('string'): The language of the audio file 'reader' ('string'): The reader Id in LibriVox 'sentence' ('string'): The sentence the user read from the book. 'audio' ('dict'): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0]["audio"]' the audio file is automatically decoded and resampled to 'dataset.features["audio"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '"audio"' column, *i.e.* 'dataset[0]["audio"]' should always be preferred over 'dataset["audio"][0]'. ### Data Splits The speech material has only a train split. ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information Public Domain, CC-0
[ "# Dataset Card for LibriVox Indonesia 1.0", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n- Homepage: URL\n- Repository: URL\n- Point of Contact: Cahya Wirawan", "### Dataset Summary\nThe LibriVox Indonesia dataset consists of MP3 audio and a corresponding text file we generated from the public \ndomain audiobooks LibriVox. We collected only languages in Indonesia for this dataset. \nThe original LibriVox audiobooks or sound files' duration varies from a few minutes to a few hours. Each audio \nfile in the speech dataset now lasts from a few seconds to a maximum of 20 seconds. \n\nWe converted the audiobooks to speech datasets using the forced alignment software we developed. It supports \nmultilingual, including low-resource languages, such as Acehnese, Balinese, or Minangkabau. We can also use it \nfor other languages without additional work to train the model.\n\nThe dataset currently consists of 8 hours in 7 languages from Indonesia. We will add more languages or audio files\nas we collect them.", "### Languages", "## Dataset Structure", "### Data Instances\nA typical data point comprises the 'path' to the audio file and its 'sentence'. Additional fields include \n'reader' and 'language'.", "### Data Fields\n'path' ('string'): The path to the audio file\n\n'language' ('string'): The language of the audio file\n\n'reader' ('string'): The reader Id in LibriVox\n\n'sentence' ('string'): The sentence the user read from the book.\n\n'audio' ('dict'): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.", "### Data Splits\nThe speech material has only train split.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\nPublic Domain, CC-0" ]
[ "TAGS\n#task_categories-automatic-speech-recognition #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-multilingual #size_categories-1K<n<10K #source_datasets-librivox #language-Achinese #language-Balinese #language-Buginese #language-Indonesian #language-Minangkabau #language-Javanese #language-Sundanese #license-cc #region-us \n", "# Dataset Card for LibriVox Indonesia 1.0", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n- Homepage: URL\n- Repository: URL\n- Point of Contact: Cahya Wirawan", "### Dataset Summary\nThe LibriVox Indonesia dataset consists of MP3 audio and a corresponding text file we generated from the public \ndomain audiobooks LibriVox. We collected only languages in Indonesia for this dataset. \nThe original LibriVox audiobooks or sound files' duration varies from a few minutes to a few hours. Each audio \nfile in the speech dataset now lasts from a few seconds to a maximum of 20 seconds. \n\nWe converted the audiobooks to speech datasets using the forced alignment software we developed. It supports \nmultilingual, including low-resource languages, such as Acehnese, Balinese, or Minangkabau. We can also use it \nfor other languages without additional work to train the model.\n\nThe dataset currently consists of 8 hours in 7 languages from Indonesia. We will add more languages or audio files\nas we collect them.", "### Languages", "## Dataset Structure", "### Data Instances\nA typical data point comprises the 'path' to the audio file and its 'sentence'. Additional fields include \n'reader' and 'language'.", "### Data Fields\n'path' ('string'): The path to the audio file\n\n'language' ('string'): The language of the audio file\n\n'reader' ('string'): The reader Id in LibriVox\n\n'sentence' ('string'): The sentence the user read from the book.\n\n'audio' ('dict'): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.", "### Data Splits\nThe speech material has only train split.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\nPublic Domain, CC-0" ]
c7a7286370bdbedb08962e147b3b4c0752c8d2c8
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Zero-Shot Text Classification * Model: autoevaluate/zero-shot-classification * Dataset: autoevaluate/zero-shot-classification-sample * Config: autoevaluate--zero-shot-classification-sample * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model.
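As a reference point, the evaluated task can be sketched locally with the `transformers` zero-shot pipeline, assuming the checkpoint named above is NLI-compatible; the example text and candidate labels below are made up, not taken from the evaluation dataset.

```py
# Hedged local sketch of the zero-shot task; not AutoTrain's internal code.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="autoevaluate/zero-shot-classification",
)

# Hypothetical inputs mirroring the dataset's text/classes/target columns.
result = classifier(
    "The new phone has an amazing camera and battery life.",
    candidate_labels=["electronics", "sports", "politics"],
)
print(result["labels"][0], result["scores"][0])  # top class and its score
```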
autoevaluate/autoeval-staging-eval-autoevaluate__zero-shot-classification-sample-autoevalu-a8cade-61
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T15:34:53+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["autoevaluate/zero-shot-classification-sample"], "eval_info": {"task": "text_zero_shot_classification", "model": "autoevaluate/zero-shot-classification", "metrics": [], "dataset_name": "autoevaluate/zero-shot-classification-sample", "dataset_config": "autoevaluate--zero-shot-classification-sample", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}
2022-09-09T15:35:54+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Zero-Shot Text Classification * Model: autoevaluate/zero-shot-classification * Dataset: autoevaluate/zero-shot-classification-sample * Config: autoevaluate--zero-shot-classification-sample * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @mathemakitten for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: autoevaluate/zero-shot-classification\n* Dataset: autoevaluate/zero-shot-classification-sample\n* Config: autoevaluate--zero-shot-classification-sample\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @mathemakitten for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: autoevaluate/zero-shot-classification\n* Dataset: autoevaluate/zero-shot-classification-sample\n* Config: autoevaluate--zero-shot-classification-sample\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @mathemakitten for evaluating this model." ]
2e68efee3e15ad4aee700a9b569fc5c2e3b05a45
# Dataset Card for REBEL-Portuguese ## Table of Contents - [Dataset Card for REBEL-Portuguese](#dataset-card-for-rebel) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [https://github.com/Babelscape/rebel](https://github.com/Babelscape/rebel) - **Paper:** [https://github.com/Babelscape/rebel/blob/main/docs/EMNLP_2021_REBEL__Camera_Ready_.pdf](https://github.com/Babelscape/rebel/blob/main/docs/EMNLP_2021_REBEL__Camera_Ready_.pdf) - **Point of Contact:** [[email protected]](mailto:[email protected]) ### Dataset Summary Dataset adapted to Portuguese from [REBEL-dataset](https://huggingface.co/datasets/Babelscape/rebel-dataset). ### Supported Tasks and Leaderboards - `text-retrieval-other-relation-extraction`: The dataset can be used to train a model for Relation Extraction, which consists in extracting triplets from raw text, made of subject, object and relation type. ### Languages The dataset is in Portuguese, from the Portuguese Wikipedia. ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data Data comes from Wikipedia text before the table of contents, as well as Wikidata for the triplets annotation. #### Initial Data Collection and Normalization For the data collection, we used the dataset extraction pipeline [cRocoDiLe: Automati**c** **R**elati**o**n Extra**c**ti**o**n **D**ataset w**i**th N**L**I filt**e**ring](https://github.com/Babelscape/crocodile), inspired by the [T-REx Pipeline](https://github.com/hadyelsahar/RE-NLG-Dataset); more details can be found at the [T-REx Website](https://hadyelsahar.github.io/t-rex/). The starting point is a Wikipedia dump as well as a Wikidata one. After the triplets are extracted, an NLI system was used to filter out those not entailed by the text. #### Who are the source language producers? Any Wikipedia and Wikidata contributor. ### Annotations #### Annotation process The dataset extraction pipeline [cRocoDiLe: Automati**c** **R**elati**o**n Extra**c**ti**o**n **D**ataset w**i**th N**L**I filt**e**ring](https://github.com/ju-resplande/crocodile). #### Who are the annotators?
Automatic annotations ### Personal and Sensitive Information All text is from Wikipedia; any personal or sensitive information present there may also be present in this dataset. ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations None for now ## Additional Information ### Dataset Curators ### Licensing Information ### Citation Information ### Contributions Thanks to [@ju-resplande](https://github.com/ju-resplande) for adding this dataset.
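Because the card frames relation extraction as text2text generation, a small decoding helper shows how linearized triplets map back to (subject, relation, object) tuples. This is a sketch under an assumption: the `<triplet>`/`<subj>`/`<obj>` tag scheme follows the original REBEL repository, and we assume this Portuguese port keeps it; the sample string is hypothetical.

```py
# Hedged sketch: decode REBEL-style linearized triplets (assumed tag scheme).
import re

def extract_triplets(text: str):
    """Parse '<triplet> subject <subj> object <obj> relation' chunks."""
    triplets = []
    for chunk in text.split("<triplet>"):
        match = re.match(r"(.*?)<subj>(.*?)<obj>(.*)", chunk, flags=re.S)
        if match:
            subj, obj, rel = (part.strip() for part in match.groups())
            triplets.append((subj, rel, obj))
    return triplets

linearized = "<triplet> Brasília <subj> Brasil <obj> capital de"  # hypothetical output
print(extract_triplets(linearized))  # [('Brasília', 'capital de', 'Brasil')]
```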
ju-resplande/rebel-pt
[ "task_categories:text-retrieval", "task_categories:text2text-generation", "annotations_creators:machine-generated", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:extended|rebel-dataset", "language:pt", "license:cc-by-nc-sa-4.0", "relation-extraction", "conditional-text-generation", "region:us" ]
2022-09-09T16:09:13+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "language": ["pt"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["extended|rebel-dataset"], "task_categories": ["text-retrieval", "text2text-generation"], "task_ids": [], "pretty_name": "rebel-portuguese", "tags": ["relation-extraction", "conditional-text-generation"]}
2022-10-29T11:19:46+00:00
[]
[ "pt" ]
TAGS #task_categories-text-retrieval #task_categories-text2text-generation #annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-unknown #source_datasets-extended|rebel-dataset #language-Portuguese #license-cc-by-nc-sa-4.0 #relation-extraction #conditional-text-generation #region-us
# Dataset Card for REBEL-Portuguese ## Table of Contents - Dataset Card for REBEL-Portuguese - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Initial Data Collection and Normalization - Who are the source language producers? - Annotations - Annotation process - Who are the annotators? - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Repository: URL - Paper: URL - Point of Contact: julianarsg13@URL ### Dataset Summary Dataset adapted to Portuguese from REBEL-dataset. ### Supported Tasks and Leaderboards - 'text-retrieval-other-relation-extraction': The dataset can be used to train a model for Relation Extraction, which consists in extracting triplets from raw text, made of subject, object and relation type. ### Languages The dataset is in Portuguese, from the Portuguese Wikipedia. ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data Data comes from Wikipedia text before the table of contents, as well as Wikidata for the triplets annotation. #### Initial Data Collection and Normalization For the data collection, we used the dataset extraction pipeline cRocoDiLe: Automatic Relation Extraction Dataset with NLI filtering, inspired by the T-REx Pipeline; more details can be found at the T-REx Website. The starting point is a Wikipedia dump as well as a Wikidata one. After the triplets are extracted, an NLI system was used to filter out those not entailed by the text. #### Who are the source language producers? Any Wikipedia and Wikidata contributor. ### Annotations #### Annotation process The dataset extraction pipeline cRocoDiLe: Automatic Relation Extraction Dataset with NLI filtering. #### Who are the annotators? Automatic annotations ### Personal and Sensitive Information All text is from Wikipedia; any personal or sensitive information present there may also be present in this dataset. ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations None for now ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @ju-resplande for adding this dataset.
[ "# Dataset Card for REBEL-Portuguese", "## Table of Contents\n\n- Dataset Card for REBEL-Portuguese\n - Table of Contents\n - Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n - Dataset Creation\n - Curation Rationale\n - Source Data\n - Initial Data Collection and Normalization\n - Who are the source language producers?\n - Annotations\n - Annotation process\n - Who are the annotators?\n - Personal and Sensitive Information\n - Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n - Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Repository: URL\n- Paper: URL\n- Point of Contact: julianarsg13@URL", "### Dataset Summary\n\nDataset adapted to Portuguese from REBEL-dataset .", "### Supported Tasks and Leaderboards\n\n- 'text-retrieval-other-relation-extraction': The dataset can be used to train a model for Relation Extraction, which consists in extracting triplets from raw text, made of subject, object and relation type.", "### Languages\n\nThe dataset is in Portuguese, from the Portuguese Wikipedia.", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data\n\nData comes from Wikipedia text before the table of contents, as well as Wikidata for the triplets annotation.", "#### Initial Data Collection and Normalization\n\nFor the data collection, the dataset extraction pipeline cRocoDiLe: Automatic Relation Extraction Dataset with NLI filtering insipired by T-REx Pipeline more details found at: T-REx Website. The starting point is a Wikipedia dump as well as a Wikidata one.\nAfter the triplets are extracted, an NLI system was used to filter out those not entailed by the text.", "#### Who are the source language producers?\n\nAny Wikipedia and Wikidata contributor.", "### Annotations", "#### Annotation process\n\nThe dataset extraction pipeline cRocoDiLe: Automatic Relation Extraction Dataset with NLI filtering.", "#### Who are the annotators?\n\nAutomatic annottations", "### Personal and Sensitive Information\n\nAll text is from Wikipedia, any Personal or Sensitive Information there may be present in this dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\nNot for now", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @ju-resplande for adding this dataset." ]
[ "TAGS\n#task_categories-text-retrieval #task_categories-text2text-generation #annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-unknown #source_datasets-extended|rebel-dataset #language-Portuguese #license-cc-by-nc-sa-4.0 #relation-extraction #conditional-text-generation #region-us \n", "# Dataset Card for REBEL-Portuguese", "## Table of Contents\n\n- Dataset Card for REBEL-Portuguese\n - Table of Contents\n - Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n - Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n - Dataset Creation\n - Curation Rationale\n - Source Data\n - Initial Data Collection and Normalization\n - Who are the source language producers?\n - Annotations\n - Annotation process\n - Who are the annotators?\n - Personal and Sensitive Information\n - Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n - Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Repository: URL\n- Paper: URL\n- Point of Contact: julianarsg13@URL", "### Dataset Summary\n\nDataset adapted to Portuguese from REBEL-dataset .", "### Supported Tasks and Leaderboards\n\n- 'text-retrieval-other-relation-extraction': The dataset can be used to train a model for Relation Extraction, which consists in extracting triplets from raw text, made of subject, object and relation type.", "### Languages\n\nThe dataset is in Portuguese, from the Portuguese Wikipedia.", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data\n\nData comes from Wikipedia text before the table of contents, as well as Wikidata for the triplets annotation.", "#### Initial Data Collection and Normalization\n\nFor the data collection, the dataset extraction pipeline cRocoDiLe: Automatic Relation Extraction Dataset with NLI filtering insipired by T-REx Pipeline more details found at: T-REx Website. The starting point is a Wikipedia dump as well as a Wikidata one.\nAfter the triplets are extracted, an NLI system was used to filter out those not entailed by the text.", "#### Who are the source language producers?\n\nAny Wikipedia and Wikidata contributor.", "### Annotations", "#### Annotation process\n\nThe dataset extraction pipeline cRocoDiLe: Automatic Relation Extraction Dataset with NLI filtering.", "#### Who are the annotators?\n\nAutomatic annottations", "### Personal and Sensitive Information\n\nAll text is from Wikipedia, any Personal or Sensitive Information there may be present in this dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\nNot for now", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @ju-resplande for adding this dataset." ]
c77c333b7fccd5643138b200a02064979a0db135
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Zero-Shot Text Classification * Model: autoevaluate/zero-shot-classification * Dataset: autoevaluate/zero-shot-classification-sample * Config: autoevaluate--zero-shot-classification-sample * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model.
autoevaluate/autoeval-staging-eval-autoevaluate__zero-shot-classification-sample-autoevalu-40d85c-155
[ "autotrain", "evaluation", "region:us" ]
2022-09-09T18:03:43+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["autoevaluate/zero-shot-classification-sample"], "eval_info": {"task": "text_zero_shot_classification", "model": "autoevaluate/zero-shot-classification", "metrics": [], "dataset_name": "autoevaluate/zero-shot-classification-sample", "dataset_config": "autoevaluate--zero-shot-classification-sample", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}
2022-09-22T01:59:16+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Zero-Shot Text Classification * Model: autoevaluate/zero-shot-classification * Dataset: autoevaluate/zero-shot-classification-sample * Config: autoevaluate--zero-shot-classification-sample * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @mathemakitten for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: autoevaluate/zero-shot-classification\n* Dataset: autoevaluate/zero-shot-classification-sample\n* Config: autoevaluate--zero-shot-classification-sample\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @mathemakitten for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: autoevaluate/zero-shot-classification\n* Dataset: autoevaluate/zero-shot-classification-sample\n* Config: autoevaluate--zero-shot-classification-sample\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @mathemakitten for evaluating this model." ]
922eca60e4c424a62beca76ab414ddc4dbeb1039
# AutoTrain Dataset for project: donut-vs-croissant ## Dataset Description This dataset has been automatically processed by AutoTrain for project donut-vs-croissant. ### Languages The BCP-47 code for the dataset's language is unk. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "image": "<512x512 RGB PIL image>", "target": 0 }, { "image": "<512x512 RGB PIL image>", "target": 0 } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "image": "Image(decode=True, id=None)", "target": "ClassLabel(num_classes=2, names=['croissant', 'donut'], id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 133 | | valid | 362 |
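A brief usage sketch follows; it assumes you have access to this AutoTrain data repository (the repo id is the dataset id recorded below) and uses only the fields shown above.

```py
# Minimal sketch: read one example and decode its ClassLabel.
from datasets import load_dataset

# Assumption: the repo is accessible under the dataset id recorded in this record.
ds = load_dataset("victor/autotrain-data-donut-vs-croissant", split="train")

example = ds[0]
label_names = ds.features["target"].names   # ['croissant', 'donut']
print(example["image"].size)                # PIL image, e.g. (512, 512)
print(label_names[example["target"]])       # human-readable class name
```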
victor/autotrain-data-donut-vs-croissant
[ "task_categories:image-classification", "region:us" ]
2022-09-09T19:29:58+00:00
{"task_categories": ["image-classification"]}
2022-09-09T19:32:23+00:00
[]
[]
TAGS #task_categories-image-classification #region-us
AutoTrain Dataset for project: donut-vs-croissant ================================================= Dataset Description ------------------- This dataset has been automatically processed by AutoTrain for project donut-vs-croissant. ### Languages The BCP-47 code for the dataset's language is unk. Dataset Structure ----------------- ### Data Instances A sample from this dataset looks as follows: ### Dataset Fields The dataset has the following fields (also called "features"): ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows:
[ "### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
[ "TAGS\n#task_categories-image-classification #region-us \n", "### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
671cdca3749b70e3e3b4f23e36428f1b1890ab70
# Cannabis Tests, Curated by Cannlytics <div style="margin-top:1rem; margin-bottom: 1rem;"> <img width="240px" alt="" src="https://firebasestorage.googleapis.com/v0/b/cannlytics.appspot.com/o/public%2Fimages%2Fdatasets%2Fcannabis_tests%2Fcannabis_tests_curated_by_cannlytics.png?alt=media&token=22e4d1da-6b30-4c3f-9ff7-1954ac2739b2"> </div> ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Data Collection and Normalization](#data-collection-and-normalization) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [License](#license) - [Citation](#citation) - [Contributions](#contributions) ## Dataset Description - **Homepage:** <https://github.com/cannlytics/cannlytics> - **Repository:** <https://huggingface.co/datasets/cannlytics/cannabis_tests> - **Point of Contact:** <[email protected]> ### Dataset Summary This dataset is a collection of public cannabis lab test results parsed by [`CoADoc`](https://github.com/cannlytics/cannlytics/tree/main/cannlytics/data/coas), a certificate of analysis (COA) parsing tool. ## Dataset Structure The dataset is partitioned into the various sources of lab results. | Subset | Source | Observations | |--------|--------|--------------| | `rawgarden` | Raw Gardens | 2,667 | | `mcrlabs` | MCR Labs | Coming soon! | | `psilabs` | PSI Labs | Coming soon! | | `sclabs` | SC Labs | Coming soon! | | `washington` | Washington State | Coming soon! | ### Data Instances You can load the `details` for each of the dataset files. For example: ```py from datasets import load_dataset # Download Raw Garden lab result details. dataset = load_dataset('cannlytics/cannabis_tests', 'rawgarden') details = dataset['details'] assert len(details) > 0 print('Downloaded %i observations.' % len(details)) ``` > Note: Configurations for `results` and `values` are planned. For now, you can create these data with `CoADoc().save(details, out_file)`. ### Data Fields Below is a non-exhaustive list of fields, used to standardize the various data that are encountered, that you may expect to encounter in the parsed COA data. | Field | Example| Description | |-------|-----|-------------| | `analyses` | ["cannabinoids"] | A list of analyses performed on a given sample. | | `{analysis}_method` | "HPLC" | The method used for each analysis. | | `{analysis}_status` | "pass" | The pass, fail, or N/A status for pass / fail analyses. | | `coa_urls` | [{"url": "", "filename": ""}] | A list of certificate of analysis (CoA) URLs. | | `date_collected` | 2022-04-20T04:20 | An ISO-formatted time when the sample was collected. | | `date_tested` | 2022-04-20T16:20 | An ISO-formatted time when the sample was tested. | | `date_received` | 2022-04-20T12:20 | An ISO-formatted time when the sample was received. | | `distributor` | "Your Favorite Dispo" | The name of the product distributor, if applicable. 
| | `distributor_address` | "Under the Bridge, SF, CA 55555" | The distributor address, if applicable. | | `distributor_street` | "Under the Bridge" | The distributor street, if applicable. | | `distributor_city` | "SF" | The distributor city, if applicable. | | `distributor_state` | "CA" | The distributor state, if applicable. | | `distributor_zipcode` | "55555" | The distributor zip code, if applicable. | | `distributor_license_number` | "L2Stat" | The distributor license number, if applicable. | | `images` | [{"url": "", "filename": ""}] | A list of image URLs for the sample. | | `lab_results_url` | "https://cannlytics.com/results" | A URL to the sample results online. | | `producer` | "Grow Tent" | The producer of the sampled product. | | `producer_address` | "3rd & Army, SF, CA 55555" | The producer's address. | | `producer_street` | "3rd & Army" | The producer's street. | | `producer_city` | "SF" | The producer's city. | | `producer_state` | "CA" | The producer's state. | | `producer_zipcode` | "55555" | The producer's zipcode. | | `producer_license_number` | "L2Calc" | The producer's license number. | | `product_name` | "Blue Rhino Pre-Roll" | The name of the product. | | `lab_id` | "Sample-0001" | A lab-specific ID for the sample. | | `product_type` | "flower" | The type of product. | | `batch_number` | "Order-0001" | A batch number for the sample or product. | | `metrc_ids` | ["1A4060300002199000003445"] | A list of relevant Metrc IDs. | | `metrc_lab_id` | "1A4060300002199000003445" | The Metrc ID associated with the lab sample. | | `metrc_source_id` | "1A4060300002199000003445" | The Metrc ID associated with the sampled product. | | `product_size` | 2000 | The size of the product in milligrams. | | `serving_size` | 1000 | An estimated serving size in milligrams. | | `servings_per_package` | 2 | The number of servings per package. | | `sample_weight` | 1 | The weight of the product sample in grams. | | `results` | [{...},...] | A list of results, see below for result-specific fields. | | `status` | "pass" | The overall pass / fail status for all contaminant screening analyses. | | `total_cannabinoids` | 14.20 | The analytical total of all cannabinoids measured. | | `total_thc` | 14.00 | The analytical total of THC and THCA. | | `total_cbd` | 0.20 | The analytical total of CBD and CBDA. | | `total_terpenes` | 0.42 | The sum of all terpenes measured. | | `results_hash` | "{sha256-hash}" | An HMAC of the sample's `results` JSON signed with Cannlytics' public key, `"cannlytics.eth"`. | | `sample_id` | "{sha256-hash}" | A generated ID to uniquely identify the `producer`, `product_name`, and `results`. | | `sample_hash` | "{sha256-hash}" | An HMAC of the entire sample JSON signed with Cannlytics' public key, `"cannlytics.eth"`. | <!-- | `strain_name` | "Blue Rhino" | A strain name, if specified. Otherwise, can be attempted to be parsed from the `product_name`. | --> Each result can contain the following fields. | Field | Example| Description | |-------|--------|-------------| | `analysis` | "pesticides" | The analysis used to obtain the result. | | `key` | "pyrethrins" | A standardized key for the result analyte. | | `name` | "Pyrethrins" | The lab's internal name for the result analyte | | `value` | 0.42 | The value of the result. | | `mg_g` | 0.00000042 | The value of the result in milligrams per gram. | | `units` | "ug/g" | The units for the result `value`, `limit`, `lod`, and `loq`. | | `limit` | 0.5 | A pass / fail threshold for contaminant screening analyses. 
| | `lod` | 0.01 | The limit of detection for the result analyte. Values below the `lod` are typically reported as `ND`. | | `loq` | 0.1 | The limit of quantification for the result analyte. Values above the `lod` but below the `loq` are typically reported as `<LOQ`. | | `status` | "pass" | The pass / fail status for contaminant screening analyses. | ### Data Splits The data is split into `details`, `results`, and `values` data. Configurations for `results` and `values` are planned. For now, you can create these data with: ```py from cannlytics.data.coas import CoADoc from datasets import load_dataset import pandas as pd # Download Raw Garden lab result details. repo = 'cannlytics/cannabis_tests' dataset = load_dataset(repo, 'rawgarden') details = dataset['details'] # Save the data locally with "Details", "Results", and "Values" worksheets. outfile = 'details.xlsx' parser = CoADoc() parser.save(details.to_pandas(), outfile) # Read the values. values = pd.read_excel(outfile, sheet_name='Values') # Read the results. results = pd.read_excel(outfile, sheet_name='Results') ``` <!-- Training data is used for training your models. Validation data is used for evaluating your trained models, to help you determine a final model. Test data is used to evaluate your final model. --> ## Dataset Creation ### Curation Rationale Certificates of analysis (CoAs) are abundant for cannabis cultivators, processors, retailers, and consumers too, but the data is often locked away. Rich, valuable laboratory data so close, yet so far away! CoADoc puts these vital data points in your hands by parsing PDFs and URLs, finding all the data, standardizing the data, and cleanly returning the data to you. ### Source Data | Data Source | URL | |-------------|-----| | MCR Labs Test Results | <https://reports.mcrlabs.com> | | PSI Labs Test Results | <https://results.psilabs.org/test-results/> | | Raw Garden Test Results | <https://rawgarden.farm/lab-results/> | | SC Labs Test Results | <https://client.sclabs.com/> | | Washington State Lab Test Results | <https://lcb.app.box.com/s/e89t59s0yb558tjoncjsid710oirqbgd> | #### Data Collection and Normalization You can recreate the dataset using the open source algorithms in the repository. First clone the repository: ``` git clone https://huggingface.co/datasets/cannlytics/cannabis_tests ``` You can then install the algorithm Python (3.9+) requirements: ``` cd cannabis_tests pip install -r requirements.txt ``` Then you can run all of the data-collection algorithms: ``` python algorithms/main.py ``` Or you can run each algorithm individually. For example: ``` python algorithms/get_results_mcrlabs.py ``` In the `algorithms` directory, you can find the data collection scripts described in the table below. | Algorithm | Organization | Description | |-----------|---------------|-------------| | `get_results_mcrlabs.py` | MCR Labs | Get lab results published by MCR Labs. | | `get_results_psilabs.py` | PSI Labs | Get historic lab results published by PSI Labs. | | `get_results_rawgarden.py` | Raw Garden | Get lab results Raw Garden publishes for their products. | | `get_results_sclabs.py` | SC Labs | Get lab results published by SC Labs. | | `get_results_washington.py` | Washington State | Get historic lab results obtained through a FOIA request in Washington State. | ### Personal and Sensitive Information The dataset includes public addresses and contact information for related cannabis licensees. It is important to take care to use these data points in a legal manner. 
## Considerations for Using the Data ### Social Impact of Dataset Arguably, there is substantial social impact that could result from the study of cannabis; therefore, researchers and data consumers alike should take the utmost care in the use of this dataset. ### Discussion of Biases Cannlytics is a for-profit data and analytics company that primarily serves cannabis businesses. The data are not randomly collected and thus sampling bias should be taken into consideration. ### Other Known Limitations The data represents only a subset of the population of cannabis lab results. Non-standard values are coded as follows. | Actual | Coding | |--------|--------| | `'ND'` | `0.000000001` | | `'No detection in 1 gram'` | `0.000000001` | | `'Negative/1g'` | `0.000000001` | | `'PASS'` | `0.000000001` | | `'<LOD'` | `0.00000001` | | `'< LOD'` | `0.00000001` | | `'<LOQ'` | `0.0000001` | | `'< LOQ'` | `0.0000001` | | `'<LLOQ'` | `0.0000001` | | `'≥ LOD'` | `10001` | | `'NR'` | `None` | | `'N/A'` | `None` | | `'na'` | `None` | | `'NT'` | `None` | ## Additional Information ### Dataset Curators Curated by [🔥Cannlytics](https://cannlytics.com)<br> <[email protected]> ### License ``` Copyright (c) 2022 Cannlytics and the Cannabis Data Science Team The files associated with this dataset are licensed under a Creative Commons Attribution 4.0 International license. You can share, copy and modify this dataset so long as you give appropriate credit, provide a link to the CC BY license, and indicate if changes were made, but you may not do so in a way that suggests the rights holder has endorsed you or your use of the dataset. Note that further permission may be required for any content within the dataset that is identified as belonging to a third party. ``` ### Citation Please cite the following if you use the code examples in your research: ```bibtex @misc{cannlytics2022, title={Cannabis Data Science}, author={Skeate, Keegan and O'Sullivan-Sutherland, Candace}, journal={https://github.com/cannlytics/cannabis-data-science}, year={2022} } ``` ### Contributions Thanks to [🔥Cannlytics](https://cannlytics.com), [@candy-o](https://github.com/candy-o), [@hcadeaux](https://huggingface.co/hcadeaux), [@keeganskeate](https://github.com/keeganskeate), [The CESC](https://thecesc.org), and the entire [Cannabis Data Science Team](https://meetup.com/cannabis-data-science/members) for their contributions.
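Building on the card's own loading example, the sketch below flattens the per-sample `results` lists from the `details` split into one long table. It assumes the schema described above; if `results` arrives as a JSON string rather than a list after conversion to pandas, it is parsed first. This is a hedged sketch, not an official Cannlytics utility.

```py
# Hedged sketch: flatten per-sample `results` into a long-format DataFrame.
import json

import pandas as pd
from datasets import load_dataset

dataset = load_dataset("cannlytics/cannabis_tests", "rawgarden")
details = dataset["details"].to_pandas()

rows = []
for _, sample in details.iterrows():
    results = sample.get("results")
    if isinstance(results, str):  # assumption: may be serialized as JSON.
        results = json.loads(results)
    for result in (results if results is not None else []):
        rows.append({
            "sample_id": sample.get("sample_id"),
            "product_name": sample.get("product_name"),
            "analysis": result.get("analysis"),
            "key": result.get("key"),
            "value": result.get("value"),
            "units": result.get("units"),
        })

long_results = pd.DataFrame(rows)
print(long_results.groupby("analysis")["key"].nunique())  # analytes per analysis
```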
cannlytics/cannabis_tests
[ "annotations_creators:expert-generated", "language_creators:expert-generated", "size_categories:1K<n<10K", "source_datasets:original", "license:cc-by-4.0", "cannabis", "lab results", "tests", "region:us" ]
2022-09-10T15:54:44+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "license": ["cc-by-4.0"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "pretty_name": "cannabis_tests", "tags": ["cannabis", "lab results", "tests"]}
2023-02-22T15:48:43+00:00
[]
[]
TAGS #annotations_creators-expert-generated #language_creators-expert-generated #size_categories-1K<n<10K #source_datasets-original #license-cc-by-4.0 #cannabis #lab results #tests #region-us
Cannabis Tests, Curated by Cannlytics ===================================== Table of Contents ----------------- - Table of Contents - Dataset Description - Dataset Summary - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Data Collection and Normalization - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - License - Citation - Contributions Dataset Description ------------------- - Homepage: URL - Repository: URL - Point of Contact: dev@URL ### Dataset Summary This dataset is a collection of public cannabis lab test results parsed by 'CoADoc', a certificate of analysis (COA) parsing tool. Dataset Structure ----------------- The dataset is partitioned into the various sources of lab results. Subset: 'rawgarden', Source: Raw Gardens, Observations: 2,667 Subset: 'mcrlabs', Source: MCR Labs, Observations: Coming soon! Subset: 'psilabs', Source: PSI Labs, Observations: Coming soon! Subset: 'sclabs', Source: SC Labs, Observations: Coming soon! Subset: 'washington', Source: Washington State, Observations: Coming soon! ### Data Instances You can load the 'details' for each of the dataset files. For example: > > Note: Configurations for 'results' and 'values' are planned. For now, you can create these data with 'CoADoc().save(details, out\_file)'. > > > ### Data Fields Below is a non-exhaustive list of fields, used to standardize the various data that are encountered, that you may expect to encounter in the parsed COA data. Field: 'analyses', Example: ["cannabinoids"], Description: A list of analyses performed on a given sample. Field: '{analysis}\_method', Example: "HPLC", Description: The method used for each analysis. Field: '{analysis}\_status', Example: "pass", Description: The pass, fail, or N/A status for pass / fail analyses. Field: 'coa\_urls', Example: [{"url": "", "filename": ""}], Description: A list of certificate of analysis (CoA) URLs. Field: 'date\_collected', Example: 2022-04-20T04:20, Description: An ISO-formatted time when the sample was collected. Field: 'date\_tested', Example: 2022-04-20T16:20, Description: An ISO-formatted time when the sample was tested. Field: 'date\_received', Example: 2022-04-20T12:20, Description: An ISO-formatted time when the sample was received. Field: 'distributor', Example: "Your Favorite Dispo", Description: The name of the product distributor, if applicable. Field: 'distributor\_address', Example: "Under the Bridge, SF, CA 55555", Description: The distributor address, if applicable. Field: 'distributor\_street', Example: "Under the Bridge", Description: The distributor street, if applicable. Field: 'distributor\_city', Example: "SF", Description: The distributor city, if applicable. Field: 'distributor\_state', Example: "CA", Description: The distributor state, if applicable. Field: 'distributor\_zipcode', Example: "55555", Description: The distributor zip code, if applicable. Field: 'distributor\_license\_number', Example: "L2Stat", Description: The distributor license number, if applicable. Field: 'images', Example: [{"url": "", "filename": ""}], Description: A list of image URLs for the sample. 
Field: 'lab\_results\_url', Example: "URL, Description: A URL to the sample results online. Field: 'producer', Example: "Grow Tent", Description: The producer of the sampled product. Field: 'producer\_address', Example: "3rd & Army, SF, CA 55555", Description: The producer's address. Field: 'producer\_street', Example: "3rd & Army", Description: The producer's street. Field: 'producer\_city', Example: "SF", Description: The producer's city. Field: 'producer\_state', Example: "CA", Description: The producer's state. Field: 'producer\_zipcode', Example: "55555", Description: The producer's zipcode. Field: 'producer\_license\_number', Example: "L2Calc", Description: The producer's license number. Field: 'product\_name', Example: "Blue Rhino Pre-Roll", Description: The name of the product. Field: 'lab\_id', Example: "Sample-0001", Description: A lab-specific ID for the sample. Field: 'product\_type', Example: "flower", Description: The type of product. Field: 'batch\_number', Example: "Order-0001", Description: A batch number for the sample or product. Field: 'metrc\_ids', Example: ["1A4060300002199000003445"], Description: A list of relevant Metrc IDs. Field: 'metrc\_lab\_id', Example: "1A4060300002199000003445", Description: The Metrc ID associated with the lab sample. Field: 'metrc\_source\_id', Example: "1A4060300002199000003445", Description: The Metrc ID associated with the sampled product. Field: 'product\_size', Example: 2000, Description: The size of the product in milligrams. Field: 'serving\_size', Example: 1000, Description: An estimated serving size in milligrams. Field: 'servings\_per\_package', Example: 2, Description: The number of servings per package. Field: 'sample\_weight', Example: 1, Description: The weight of the product sample in grams. Field: 'results', Example: [{...},...], Description: A list of results, see below for result-specific fields. Field: 'status', Example: "pass", Description: The overall pass / fail status for all contaminant screening analyses. Field: 'total\_cannabinoids', Example: 14.20, Description: The analytical total of all cannabinoids measured. Field: 'total\_thc', Example: 14.00, Description: The analytical total of THC and THCA. Field: 'total\_cbd', Example: 0.20, Description: The analytical total of CBD and CBDA. Field: 'total\_terpenes', Example: 0.42, Description: The sum of all terpenes measured. Field: 'results\_hash', Example: "{sha256-hash}", Description: An HMAC of the sample's 'results' JSON signed with Cannlytics' public key, '"URL"'. Field: 'sample\_id', Example: "{sha256-hash}", Description: A generated ID to uniquely identify the 'producer', 'product\_name', and 'results'. Field: 'sample\_hash', Example: "{sha256-hash}", Description: An HMAC of the entire sample JSON signed with Cannlytics' public key, '"URL"'. Each result can contain the following fields. Field: 'analysis', Example: "pesticides", Description: The analysis used to obtain the result. Field: 'key', Example: "pyrethrins", Description: A standardized key for the result analyte. Field: 'name', Example: "Pyrethrins", Description: The lab's internal name for the result analyte Field: 'value', Example: 0.42, Description: The value of the result. Field: 'mg\_g', Example: 0.00000042, Description: The value of the result in milligrams per gram. Field: 'units', Example: "ug/g", Description: The units for the result 'value', 'limit', 'lod', and 'loq'. Field: 'limit', Example: 0.5, Description: A pass / fail threshold for contaminant screening analyses. 
Field: 'lod', Example: 0.01, Description: The limit of detection for the result analyte. Values below the 'lod' are typically reported as 'ND'. Field: 'loq', Example: 0.1, Description: The limit of quantification for the result analyte. Values above the 'lod' but below the 'loq' are typically reported as '<LOQ'. Field: 'status', Example: "pass", Description: The pass / fail status for contaminant screening analyses. ### Data Splits The data is split into 'details', 'results', and 'values' data. Configurations for 'results' and 'values' are planned. For now, you can create these data with: Dataset Creation ---------------- ### Curation Rationale Certificates of analysis (CoAs) are abundant for cannabis cultivators, processors, retailers, and consumers too, but the data is often locked away. Rich, valuable laboratory data so close, yet so far away! CoADoc puts these vital data points in your hands by parsing PDFs and URLs, finding all the data, standardizing the data, and cleanly returning the data to you. ### Source Data #### Data Collection and Normalization You can recreate the dataset using the open source algorithms in the repository. First clone the repository: You can then install the algorithm Python (3.9+) requirements: Then you can run all of the data-collection algorithms: Or you can run each algorithm individually. For example: In the 'algorithms' directory, you can find the data collection scripts described in the table below. Algorithm: 'get\_results\_mcrlabs.py', Organization: MCR Labs, Description: Get lab results published by MCR Labs. Algorithm: 'get\_results\_psilabs.py', Organization: PSI Labs, Description: Get historic lab results published by PSI Labs. Algorithm: 'get\_results\_rawgarden.py', Organization: Raw Garden, Description: Get lab results Raw Garden publishes for their products. Algorithm: 'get\_results\_sclabs.py', Organization: SC Labs, Description: Get lab results published by SC Labs. Algorithm: 'get\_results\_washington.py', Organization: Washington State, Description: Get historic lab results obtained through a FOIA request in Washington State. ### Personal and Sensitive Information The dataset includes public addresses and contact information for related cannabis licensees. It is important to take care to use these data points in a legal manner. Considerations for Using the Data --------------------------------- ### Social Impact of Dataset Arguably, there is substantial social impact that could result from the study of cannabis; therefore, researchers and data consumers alike should take the utmost care in the use of this dataset. ### Discussion of Biases Cannlytics is a for-profit data and analytics company that primarily serves cannabis businesses. The data are not randomly collected and thus sampling bias should be taken into consideration. ### Other Known Limitations The data represents only a subset of the population of cannabis lab results. Non-standard values are coded as follows. Additional Information ---------------------- ### Dataset Curators Curated by Cannlytics [dev@URL](mailto:dev@URL) ### License Please cite the following if you use the code examples in your research: ### Contributions Thanks to Cannlytics, @candy-o, @hcadeaux, @keeganskeate, The CESC, and the entire Cannabis Data Science Team for their contributions.
[ "### Dataset Summary\n\n\nThis dataset is a collection of public cannabis lab test results parsed by 'CoADoc', a certificate of analysis (COA) parsing tool.\n\n\nDataset Structure\n-----------------\n\n\nThe dataset is partitioned into the various sources of lab results.\n\n\nSubset: 'rawgarden', Source: Raw Gardens, Observations: 2,667\nSubset: 'mcrlabs', Source: MCR Labs, Observations: Coming soon!\nSubset: 'psilabs', Source: PSI Labs, Observations: Coming soon!\nSubset: 'sclabs', Source: SC Labs, Observations: Coming soon!\nSubset: 'washington', Source: Washington State, Observations: Coming soon!", "### Data Instances\n\n\nYou can load the 'details' for each of the dataset files. For example:\n\n\n\n> \n> Note: Configurations for 'results' and 'values' are planned. For now, you can create these data with 'CoADoc().save(details, out\\_file)'.\n> \n> \n>", "### Data Fields\n\n\nBelow is a non-exhaustive list of fields, used to standardize the various data that are encountered, that you may expect encounter in the parsed COA data.\n\n\nField: 'analyses', Example: [\"cannabinoids\"], Description: A list of analyses performed on a given sample.\nField: '{analysis}\\_method', Example: \"HPLC\", Description: The method used for each analysis.\nField: '{analysis}\\_status', Example: \"pass\", Description: The pass, fail, or N/A status for pass / fail analyses.\nField: 'coa\\_urls', Example: [{\"url\": \"\", \"filename\": \"\"}], Description: A list of certificate of analysis (CoA) URLs.\nField: 'date\\_collected', Example: 2022-04-20T04:20, Description: An ISO-formatted time when the sample was collected.\nField: 'date\\_tested', Example: 2022-04-20T16:20, Description: An ISO-formatted time when the sample was tested.\nField: 'date\\_received', Example: 2022-04-20T12:20, Description: An ISO-formatted time when the sample was received.\nField: 'distributor', Example: \"Your Favorite Dispo\", Description: The name of the product distributor, if applicable.\nField: 'distributor\\_address', Example: \"Under the Bridge, SF, CA 55555\", Description: The distributor address, if applicable.\nField: 'distributor\\_street', Example: \"Under the Bridge\", Description: The distributor street, if applicable.\nField: 'distributor\\_city', Example: \"SF\", Description: The distributor city, if applicable.\nField: 'distributor\\_state', Example: \"CA\", Description: The distributor state, if applicable.\nField: 'distributor\\_zipcode', Example: \"55555\", Description: The distributor zip code, if applicable.\nField: 'distributor\\_license\\_number', Example: \"L2Stat\", Description: The distributor license number, if applicable.\nField: 'images', Example: [{\"url\": \"\", \"filename\": \"\"}], Description: A list of image URLs for the sample.\nField: 'lab\\_results\\_url', Example: \"URL, Description: A URL to the sample results online.\nField: 'producer', Example: \"Grow Tent\", Description: The producer of the sampled product.\nField: 'producer\\_address', Example: \"3rd & Army, SF, CA 55555\", Description: The producer's address.\nField: 'producer\\_street', Example: \"3rd & Army\", Description: The producer's street.\nField: 'producer\\_city', Example: \"SF\", Description: The producer's city.\nField: 'producer\\_state', Example: \"CA\", Description: The producer's state.\nField: 'producer\\_zipcode', Example: \"55555\", Description: The producer's zipcode.\nField: 'producer\\_license\\_number', Example: \"L2Calc\", Description: The producer's license number.\nField: 'product\\_name', Example: \"Blue 
Rhino Pre-Roll\", Description: The name of the product.\nField: 'lab\\_id', Example: \"Sample-0001\", Description: A lab-specific ID for the sample.\nField: 'product\\_type', Example: \"flower\", Description: The type of product.\nField: 'batch\\_number', Example: \"Order-0001\", Description: A batch number for the sample or product.\nField: 'metrc\\_ids', Example: [\"1A4060300002199000003445\"], Description: A list of relevant Metrc IDs.\nField: 'metrc\\_lab\\_id', Example: \"1A4060300002199000003445\", Description: The Metrc ID associated with the lab sample.\nField: 'metrc\\_source\\_id', Example: \"1A4060300002199000003445\", Description: The Metrc ID associated with the sampled product.\nField: 'product\\_size', Example: 2000, Description: The size of the product in milligrams.\nField: 'serving\\_size', Example: 1000, Description: An estimated serving size in milligrams.\nField: 'servings\\_per\\_package', Example: 2, Description: The number of servings per package.\nField: 'sample\\_weight', Example: 1, Description: The weight of the product sample in grams.\nField: 'results', Example: [{...},...], Description: A list of results, see below for result-specific fields.\nField: 'status', Example: \"pass\", Description: The overall pass / fail status for all contaminant screening analyses.\nField: 'total\\_cannabinoids', Example: 14.20, Description: The analytical total of all cannabinoids measured.\nField: 'total\\_thc', Example: 14.00, Description: The analytical total of THC and THCA.\nField: 'total\\_cbd', Example: 0.20, Description: The analytical total of CBD and CBDA.\nField: 'total\\_terpenes', Example: 0.42, Description: The sum of all terpenes measured.\nField: 'results\\_hash', Example: \"{sha256-hash}\", Description: An HMAC of the sample's 'results' JSON signed with Cannlytics' public key, '\"URL\"'.\nField: 'sample\\_id', Example: \"{sha256-hash}\", Description: A generated ID to uniquely identify the 'producer', 'product\\_name', and 'results'.\nField: 'sample\\_hash', Example: \"{sha256-hash}\", Description: An HMAC of the entire sample JSON signed with Cannlytics' public key, '\"URL\"'.\n\n\nEach result can contain the following fields.\n\n\nField: 'analysis', Example: \"pesticides\", Description: The analysis used to obtain the result.\nField: 'key', Example: \"pyrethrins\", Description: A standardized key for the result analyte.\nField: 'name', Example: \"Pyrethrins\", Description: The lab's internal name for the result analyte\nField: 'value', Example: 0.42, Description: The value of the result.\nField: 'mg\\_g', Example: 0.00000042, Description: The value of the result in milligrams per gram.\nField: 'units', Example: \"ug/g\", Description: The units for the result 'value', 'limit', 'lod', and 'loq'.\nField: 'limit', Example: 0.5, Description: A pass / fail threshold for contaminant screening analyses.\nField: 'lod', Example: 0.01, Description: The limit of detection for the result analyte. Values below the 'lod' are typically reported as 'ND'.\nField: 'loq', Example: 0.1, Description: The limit of quantification for the result analyte. Values above the 'lod' but below the 'loq' are typically reported as '<LOQ'.\nField: 'status', Example: \"pass\", Description: The pass / fail status for contaminant screening analyses.", "### Data Splits\n\n\nThe data is split into 'details', 'results', and 'values' data. Configurations for 'results' and 'values' are planned. 
For now, you can create these data with:\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nCertificates of analysis (CoAs) are abundant for cannabis cultivators, processors, retailers, and consumers too, but the data is often locked away. Rich, valuable laboratory data so close, yet so far away! CoADoc puts these vital data points in your hands by parsing PDFs and URLs, finding all the data, standardizing the data, and cleanly returning the data to you.", "### Source Data", "#### Data Collection and Normalization\n\n\nYou can recreate the dataset using the open source algorithms in the repository. First clone the repository:\n\n\nYou can then install the algorithm Python (3.9+) requirements:\n\n\nThen you can run all of the data-collection algorithms:\n\n\nOr you can run each algorithm individually. For example:\n\n\nIn the 'algorithms' directory, you can find the data collection scripts described in the table below.\n\n\nAlgorithm: 'get\\_results\\_mcrlabs.py', Organization: MCR Labs, Description: Get lab results published by MCR Labs.\nAlgorithm: 'get\\_results\\_psilabs.py', Organization: PSI Labs, Description: Get historic lab results published by MCR Labs.\nAlgorithm: 'get\\_results\\_rawgarden.py', Organization: Raw Garden, Description: Get lab results Raw Garden publishes for their products.\nAlgorithm: 'get\\_results\\_sclabs.py', Organization: SC Labs, Description: Get lab results published by SC Labs.\nAlgorithm: 'get\\_results\\_washington.py', Organization: Washington State, Description: Get historic lab results obtained through a FOIA request in Washington State.", "### Personal and Sensitive Information\n\n\nThe dataset includes public addresses and contact information for related cannabis licensees. It is important to take care to use these data points in a legal manner.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nArguably, there is substantial social impact that could result from the study of cannabis, therefore, researchers and data consumers alike should take the utmost care in the use of this dataset.", "### Discussion of Biases\n\n\nCannlytics is a for-profit data and analytics company that primarily serves cannabis businesses. The data are not randomly collected and thus sampling bias should be taken into consideration.", "### Other Known Limitations\n\n\nThe data represents only a subset of the population of cannabis lab results. Non-standard values are coded as follows.\n\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nCurated by Cannlytics \n\n[dev@URL](mailto:dev@URL)", "### License\n\n\nPlease cite the following if you use the code examples in your research:", "### Contributions\n\n\nThanks to Cannlytics, @candy-o, @hcadeaux, @keeganskeate, The CESC, and the entire Cannabis Data Science Team for their contributions." ]
[ "TAGS\n#annotations_creators-expert-generated #language_creators-expert-generated #size_categories-1K<n<10K #source_datasets-original #license-cc-by-4.0 #cannabis #lab results #tests #region-us \n", "### Dataset Summary\n\n\nThis dataset is a collection of public cannabis lab test results parsed by 'CoADoc', a certificate of analysis (COA) parsing tool.\n\n\nDataset Structure\n-----------------\n\n\nThe dataset is partitioned into the various sources of lab results.\n\n\nSubset: 'rawgarden', Source: Raw Gardens, Observations: 2,667\nSubset: 'mcrlabs', Source: MCR Labs, Observations: Coming soon!\nSubset: 'psilabs', Source: PSI Labs, Observations: Coming soon!\nSubset: 'sclabs', Source: SC Labs, Observations: Coming soon!\nSubset: 'washington', Source: Washington State, Observations: Coming soon!", "### Data Instances\n\n\nYou can load the 'details' for each of the dataset files. For example:\n\n\n\n> \n> Note: Configurations for 'results' and 'values' are planned. For now, you can create these data with 'CoADoc().save(details, out\\_file)'.\n> \n> \n>", "### Data Fields\n\n\nBelow is a non-exhaustive list of fields, used to standardize the various data that are encountered, that you may expect encounter in the parsed COA data.\n\n\nField: 'analyses', Example: [\"cannabinoids\"], Description: A list of analyses performed on a given sample.\nField: '{analysis}\\_method', Example: \"HPLC\", Description: The method used for each analysis.\nField: '{analysis}\\_status', Example: \"pass\", Description: The pass, fail, or N/A status for pass / fail analyses.\nField: 'coa\\_urls', Example: [{\"url\": \"\", \"filename\": \"\"}], Description: A list of certificate of analysis (CoA) URLs.\nField: 'date\\_collected', Example: 2022-04-20T04:20, Description: An ISO-formatted time when the sample was collected.\nField: 'date\\_tested', Example: 2022-04-20T16:20, Description: An ISO-formatted time when the sample was tested.\nField: 'date\\_received', Example: 2022-04-20T12:20, Description: An ISO-formatted time when the sample was received.\nField: 'distributor', Example: \"Your Favorite Dispo\", Description: The name of the product distributor, if applicable.\nField: 'distributor\\_address', Example: \"Under the Bridge, SF, CA 55555\", Description: The distributor address, if applicable.\nField: 'distributor\\_street', Example: \"Under the Bridge\", Description: The distributor street, if applicable.\nField: 'distributor\\_city', Example: \"SF\", Description: The distributor city, if applicable.\nField: 'distributor\\_state', Example: \"CA\", Description: The distributor state, if applicable.\nField: 'distributor\\_zipcode', Example: \"55555\", Description: The distributor zip code, if applicable.\nField: 'distributor\\_license\\_number', Example: \"L2Stat\", Description: The distributor license number, if applicable.\nField: 'images', Example: [{\"url\": \"\", \"filename\": \"\"}], Description: A list of image URLs for the sample.\nField: 'lab\\_results\\_url', Example: \"URL, Description: A URL to the sample results online.\nField: 'producer', Example: \"Grow Tent\", Description: The producer of the sampled product.\nField: 'producer\\_address', Example: \"3rd & Army, SF, CA 55555\", Description: The producer's address.\nField: 'producer\\_street', Example: \"3rd & Army\", Description: The producer's street.\nField: 'producer\\_city', Example: \"SF\", Description: The producer's city.\nField: 'producer\\_state', Example: \"CA\", Description: The producer's state.\nField: 'producer\\_zipcode', 
Example: \"55555\", Description: The producer's zipcode.\nField: 'producer\\_license\\_number', Example: \"L2Calc\", Description: The producer's license number.\nField: 'product\\_name', Example: \"Blue Rhino Pre-Roll\", Description: The name of the product.\nField: 'lab\\_id', Example: \"Sample-0001\", Description: A lab-specific ID for the sample.\nField: 'product\\_type', Example: \"flower\", Description: The type of product.\nField: 'batch\\_number', Example: \"Order-0001\", Description: A batch number for the sample or product.\nField: 'metrc\\_ids', Example: [\"1A4060300002199000003445\"], Description: A list of relevant Metrc IDs.\nField: 'metrc\\_lab\\_id', Example: \"1A4060300002199000003445\", Description: The Metrc ID associated with the lab sample.\nField: 'metrc\\_source\\_id', Example: \"1A4060300002199000003445\", Description: The Metrc ID associated with the sampled product.\nField: 'product\\_size', Example: 2000, Description: The size of the product in milligrams.\nField: 'serving\\_size', Example: 1000, Description: An estimated serving size in milligrams.\nField: 'servings\\_per\\_package', Example: 2, Description: The number of servings per package.\nField: 'sample\\_weight', Example: 1, Description: The weight of the product sample in grams.\nField: 'results', Example: [{...},...], Description: A list of results, see below for result-specific fields.\nField: 'status', Example: \"pass\", Description: The overall pass / fail status for all contaminant screening analyses.\nField: 'total\\_cannabinoids', Example: 14.20, Description: The analytical total of all cannabinoids measured.\nField: 'total\\_thc', Example: 14.00, Description: The analytical total of THC and THCA.\nField: 'total\\_cbd', Example: 0.20, Description: The analytical total of CBD and CBDA.\nField: 'total\\_terpenes', Example: 0.42, Description: The sum of all terpenes measured.\nField: 'results\\_hash', Example: \"{sha256-hash}\", Description: An HMAC of the sample's 'results' JSON signed with Cannlytics' public key, '\"URL\"'.\nField: 'sample\\_id', Example: \"{sha256-hash}\", Description: A generated ID to uniquely identify the 'producer', 'product\\_name', and 'results'.\nField: 'sample\\_hash', Example: \"{sha256-hash}\", Description: An HMAC of the entire sample JSON signed with Cannlytics' public key, '\"URL\"'.\n\n\nEach result can contain the following fields.\n\n\nField: 'analysis', Example: \"pesticides\", Description: The analysis used to obtain the result.\nField: 'key', Example: \"pyrethrins\", Description: A standardized key for the result analyte.\nField: 'name', Example: \"Pyrethrins\", Description: The lab's internal name for the result analyte\nField: 'value', Example: 0.42, Description: The value of the result.\nField: 'mg\\_g', Example: 0.00000042, Description: The value of the result in milligrams per gram.\nField: 'units', Example: \"ug/g\", Description: The units for the result 'value', 'limit', 'lod', and 'loq'.\nField: 'limit', Example: 0.5, Description: A pass / fail threshold for contaminant screening analyses.\nField: 'lod', Example: 0.01, Description: The limit of detection for the result analyte. Values below the 'lod' are typically reported as 'ND'.\nField: 'loq', Example: 0.1, Description: The limit of quantification for the result analyte. 
Values above the 'lod' but below the 'loq' are typically reported as '<LOQ'.\nField: 'status', Example: \"pass\", Description: The pass / fail status for contaminant screening analyses.", "### Data Splits\n\n\nThe data is split into 'details', 'results', and 'values' data. Configurations for 'results' and 'values' are planned. For now, you can create these data with:\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nCertificates of analysis (CoAs) are abundant for cannabis cultivators, processors, retailers, and consumers too, but the data is often locked away. Rich, valuable laboratory data so close, yet so far away! CoADoc puts these vital data points in your hands by parsing PDFs and URLs, finding all the data, standardizing the data, and cleanly returning the data to you.", "### Source Data", "#### Data Collection and Normalization\n\n\nYou can recreate the dataset using the open source algorithms in the repository. First clone the repository:\n\n\nYou can then install the algorithm Python (3.9+) requirements:\n\n\nThen you can run all of the data-collection algorithms:\n\n\nOr you can run each algorithm individually. For example:\n\n\nIn the 'algorithms' directory, you can find the data collection scripts described in the table below.\n\n\nAlgorithm: 'get\\_results\\_mcrlabs.py', Organization: MCR Labs, Description: Get lab results published by MCR Labs.\nAlgorithm: 'get\\_results\\_psilabs.py', Organization: PSI Labs, Description: Get historic lab results published by MCR Labs.\nAlgorithm: 'get\\_results\\_rawgarden.py', Organization: Raw Garden, Description: Get lab results Raw Garden publishes for their products.\nAlgorithm: 'get\\_results\\_sclabs.py', Organization: SC Labs, Description: Get lab results published by SC Labs.\nAlgorithm: 'get\\_results\\_washington.py', Organization: Washington State, Description: Get historic lab results obtained through a FOIA request in Washington State.", "### Personal and Sensitive Information\n\n\nThe dataset includes public addresses and contact information for related cannabis licensees. It is important to take care to use these data points in a legal manner.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nArguably, there is substantial social impact that could result from the study of cannabis, therefore, researchers and data consumers alike should take the utmost care in the use of this dataset.", "### Discussion of Biases\n\n\nCannlytics is a for-profit data and analytics company that primarily serves cannabis businesses. The data are not randomly collected and thus sampling bias should be taken into consideration.", "### Other Known Limitations\n\n\nThe data represents only a subset of the population of cannabis lab results. Non-standard values are coded as follows.\n\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nCurated by Cannlytics \n\n[dev@URL](mailto:dev@URL)", "### License\n\n\nPlease cite the following if you use the code examples in your research:", "### Contributions\n\n\nThanks to Cannlytics, @candy-o, @hcadeaux, @keeganskeate, The CESC, and the entire Cannabis Data Science Team for their contributions." ]
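The record above describes `sample_id` as a SHA-256 digest that uniquely identifies the `producer`, `product_name`, and `results`, and `sample_hash` as an HMAC of the full sample JSON signed with Cannlytics' public key. As a rough, non-authoritative sketch of that hashing scheme in Python (the exact JSON serialization and key handling are assumptions, since the card does not spell them out):

```python
import hashlib
import hmac
import json

def create_sample_id(producer: str, product_name: str, results: list) -> str:
    # Assumed serialization: a sorted-key JSON object over the three fields
    # the card names; the real CoADoc implementation may differ.
    payload = json.dumps(
        {"producer": producer, "product_name": product_name, "results": results},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def create_sample_hash(sample: dict, public_key: str) -> str:
    # HMAC-SHA256 over the whole sample JSON, keyed with the public key,
    # mirroring the `sample_hash` description above.
    message = json.dumps(sample, sort_keys=True).encode("utf-8")
    return hmac.new(public_key.encode("utf-8"), message, hashlib.sha256).hexdigest()
```

A consumer could then recompute the HMAC over a downloaded record and compare it to the stored `sample_hash` as a basic integrity check.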
2c6c46871b025d47c494f0cfc2235dcf2cadc1fd
# Dataset Card for Clinical Trials's Reason to Stop

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://www.opentargets.org
- **Repository:** https://github.com/LesyaR/stopReasons
- **Paper:**
- **Point of Contact:** [email protected]

### Dataset Summary

This dataset contains a curated classification of more than 5000 reasons why a clinical trial has suffered an early stop.
The text has been extracted from clinicaltrials.gov, the largest resource of clinical trial information. The text has been curated by members of the Open Targets organisation, a project aimed at providing data relevant to drug development.

All 17 possible classes have been carefully defined:
- Business_Administrative
- Another_Study
- Negative
- Study_Design
- Invalid_Reason
- Ethical_Reason
- Insufficient_Data
- Insufficient_Enrollment
- Study_Staff_Moved
- Endpoint_Met
- Regulatory
- Logistics_Resources
- Safety_Sideeffects
- No_Context
- Success
- Interim_Analysis
- Covid19

### Supported Tasks and Leaderboards

Multi-class classification

### Languages

English

## Dataset Structure

### Data Instances

```python
{'text': 'Due to company decision to focus resources on a larger, controlled study in this patient population."',
'label': 'Another_Study'}
```

### Data Fields

`text`: contains the reason for the CT early stop
`label`: contains one of the 17 defined classes

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

This dataset has an Apache 2.0 license.

### Citation Information

[More Information Needed]

### Contributions

Thanks to [@ireneisdoomed](https://github.com/ireneisdoomed) for adding this dataset.
opentargets/clinical_trial_reason_to_stop
[ "task_categories:text-classification", "task_ids:multi-class-classification", "task_ids:multi-label-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:en", "license:apache-2.0", "bio", "research papers", "clinical trial", "drug development", "region:us" ]
2022-09-10T17:20:47+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification", "multi-label-classification"], "pretty_name": "clinical_trial_reason_to_stop", "tags": ["bio", "research papers", "clinical trial", "drug development"]}
2022-12-12T08:57:19+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-multi-class-classification #task_ids-multi-label-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-apache-2.0 #bio #research papers #clinical trial #drug development #region-us
# Dataset Card for Clinical Trials's Reason to Stop ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: - Point of Contact: data@URL ### Dataset Summary This dataset contains a curated classification of more than 5000 reasons why a clinical trial has suffered an early stop. The text has been extracted from URL, the largest resource of clinical trial information. The text has been curated by members of the Open Targets organisation, a project aimed at providing data relevant to drug development. All 17 possible classes have been carefully defined: - Business_Administrative - Another_Study - Negative - Study_Design - Invalid_Reason - Ethical_Reason - Insufficient_Data - Insufficient_Enrollment - Study_Staff_Moved - Endpoint_Met - Regulatory - Logistics_Resources - Safety_Sideeffects - No_Context - Success - Interim_Analysis - Covid19 ### Supported Tasks and Leaderboards Multi class classification ### Languages English ## Dataset Structure ### Data Instances ### Data Fields 'text': contains the reason for the CT early stop 'label': contains one of the 17 defined classes ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information This dataset has an Apache 2.0 license. ### Contributions Thanks to @ireneisdoomed for adding this dataset.
[ "# Dataset Card for Clinical Trials's Reason to Stop", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper:\n- Point of Contact: data@URL", "### Dataset Summary\n\nThis dataset contains a curated classification of more than 5000 reasons why a clinical trial has suffered an early stop.\nThe text has been extracted from URL, the largest resource of clinical trial information. The text has been curated by members of the Open Targets organisation, a project aimed at providing data relevant to drug development.\n\nAll 17 possible classes have been carefully defined:\n- Business_Administrative\n- Another_Study\n- Negative\n- Study_Design\n- Invalid_Reason\n- Ethical_Reason\n- Insufficient_Data\n- Insufficient_Enrollment\n- Study_Staff_Moved\n- Endpoint_Met\n- Regulatory\n- Logistics_Resources\n- Safety_Sideeffects\n- No_Context\n- Success\n- Interim_Analysis\n- Covid19", "### Supported Tasks and Leaderboards\n\nMulti class classification", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n'text': contains the reason for the CT early stop\n'label': contains one of the 17 defined classes", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nThis dataset has an Apache 2.0 license.", "### Contributions\n\nThanks to @ireneisdoomed for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #task_ids-multi-label-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-apache-2.0 #bio #research papers #clinical trial #drug development #region-us \n", "# Dataset Card for Clinical Trials's Reason to Stop", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper:\n- Point of Contact: data@URL", "### Dataset Summary\n\nThis dataset contains a curated classification of more than 5000 reasons why a clinical trial has suffered an early stop.\nThe text has been extracted from URL, the largest resource of clinical trial information. The text has been curated by members of the Open Targets organisation, a project aimed at providing data relevant to drug development.\n\nAll 17 possible classes have been carefully defined:\n- Business_Administrative\n- Another_Study\n- Negative\n- Study_Design\n- Invalid_Reason\n- Ethical_Reason\n- Insufficient_Data\n- Insufficient_Enrollment\n- Study_Staff_Moved\n- Endpoint_Met\n- Regulatory\n- Logistics_Resources\n- Safety_Sideeffects\n- No_Context\n- Success\n- Interim_Analysis\n- Covid19", "### Supported Tasks and Leaderboards\n\nMulti class classification", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields\n\n'text': contains the reason for the CT early stop\n'label': contains one of the 17 defined classes", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nThis dataset has an Apache 2.0 license.", "### Contributions\n\nThanks to @ireneisdoomed for adding this dataset." ]
d05e581aa93337daa3728ba7ef4c9882221b1491
This is a floral dataset for training textual inversion in Stable Diffusion; it is being added here for future reference and additional implementation work.
jags/floral
[ "license:mit", "region:us" ]
2022-09-10T17:41:30+00:00
{"license": "mit"}
2022-09-10T18:03:16+00:00
[]
[]
TAGS #license-mit #region-us
This is a floral dataset for training textual inversion in Stable Diffusion; it is being added here for future reference and additional implementation work.
[]
[ "TAGS\n#license-mit #region-us \n" ]
93f548596663c5459ad33c179ae74e2d785ffbae
# Controlled Text Reduction

This dataset contains Controlled Text Reduction triplets - document-summary pairs, and the spans in the document that cover the summary.
The task input consists of a document with pre-selected spans in it ("highlights"). The output is a text covering all and only the highlighted content.

The script downloads the data from the original [GitHub repository](https://github.com/lovodkin93/Controlled_Text_Reduction).

### Format
The dataset contains the following important features:

* `doc_text` - the input text.
* `summary_text` - the output text.
* `highlight_spans` - the spans in the input text (the doc_text) that lead to the output text (the summary_text).

```json
{'doc_text': 'The motion picture industry\'s most coveted award...with 32.',
'summary_text': 'The Oscar, created 60 years ago by MGM...awarded person (32).',
'highlight_spans':'[[0, 48], [50, 55], [57, 81], [184, 247], ..., [953, 975], [1033, 1081]]'}
```
where for each document-summary pair, we save the spans in the input document that lead to the summary.

Notice that the dataset consists of two subsets:
1. `DUC-2001-2002` - which is further divided into 3 splits (train, validation and test).
2. `CNN-DM` - which has a single split.

Citation
========
If you find the Controlled Text Reduction dataset useful in your research, please cite the following paper:
```
@misc{https://doi.org/10.48550/arxiv.2210.13449,
  doi = {10.48550/ARXIV.2210.13449},
  url = {https://arxiv.org/abs/2210.13449},
  author = {Slobodkin, Aviv and Roit, Paul and Hirsch, Eran and Ernst, Ori and Dagan, Ido},
  keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {Controlled Text Reduction},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Zero v1.0 Universal}
}
```
biu-nlp/Controlled-Text-Reduction-dataset
[ "arxiv:2210.13449", "region:us" ]
2022-09-11T08:44:55+00:00
{}
2022-10-25T12:25:49+00:00
[ "2210.13449" ]
[]
TAGS #arxiv-2210.13449 #region-us
# Controlled Text Reduction

This dataset contains Controlled Text Reduction triplets - document-summary pairs, and the spans in the document that cover the summary.
The task input consists of a document with pre-selected spans in it ("highlights"). The output is a text covering all and only the highlighted content.

The script downloads the data from the original GitHub repository.

### Format
The dataset contains the following important features:

* 'doc_text' - the input text.
* 'summary_text' - the output text.
* 'highlight_spans' - the spans in the input text (the doc_text) that lead to the output text (the summary_text).

where for each document-summary pair, we save the spans in the input document that lead to the summary.

Notice that the dataset consists of two subsets:
1. 'DUC-2001-2002' - which is further divided into 3 splits (train, validation and test).
2. 'CNN-DM' - which has a single split.

Citation
========
If you find the Controlled Text Reduction dataset useful in your research, please cite the following paper:
[ "# Controlled Text Reduction\n\nThis dataset contains Controlled Text Reduction triplets - document-summary pairs, and the spans in the document that cover the summary.\nThe task input consists of a document with pre-selected spans in it (\"highlights\"). The output is a text covering all and only the highlighted content.\n\nThe script downloads the data from the original GitHub repository.", "### Format\n\nThe dataset contains the following important features:\n \n* 'doc_text' - the input text. \n* 'summary_text' - the output text. \n* 'highlight_spans' - the spans in the input text (the doc_text) that lead to the output text (the summary_text). \n\n\nwhere for each document-summary pair, we save the spans in the input document that lead to the summary. \n\nNotice that the dataset consists of two subsets:\n1. 'DUC-2001-2002' - which is further divided into 3 splits (train, validation and test).\n2. 'CNN-DM' - which has a single split.\n\nCitation\n========\nIf you find the Controlled Text Reduction dataset useful in your research, please cite the following paper:" ]
[ "TAGS\n#arxiv-2210.13449 #region-us \n", "# Controlled Text Reduction\n\nThis dataset contains Controlled Text Reduction triplets - document-summary pairs, and the spans in the document that cover the summary.\nThe task input consists of a document with pre-selected spans in it (\"highlights\"). The output is a text covering all and only the highlighted content.\n\nThe script downloads the data from the original GitHub repository.", "### Format\n\nThe dataset contains the following important features:\n \n* 'doc_text' - the input text. \n* 'summary_text' - the output text. \n* 'highlight_spans' - the spans in the input text (the doc_text) that lead to the output text (the summary_text). \n\n\nwhere for each document-summary pair, we save the spans in the input document that lead to the summary. \n\nNotice that the dataset consists of two subsets:\n1. 'DUC-2001-2002' - which is further divided into 3 splits (train, validation and test).\n2. 'CNN-DM' - which has a single split.\n\nCitation\n========\nIf you find the Controlled Text Reduction dataset useful in your research, please cite the following paper:" ]
568efa79ccdda4c4aeda7f6e48220dc8cd7f3953
Dataset extracted from https://www.cdc.gov/coronavirus/2019-ncov/hcp/faq.html#Treatment-and-Management.
CShorten/CDC-COVID-FAQ
[ "license:afl-3.0", "region:us" ]
2022-09-11T14:42:18+00:00
{"license": "afl-3.0"}
2022-09-11T14:42:46+00:00
[]
[]
TAGS #license-afl-3.0 #region-us
Dataset extracted from URL
[]
[ "TAGS\n#license-afl-3.0 #region-us \n" ]
b8b8ebb1af25699dab8f6e630051823bc27ff875
# Artic Dataset

This dataset was created using the Artic API, and the descriptions were scraped from the artic.edu website. The scraping code is shared at [github.com/abhisharsinha/gsoc](https://github.com/abhisharsinha/gsoc/)

The images are hosted at this [google cloud bucket](https://storage.googleapis.com/mys-released-models/gsoc/artic-dataset.zip). The image filenames correspond to `image_id` in the tabular dataset.

The description was only available for selected artworks. `full_description` is the whole text scraped from the description page. `description` is the first paragraph of the `full_description`.
abhishars/artic-dataset
[ "license:cc", "region:us" ]
2022-09-12T02:30:58+00:00
{"license": "cc"}
2023-01-05T14:41:46+00:00
[]
[]
TAGS #license-cc #region-us
# Artic Dataset This dataset was created using the Artic API, and the descriptions were scraped from the URL website. The scraping code is shared at URL The images are hosted at this google cloud bucket. The image filenames correspond to 'image_id' in the tabular dataset. The description was only available for selected artworks. 'full_description' is the whole text scraped from the description page. 'description' is the first paragraph of the 'full_description'.
[ "# Artic Dataset\n\nThis dataset was created using the Artic API, and the descriptions were scraped from the URL website. The scraping code is shared at URL\n\nThe images are hosted at this google cloud bucket. The image filenames correspond to 'image_id' in the tabular dataset.\n\nThe description was only available for selected artworks. 'full_description' is the whole text scraped from the description page. 'description' is the first paragraph of the 'full_description'." ]
[ "TAGS\n#license-cc #region-us \n", "# Artic Dataset\n\nThis dataset was created using the Artic API, and the descriptions were scraped from the URL website. The scraping code is shared at URL\n\nThe images are hosted at this google cloud bucket. The image filenames correspond to 'image_id' in the tabular dataset.\n\nThe description was only available for selected artworks. 'full_description' is the whole text scraped from the description page. 'description' is the first paragraph of the 'full_description'." ]