| column | type | min | max |
| --- | --- | ---: | ---: |
| sha | stringlengths | 40 | 40 |
| text | stringlengths | 1 | 13.4M |
| id | stringlengths | 2 | 117 |
| tags | sequencelengths | 1 | 7.91k |
| created_at | stringlengths | 25 | 25 |
| metadata | stringlengths | 2 | 875k |
| last_modified | stringlengths | 25 | 25 |
| arxiv | sequencelengths | 0 | 25 |
| languages | sequencelengths | 0 | 7.91k |
| tags_str | stringlengths | 17 | 159k |
| text_str | stringlengths | 1 | 447k |
| text_lists | sequencelengths | 0 | 352 |
| processed_texts | sequencelengths | 1 | 353 |
| tokens_length | sequencelengths | 1 | 353 |
| input_texts | sequencelengths | 1 | 40 |
86a9aaf66354ef7537ceee351364693f948d8327
# Dataset Card for CodeQueries

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [How to use](#how-to-use)
  - [Data Splits and Data Fields](#data-splits-and-data-fields)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** [Data](https://huggingface.co/datasets/thepurpleowl/codequeries)
- **Repository:** [Code](https://github.com/thepurpleowl/codequeries-benchmark)
- **Paper:**

### Dataset Summary

CodeQueries is a dataset for evaluating the ability of neural networks to answer semantic queries over code. Given a query and code, a model is expected to identify the answer and supporting-fact spans in the code for the query. This is extractive question answering over code, for questions with a large scope (entire files) and complexity that includes both single- and multi-hop reasoning.

### Supported Tasks and Leaderboards

Extractive question answering over code; semantic understanding of code.

### Languages

The dataset contains code context from `python` files.

## Dataset Structure

### How to use

The dataset can be used directly with the Hugging Face `datasets` package. You can load and iterate through the dataset for the proposed five settings with the following two lines of code:

```python
import datasets

# In addition to `twostep`, the other supported settings are <ideal/file_ideal/prefix>.
ds = datasets.load_dataset("thepurpleowl/codequeries", "twostep", split=datasets.Split.TEST)
print(next(iter(ds)))

# OUTPUT:
# {'query_name': 'Unused import',
#  'code_file_path': 'rcbops/glance-buildpackage/glance/tests/unit/test_db.py',
#  'context_block': {'content': '# vim: tabstop=4 shiftwidth=4 softtabstop=4\n\n# Copyright 2010-2011 OpenStack, LLC\ ...', 'metadata': 'root', 'header': "['module', '___EOS___']", 'index': 0},
#  'answer_spans': [{'span': 'from glance.common import context', 'start_line': 19, 'start_column': 0, 'end_line': 19, 'end_column': 33}],
#  'supporting_fact_spans': [],
#  'example_type': 1,
#  'single_hop': False,
#  'subtokenized_input_sequence': ['[CLS]_', 'Un', 'used_', 'import_', '[SEP]_', 'module_', '\\u\\u\\uEOS\\u\\u\\u_', '#', ' ', 'vim', ':', ...],
#  'label_sequence': [4, 4, 4, 4, 4, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, ...],
#  'relevance_label': 1}
```

### Data Splits and Data Fields

Detailed information on the data splits for the proposed settings can be found in the paper.

In general, data splits in all the proposed settings have examples with the following fields -

```
- query_name (query name to uniquely identify the query)
- code_file_path (relative source file path w.r.t. ETH Py150 corpus)
- context_blocks (code blocks as context with metadata) [the `prefix` setting doesn't have this field and `twostep` has `context_block`]
- answer_spans (answer spans with metadata)
- supporting_fact_spans (supporting-fact spans with metadata)
- example_type (example type: 1 (positive) or 0 (negative))
- single_hop (True or False - for query type)
- subtokenized_input_sequence (example subtokens) [the `prefix` setting has the corresponding token ids]
- label_sequence (example subtoken labels)
- relevance_label (relevance label of a block: 0 (not relevant) or 1 (relevant)) [only the `twostep` setting has this field]
```
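A small usage sketch (an addition, not from the original card): assuming the `twostep` fields shown above, one can filter the test split down to the context blocks marked relevant before running span prediction.

```python
import datasets

# Sketch (assumed usage): keep only context blocks whose relevance_label
# marks them as relevant in the `twostep` test split.
ds = datasets.load_dataset("thepurpleowl/codequeries", "twostep", split=datasets.Split.TEST)
relevant_blocks = ds.filter(lambda ex: ex["relevance_label"] == 1)
print(len(relevant_blocks))
```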
## Dataset Creation

The dataset is created using the [ETH Py150 Open dataset](https://github.com/google-research-datasets/eth_py150_open) as the source of code contexts. CodeQL was used to obtain the semantic queries and the corresponding answer/supporting-fact spans in the ETH Py150 Open corpus files.

## Additional Information

### Licensing Information

The source code repositories used for preparing CodeQueries are based on the [ETH Py150 Open dataset](https://github.com/google-research-datasets/eth_py150_open) and are redistributable under the respective licenses. A Hugging Face dataset for ETH Py150 Open is available [here](https://huggingface.co/datasets/eth_py150_open). The labeling prepared and provided by us as part of CodeQueries is released under the Apache-2.0 license.
thepurpleowl/codequeries
[ "task_categories:question-answering", "task_ids:extractive-qa", "annotations_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:code", "license:apache-2.0", "neural modeling of code", "code question answering", "code semantic understanding", "region:us" ]
2022-08-24T08:27:43+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["code"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa"], "pretty_name": "codequeries", "tags": ["neural modeling of code", "code question answering", "code semantic understanding"]}
2023-06-03T11:50:46+00:00
[]
[ "code" ]
2bb2848a1beb37f03ba3b09eac4401c290df503e
## Bibtex

```
@inproceedings{greff2021kubric,
  title = {Kubric: a scalable dataset generator},
  author = {Klaus Greff and Francois Belletti and Lucas Beyer and Carl Doersch and Yilun Du and Daniel Duckworth and David J Fleet and Dan Gnanapragasam and Florian Golemo and Charles Herrmann and Thomas Kipf and Abhijit Kundu and Dmitry Lagun and Issam Laradji and Hsueh-Ti (Derek) Liu and Henning Meyer and Yishu Miao and Derek Nowrouzezahrai and Cengiz Oztireli and Etienne Pot and Noha Radwan and Daniel Rebain and Sara Sabour and Mehdi S. M. Sajjadi and Matan Sela and Vincent Sitzmann and Austin Stone and Deqing Sun and Suhani Vora and Ziyu Wang and Tianhao Wu and Kwang Moo Yi and Fangcheng Zhong and Andrea Tagliasacchi},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2022},
}
```

# Kubric

A data generation pipeline for creating semi-realistic synthetic multi-object videos with rich annotations such as instance segmentation masks, depth maps, and optical flow.

## Motivation and design

We need better data for training and evaluating machine learning systems, especially in the context of unsupervised multi-object video understanding. Current systems succeed on [toy datasets](https://github.com/deepmind/multi_object_datasets), but fail on real-world data. Progress could be greatly accelerated if we had the ability to create suitable datasets of varying complexity on demand. Kubric is mainly built on top of pybullet (for physics simulation) and Blender (for rendering); however, the code is kept modular to potentially support different rendering backends.

## Getting started

For instructions, please refer to [https://kubric.readthedocs.io](https://kubric.readthedocs.io).

Assuming you have docker installed, to generate the data above simply execute:

```
git clone https://github.com/google-research/kubric.git
cd kubric
docker pull kubricdockerhub/kubruntu
docker run --rm --interactive \
    --user $(id -u):$(id -g) \
    --volume "$(pwd):/kubric" \
    kubricdockerhub/kubruntu \
    /usr/bin/python3 examples/helloworld.py
ls output
```

Kubric employs **Blender 2.93** (see [here](https://github.com/google-research/kubric/blob/01a08d274234f32f2adc4f7d5666b39490f953ad/docker/Blender.Dockerfile#L48)), so if you want to open the generated `*.blend` scene file for interactive inspection (i.e. without needing to render the scene), please make sure you have installed the correct Blender version.

## Requirements

- A pipeline for conveniently generating video data.
- Physics simulation for automatically generating physical interactions between multiple objects.
- Good control over the complexity of the generated data, so that we can evaluate individual aspects such as variability of objects and textures.
- Realism: ideally, the ability to span the entire complexity range from CLEVR all the way to real-world video such as YouTube-8M. This is clearly not feasible, but we would like to get as close as possible.
- Access to rich ground-truth information about the objects in a scene for the purpose of evaluation (e.g., object segmentations and properties).
- Control over the train/test split to evaluate compositionality and systematic generalization (for example, on held-out combinations of features or objects).

## Challenges and datasets

Generally, we store datasets for the challenges in this [Google Cloud Bucket](https://console.cloud.google.com/storage/browser/kubric-public) (see the browsing sketch after the lists below). More specifically, these challenges are *dataset contributions* of the Kubric CVPR'22 paper:

* [MOVi: Multi-Object Video](challenges/movi)
* [Texture-Structure in NeRF](challenges/texture_structure_nerf)
* [Optical Flow](challenges/optical_flow)
* [Pre-training Visual Representations](challenges/pretraining_visual)
* [Robust NeRF](challenges/robust_nerf)
* [Multi-View Object Matting](challenges/multiview_matting)
* [Complex BRDFs](challenges/complex_brdf)
* [Single View Reconstruction](challenges/single_view_reconstruction)
* [Video Based Reconstruction](challenges/video_based_reconstruction)
* [Point Tracking](challenges/point_tracking)

Pointers to additional datasets/workers:

* [ToyBox (from Neural Semantic Fields)](https://nesf3d.github.io)
* [MultiShapeNet (from Scene Representation Transformer)](https://srt-paper.github.io)
* [SyntheticTrio (from Controllable Neural Radiance Fields)](https://github.com/kacperkan/conerf-kubric-dataset#readme)
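A minimal browsing sketch (an addition, not part of the original README): listing a few objects in the public bucket with the `google-cloud-storage` client; anonymous access is assumed to suffice since the bucket is public.

```python
from google.cloud import storage

# Sketch: browse the public kubric-public bucket without credentials.
client = storage.Client.create_anonymous_client()
for blob in client.list_blobs("kubric-public", max_results=10):
    print(blob.name)
```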
## Disclaimer

This is not an official Google product.
simulate-explorer/Example
[ "license:mit", "region:us" ]
2022-08-24T08:45:17+00:00
{"license": "mit"}
2022-08-29T10:34:36+00:00
[]
[]
738036ce5d904fdf2509ce44cd1d5d63b25582fa
This dataset is converted from 12 high-quality datasets (duconv, durecdial, ecm, naturalconv, persona, tencent, kdconv, crosswoz, risawoz, diamante, restoration, and LCCC-base) and is used for the continued-pretraining task for T5-pegasus in the Mengzi version.
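A minimal loading sketch (an addition, not part of the original card), assuming the repository files are in a layout the generic `datasets` loader can handle:

```python
from datasets import load_dataset

# Sketch: attempt to load the converted dialogue data directly; if the repo
# layout is non-standard, the files may need manual handling instead.
ds = load_dataset("Jaren/T5-dialogue-pretrain-data")
print(ds)
```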
Jaren/T5-dialogue-pretrain-data
[ "region:us" ]
2022-08-24T10:39:09+00:00
{}
2022-08-30T14:01:24+00:00
[]
[]
bd9f47f758affab100c81931d6afba84bab9ae06
Warning: this dataset does not follow the standard Hugging Face format; download and process the files according to your own needs. It contains only intra-sentence relationships. `Gold` is the positive set from the original corpus; `Positive` is all intra-sentence relationships.
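Since the card asks users to download and process the files themselves, a minimal fetching sketch (an addition, not part of the original card) could use `huggingface_hub`:

```python
from huggingface_hub import snapshot_download

# Sketch: download the raw repository files for manual processing, since the
# card warns that the standard `datasets` layout is not followed.
local_dir = snapshot_download(repo_id="dyhsup/ChemProt_CPR", repo_type="dataset")
print(local_dir)  # path to the downloaded files
```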
dyhsup/ChemProt_CPR
[ "license:other", "region:us" ]
2022-08-24T12:05:55+00:00
{"license": "other"}
2022-08-31T11:09:31+00:00
[]
[]
6f7dc71b8fd4e8aed7b04752b563c5edf84694c7
# Dataset Card for the EUR-Lex dataset

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** http://nlp.cs.aueb.gr/software_and_datasets/EURLEX57K/
- **Repository:** http://nlp.cs.aueb.gr/software_and_datasets/EURLEX57K/
- **Paper:** https://www.aclweb.org/anthology/P19-1636/
- **Leaderboard:** N/A

### Dataset Summary

EURLEX57K can be viewed as an improved version of the dataset released by Mencia and Furnkranz (2007), which has been widely used in Large-scale Multi-label Text Classification (LMTC) research, but is less than half the size of EURLEX57K (19.6k documents, 4k EUROVOC labels) and more than ten years old. EURLEX57K contains 57k legislative documents in English from EUR-Lex (https://eur-lex.europa.eu) with an average length of 727 words. Each document contains three major zones:

- the header, which includes the title and name of the legal body enforcing the legal act;
- the recitals, which are legal background references; and
- the main body, usually organized in articles.

**Labeling / Annotation**

All the documents of the dataset have been annotated by the Publications Office of EU (https://publications.europa.eu/en) with multiple concepts from EUROVOC (http://eurovoc.europa.eu/). While EUROVOC includes approx. 7k concepts (labels), only 4,271 (59.31%) are present in EURLEX57K, of which only 2,049 (47.97%) have been assigned to more than 10 documents. The 4,271 labels are also divided into frequent (746 labels), few-shot (3,362), and zero-shot (163), depending on whether they were assigned to more than 50, fewer than 50 but at least one, or no training documents, respectively.

### Supported Tasks and Leaderboards

The dataset supports:

**Multi-label Text Classification:** Given the text of a document, a model predicts the relevant EUROVOC concepts.

**Few-shot and Zero-shot learning:** As already noted, the labels can be divided into three groups: frequent (746 labels), few-shot (3,362), and zero-shot (163), depending on whether they were assigned to more than 50, fewer than 50 but at least one, or no training documents, respectively.
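A minimal sketch (an addition, not part of the original card) of turning the `eurovoc_concepts` label lists into binary indicator vectors for the multi-label task; the two toy documents are illustrative assumptions:

```python
from sklearn.preprocessing import MultiLabelBinarizer

# Sketch: binarize EUROVOC concept lists for multi-label classification.
docs = [
    {"eurovoc_concepts": ["192", "2356", "2560"]},
    {"eurovoc_concepts": ["862", "863"]},
]
mlb = MultiLabelBinarizer()
y = mlb.fit_transform([d["eurovoc_concepts"] for d in docs])
print(mlb.classes_)  # ['192' '2356' '2560' '862' '863']
print(y)             # one binary row per document
```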
### Languages

All documents are written in English.

## Dataset Structure

### Data Instances

```json
{
  "celex_id": "31979D0509",
  "title": "79/509/EEC: Council Decision of 24 May 1979 on financial aid from the Community for the eradication of African swine fever in Spain",
  "text": "COUNCIL DECISION of 24 May 1979 on financial aid from the Community for the eradication of African swine fever in Spain (79/509/EEC)\nTHE COUNCIL OF THE EUROPEAN COMMUNITIES\nHaving regard to the Treaty establishing the European Economic Community, and in particular Article 43 thereof,\nHaving regard to the proposal from the Commission (1),\nHaving regard to the opinion of the European Parliament (2),\nWhereas the Community should take all appropriate measures to protect itself against the appearance of African swine fever on its territory;\nWhereas to this end the Community has undertaken, and continues to undertake, action designed to contain outbreaks of this type of disease far from its frontiers by helping countries affected to reinforce their preventive measures ; whereas for this purpose Community subsidies have already been granted to Spain;\nWhereas these measures have unquestionably made an effective contribution to the protection of Community livestock, especially through the creation and maintenance of a buffer zone north of the river Ebro;\nWhereas, however, in the opinion of the Spanish authorities themselves, the measures so far implemented must be reinforced if the fundamental objective of eradicating the disease from the entire country is to be achieved;\nWhereas the Spanish authorities have asked the Community to contribute to the expenses necessary for the efficient implementation of a total eradication programme;\nWhereas a favourable response should be given to this request by granting aid to Spain, having regard to the undertaking given by that country to protect the Community against African swine fever and to eliminate completely this disease by the end of a five-year eradication plan;\nWhereas this eradication plan must include certain measures which guarantee the effectiveness of the action taken, and it must be possible to adapt these measures to developments in the situation by means of a procedure establishing close cooperation between the Member States and the Commission;\nWhereas it is necessary to keep the Member States regularly informed as to the progress of the action undertaken,",
  "eurovoc_concepts": ["192", "2356", "2560", "862", "863"]
}
```

### Data Fields

The following data fields are provided for documents (`train`, `dev`, `test`):

`celex_id`: (**str**) The official ID of the document. The CELEX number is the unique identifier for all publications in both Eur-Lex and CELLAR.\
`title`: (**str**) The title of the document.\
`text`: (**str**) The full content of each document, which is represented by its `header`, `recitals` and `main_body`.\
`eurovoc_concepts`: (**List[str]**) The relevant EUROVOC concepts (labels).

If you want to use the descriptors of EUROVOC concepts, similar to Chalkidis et al. (2020), please load: https://archive.org/download/EURLEX57K/eurovoc_concepts.jsonl

```python
import json

# Read the EUROVOC concept descriptors (one JSON object per line). A list is
# used because JSON objects parse into dicts, which are not hashable, so the
# original set comprehension would fail.
with open('./eurovoc_concepts.jsonl') as jsonl_file:
    eurovoc_concepts = [json.loads(line) for line in jsonl_file]
```

### Data Splits

| Split | No of Documents | Avg. words | Avg. labels |
| --- | --- | --- | --- |
| Train | 45,000 | 729 | 5 |
| Development | 6,000 | 714 | 5 |
| Test | 6,000 | 725 | 5 |

## Dataset Creation

### Curation Rationale

The dataset was curated by Chalkidis et al. (2019).\
The documents have been annotated by the Publications Office of EU (https://publications.europa.eu/en).

### Source Data

#### Initial Data Collection and Normalization

The original data are available at the EUR-Lex portal (https://eur-lex.europa.eu) in an unprocessed format. The documents were downloaded from the EUR-Lex portal in HTML format. The relevant metadata and EUROVOC concepts were downloaded from the SPARQL endpoint of the Publications Office of EU (http://publications.europa.eu/webapi/rdf/sparql).

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

* The original documents are available at the EUR-Lex portal (https://eur-lex.europa.eu) in an unprocessed HTML format. The HTML code was stripped and the documents split into sections.
* The documents have been annotated by the Publications Office of EU (https://publications.europa.eu/en).

#### Who are the annotators?

Publications Office of EU (https://publications.europa.eu/en)

### Personal and Sensitive Information

The dataset does not include personal or sensitive information.

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

Chalkidis et al. (2019)

### Licensing Information

© European Union, 1998-2021

The Commission's document reuse policy is based on Decision 2011/833/EU. Unless otherwise specified, you can re-use the legal documents published in EUR-Lex for commercial or non-commercial purposes.

The copyright for the editorial content of this website, the summaries of EU legislation and the consolidated texts, which is owned by the EU, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made.

Source: https://eur-lex.europa.eu/content/legal-notice/legal-notice.html\
Read more: https://eur-lex.europa.eu/content/help/faq/reuse-contents-eurlex.html

### Citation Information

*Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis and Ion Androutsopoulos.*
*Large-Scale Multi-Label Text Classification on EU Legislation.*
*Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019). Florence, Italy. 2019*

```
@inproceedings{chalkidis-etal-2019-large,
    title = "Large-Scale Multi-Label Text Classification on {EU} Legislation",
    author = "Chalkidis, Ilias and Fergadiotis, Manos and Malakasiotis, Prodromos and Androutsopoulos, Ion",
    booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
    year = "2019",
    address = "Florence, Italy",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/P19-1636",
    doi = "10.18653/v1/P19-1636",
    pages = "6314--6322"
}
```

### Contributions

Thanks to [@iliaschalkidis](https://github.com/iliaschalkidis) for adding this dataset.
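For reference, a minimal loading sketch (an addition, not part of the original card); the repository id matches this mirror, and the split names are assumed to follow the Data Splits table:

```python
from datasets import load_dataset

# Sketch: load this EURLEX57K mirror; the split names are assumptions.
eurlex = load_dataset("jonathanli/eurlex")
print(eurlex)
```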
jonathanli/eurlex
[ "task_categories:text-classification", "task_ids:multi-label-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:cc-by-sa-4.0", "legal-topic-classification", "region:us" ]
2022-08-24T14:28:36+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-label-classification"], "paperswithcode_id": "eurlex57k", "pretty_name": "the EUR-Lex dataset", "tags": ["legal-topic-classification"]}
2022-10-24T14:26:49+00:00
[]
[ "en" ]
The HTML code was striped and the documents split into sections.\n* The documents have been annotated by the Publications Office of EU (URL#### Who are the annotators?\n\n\nPublications Office of EU (URL### Personal and Sensitive Information\n\n\nThe dataset does not include personal or sensitive information.\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators\n\n\nChalkidis et al. (2019)" ]
816621ee6b2c082e5e1062a5bad126feb81b9449
HF version of Edinburgh-NLP's [Code docstrings corpus](https://github.com/EdinburghNLP/code-docstring-corpus)
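Since the card gives no usage snippet, here is a minimal sketch of loading it through the standard `datasets` API; the repo id comes from this card, while the split name and column layout are assumptions that may differ in the actual corpus:

```python
from datasets import load_dataset

# A minimal sketch: the repo id is taken from this card; the "train" split
# and the column layout are assumptions, not confirmed by the card.
ds = load_dataset("teven/code_docstring_corpus", split="train")
print(ds)      # shows the available columns and row count
print(ds[0])   # inspect one code/docstring record
```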
teven/code_docstring_corpus
[ "region:us" ]
2022-08-24T15:04:17+00:00
{}
2022-08-24T19:01:58+00:00
[]
[]
TAGS #region-us
HF version of Edinburgh-NLP's Code docstrings corpus
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
1d750cb1af1c154e447d6baa330110933105a600
HF-datasets version of DeepMind's [code_contests](https://github.com/deepmind/code_contests) dataset, notably used to train AlphaCode. 1 row per solution; no test data or incorrect solutions included (only name/source/description/solution/language/difficulty).
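As a minimal sketch of reading the per-solution rows (the repo id comes from this card; the split name is an assumption, and the column names follow the field list above):

```python
from datasets import load_dataset

# A minimal sketch: the repo id comes from this card; the "train" split is
# an assumption, and the columns follow the field list in the description.
ds = load_dataset("teven/code_contests", split="train")
row = ds[0]
for key in ("name", "source", "description", "solution", "language", "difficulty"):
    print(key, "->", str(row.get(key))[:80])  # truncate long fields for display
```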
teven/code_contests
[ "region:us" ]
2022-08-24T16:28:47+00:00
{}
2022-08-24T19:01:04+00:00
[]
[]
TAGS #region-us
HF-datasets version of DeepMind's code_contests dataset, notably used to train AlphaCode. 1 row per solution; no test data or incorrect solutions included (only name/source/description/solution/language/difficulty).
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
059b500407cd10d3d0254d9c143d353f89ed7271
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: FardinSaboori/bert-finetuned-squad * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@ahmetgunduz](https://huggingface.co/ahmetgunduz) for evaluating this model.
autoevaluate/autoeval-eval-project-squad-54745b0c-1311450106
[ "autotrain", "evaluation", "region:us" ]
2022-08-24T19:33:58+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "FardinSaboori/bert-finetuned-squad", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-08-24T19:36:33+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Question Answering * Model: FardinSaboori/bert-finetuned-squad * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @ahmetgunduz for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: FardinSaboori/bert-finetuned-squad\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ahmetgunduz for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: FardinSaboori/bert-finetuned-squad\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ahmetgunduz for evaluating this model." ]
[ 13, 92, 18 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: FardinSaboori/bert-finetuned-squad\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @ahmetgunduz for evaluating this model." ]
00f6010354dc41b964436402e91548d954663e01
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: 21iridescent/distilbert-base-uncased-finetuned-squad * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@ahmetgunduz](https://huggingface.co/ahmetgunduz) for evaluating this model.
autoevaluate/autoeval-eval-project-squad-54745b0c-1311450107
[ "autotrain", "evaluation", "region:us" ]
2022-08-24T19:34:48+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "21iridescent/distilbert-base-uncased-finetuned-squad", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-08-24T19:37:00+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Question Answering * Model: 21iridescent/distilbert-base-uncased-finetuned-squad * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @ahmetgunduz for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: 21iridescent/distilbert-base-uncased-finetuned-squad\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ahmetgunduz for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: 21iridescent/distilbert-base-uncased-finetuned-squad\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ahmetgunduz for evaluating this model." ]
[ 13, 99, 18 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: 21iridescent/distilbert-base-uncased-finetuned-squad\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @ahmetgunduz for evaluating this model." ]
d2e7a920820db43013d54b67ef1fc315cb5f55cb
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: Aiyshwariya/bert-finetuned-squad * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@ahmetgunduz](https://huggingface.co/ahmetgunduz) for evaluating this model.
autoevaluate/autoeval-eval-project-squad-54745b0c-1311450108
[ "autotrain", "evaluation", "region:us" ]
2022-08-24T19:35:01+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "Aiyshwariya/bert-finetuned-squad", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-08-24T19:37:49+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Question Answering * Model: Aiyshwariya/bert-finetuned-squad * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @ahmetgunduz for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: Aiyshwariya/bert-finetuned-squad\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ahmetgunduz for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: Aiyshwariya/bert-finetuned-squad\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ahmetgunduz for evaluating this model." ]
[ 13, 91, 18 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: Aiyshwariya/bert-finetuned-squad\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @ahmetgunduz for evaluating this model." ]
f7396bc0d39f208076d0d8af13b4644dc3bdd7f8
# Digital Peter The Peter dataset can be used for reading texts from the manuscripts written by Peter the Great. The dataset annotation contains end-to-end markup for training detection and OCR models, as well as an end-to-end model for reading text from pages. Paper is available at http://arxiv.org/abs/2103.09354 ## Description Digital Peter is an educational task with a historical slant created on the basis of several AI technologies (Computer Vision, NLP, and knowledge graphs). The task was prepared jointly with the Saint Petersburg Institute of History (N.P.Lihachov mansion) of Russian Academy of Sciences, Federal Archival Agency of Russia and Russian State Archive of Ancient Acts. A detailed description of the problem (with an immersion in the problem) can be found in [detailed_description_of_the_task_en.pdf](https://github.com/sberbank-ai/digital_peter_aij2020/blob/master/desc/detailed_description_of_the_task_en.pdf) The dataset consists of 662 full page images and 9696 annotated text files. There are 265788 symbols and approximately 50998 words. ## Annotation format The annotation is in COCO format. The `annotation.json` should have the following dictionaries: - `annotation["categories"]` - a list of dicts with category info (category names and indexes). - `annotation["images"]` - a list of dictionaries with a description of images; each dictionary must contain the fields: - `file_name` - name of the image file. - `id` for image id. - `annotation["annotations"]` - a list of dictionaries with markup information. Each dictionary stores a description for one polygon from the dataset, and must contain the following fields: - `image_id` - the index of the image on which the polygon is located. - `category_id` - the polygon's category index. - `attributes` - dict with some additional annotation information. In the `translation` subdict you can find the text translation for the line. - `segmentation` - the coordinates of the polygon, a list of numbers - which are coordinate pairs x and y. ## Competition We held a competition based on the Digital Peter dataset. Here is the github [link](https://github.com/sberbank-ai/digital_peter_aij2020). Here is the competition [page](https://ods.ai/tracks/aij2020) (registration required).
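To make the annotation format concrete, here is a minimal sketch of reading such a COCO-style file; the top-level keys come from the description above, while the file path and the `id`/`name` keys inside `categories` are assumptions based on standard COCO conventions:

```python
import json

# A minimal sketch of reading the COCO-style annotation described above.
# The file path and the id/name keys inside "categories" are assumptions
# (standard COCO conventions); the other keys come from this card.
with open("annotation.json", encoding="utf-8") as f:
    annotation = json.load(f)

categories = {c["id"]: c["name"] for c in annotation["categories"]}
images = {img["id"]: img["file_name"] for img in annotation["images"]}

for ann in annotation["annotations"][:5]:
    points = len(ann["segmentation"]) // 2           # (x, y) coordinate pairs
    print(
        images[ann["image_id"]],                     # image containing the polygon
        categories[ann["category_id"]],              # polygon category name
        ann["attributes"].get("translation"),        # text transcription of the line
        f"{points} polygon points",
    )
```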
ai-forever/Peter
[ "task_categories:image-segmentation", "task_categories:object-detection", "source_datasets:original", "language:ru", "license:mit", "optical-character-recognition", "text-detection", "ocr", "arxiv:2103.09354", "region:us" ]
2022-08-25T09:03:42+00:00
{"language": ["ru"], "license": ["mit"], "source_datasets": ["original"], "task_categories": ["image-segmentation", "object-detection"], "task_ids": [], "tags": ["optical-character-recognition", "text-detection", "ocr"]}
2022-10-25T10:09:06+00:00
[ "2103.09354" ]
[ "ru" ]
TAGS #task_categories-image-segmentation #task_categories-object-detection #source_datasets-original #language-Russian #license-mit #optical-character-recognition #text-detection #ocr #arxiv-2103.09354 #region-us
# Digital Peter The Peter dataset can be used for reading texts from the manuscripts written by Peter the Great. The dataset annotation contains end-to-end markup for training detection and OCR models, as well as an end-to-end model for reading text from pages. Paper is available at URL ## Description Digital Peter is an educational task with a historical slant created on the basis of several AI technologies (Computer Vision, NLP, and knowledge graphs). The task was prepared jointly with the Saint Petersburg Institute of History (N.P.Lihachov mansion) of Russian Academy of Sciences, Federal Archival Agency of Russia and Russian State Archive of Ancient Acts. A detailed description of the problem (with an immersion in the problem) can be found in detailed_description_of_the_task_en.pdf The dataset consists of 662 full page images and 9696 annotated text files. There are 265788 symbols and approximately 50998 words. ## Annotation format The annotation is in COCO format. The 'URL' should have the following dictionaries: - 'annotation["categories"]' - a list of dicts with category info (category names and indexes). - 'annotation["images"]' - a list of dictionaries with a description of images; each dictionary must contain the fields: - 'file_name' - name of the image file. - 'id' for image id. - 'annotation["annotations"]' - a list of dictionaries with markup information. Each dictionary stores a description for one polygon from the dataset, and must contain the following fields: - 'image_id' - the index of the image on which the polygon is located. - 'category_id' - the polygon's category index. - 'attributes' - dict with some additional annotation information. In the 'translation' subdict you can find the text translation for the line. - 'segmentation' - the coordinates of the polygon, a list of numbers - which are coordinate pairs x and y. ## Competition We held a competition based on the Digital Peter dataset. Here is the github link. Here is the competition page (registration required).
[ "# Digital Peter\n\nThe Peter dataset can be used for reading texts from the manuscripts written by Peter the Great. The dataset annotation contains end-to-end markup for training detection and OCR models, as well as an end-to-end model for reading text from pages.\n\nPaper is available at URL", "## Description\n\nDigital Peter is an educational task with a historical slant created on the basis of several AI technologies (Computer Vision, NLP, and knowledge graphs). The task was prepared jointly with the Saint Petersburg Institute of History (N.P.Lihachov mansion) of Russian Academy of Sciences, Federal Archival Agency of Russia and Russian State Archive of Ancient Acts.\n\nA detailed description of the problem (with an immersion in the problem) can be found in detailed_description_of_the_task_en.pdf\n\nThe dataset consists of 662 full page images and 9696 annotated text files. There are 265788 symbols and approximately 50998 words.", "## Annotation format\n\nThe annotation is in COCO format. The 'URL' should have the following dictionaries:\n\n- 'annotation[\"categories\"]' - a list of dicts with category info (category names and indexes).\n- 'annotation[\"images\"]' - a list of dictionaries with a description of images; each dictionary must contain the fields:\n - 'file_name' - name of the image file.\n - 'id' for image id.\n- 'annotation[\"annotations\"]' - a list of dictionaries with markup information. Each dictionary stores a description for one polygon from the dataset, and must contain the following fields:\n - 'image_id' - the index of the image on which the polygon is located.\n - 'category_id' - the polygon's category index.\n - 'attributes' - dict with some additional annotation information. In the 'translation' subdict you can find the text translation for the line.\n - 'segmentation' - the coordinates of the polygon, a list of numbers - which are coordinate pairs x and y.", "## Competition\n\nWe held a competition based on the Digital Peter dataset.\nHere is the github link. Here is the competition page (registration required)." ]
[ "TAGS\n#task_categories-image-segmentation #task_categories-object-detection #source_datasets-original #language-Russian #license-mit #optical-character-recognition #text-detection #ocr #arxiv-2103.09354 #region-us \n", "# Digital Peter\n\nThe Peter dataset can be used for reading texts from the manuscripts written by Peter the Great. The dataset annotation contains end-to-end markup for training detection and OCR models, as well as an end-to-end model for reading text from pages.\n\nPaper is available at URL", "## Description\n\nDigital Peter is an educational task with a historical slant created on the basis of several AI technologies (Computer Vision, NLP, and knowledge graphs). The task was prepared jointly with the Saint Petersburg Institute of History (N.P.Lihachov mansion) of Russian Academy of Sciences, Federal Archival Agency of Russia and Russian State Archive of Ancient Acts.\n\nA detailed description of the problem (with an immersion in the problem) can be found in detailed_description_of_the_task_en.pdf\n\nThe dataset consists of 662 full page images and 9696 annotated text files. There are 265788 symbols and approximately 50998 words.", "## Annotation format\n\nThe annotation is in COCO format. The 'URL' should have the following dictionaries:\n\n- 'annotation[\"categories\"]' - a list of dicts with category info (category names and indexes).\n- 'annotation[\"images\"]' - a list of dictionaries with a description of images; each dictionary must contain the fields:\n - 'file_name' - name of the image file.\n - 'id' for image id.\n- 'annotation[\"annotations\"]' - a list of dictionaries with markup information. Each dictionary stores a description for one polygon from the dataset, and must contain the following fields:\n - 'image_id' - the index of the image on which the polygon is located.\n - 'category_id' - the polygon's category index.\n - 'attributes' - dict with some additional annotation information. In the 'translation' subdict you can find the text translation for the line.\n - 'segmentation' - the coordinates of the polygon, a list of numbers - which are coordinate pairs x and y.", "## Competition\n\nWe held a competition based on the Digital Peter dataset.\nHere is the github link. Here is the competition page (registration required)." ]
[ 75, 68, 148, 261, 29 ]
[ "passage: TAGS\n#task_categories-image-segmentation #task_categories-object-detection #source_datasets-original #language-Russian #license-mit #optical-character-recognition #text-detection #ocr #arxiv-2103.09354 #region-us \n# Digital Peter\n\nThe Peter dataset can be used for reading texts from the manuscripts written by Peter the Great. The dataset annotation contain end-to-end markup for training detection and OCR models, as well as an end-to-end model for reading text from pages.\n\nPaper is available at URL## Description\n\nDigital Peter is an educational task with a historical slant created on the basis of several AI technologies (Computer Vision, NLP, and knowledge graphs). The task was prepared jointly with the Saint Petersburg Institute of History (N.P.Lihachov mansion) of Russian Academy of Sciences, Federal Archival Agency of Russia and Russian State Archive of Ancient Acts.\n\nA detailed description of the problem (with an immersion in the problem) can be found in detailed_description_of_the_task_en.pdf\n\nThe dataset consists of 662 full page images and 9696 annotated text files. There are 265788 symbols and approximately 50998 words." ]
872656a156f32e4058307e50e234a44a727a9503
# Dataset Card for Wiki Toxic ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary The Wiki Toxic dataset is a modified, cleaned version of the dataset used in the [Kaggle Toxic Comment Classification challenge](https://www.kaggle.com/competitions/jigsaw-toxic-comment-classification-challenge/overview) from 2017/18. The dataset contains comments collected from Wikipedia forums and classifies them into two categories, `toxic` and `non-toxic`. The Kaggle dataset was cleaned using the included `clean.py` file. ### Supported Tasks and Leaderboards - Text Classification: the dataset can be used for training a model to recognise toxicity in sentences and classify them accordingly. ### Languages The sole language used in the dataset is English. ## Dataset Structure ### Data Instances For each data point, there is an id, the comment_text itself, and a label (0 for non-toxic, 1 for toxic). ``` {'id': 'a123a58f610cffbc', 'comment_text': '"This article SUCKS. It may be poorly written, poorly formatted, or full of pointless crap that no one cares about, and probably all of the above. If it can be rewritten into something less horrible, please, for the love of God, do so, before the vacuum caused by its utter lack of quality drags the rest of Wikipedia down into a bottomless pit of mediocrity."', 'label': 1} ``` ### Data Fields - `id`: A unique identifier string for each comment - `comment_text`: A string containing the text of the comment - `label`: An integer, either 0 if the comment is non-toxic, or 1 if the comment is toxic ### Data Splits The Wiki Toxic dataset has three splits: *train*, *validation*, and *test*. The statistics for each split are below: | Dataset Split | Number of data points in split | | ----------- | ----------- | | Train | 127,656 | | Validation | 31,915 | | Test | 63,978 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
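Given the `id`/`comment_text`/`label` fields described above, a minimal usage sketch (the repo id and split names come from this card; the label encoding follows the Data Fields section):

```python
from datasets import load_dataset

# A minimal sketch: the repo id and split names come from this card;
# `label` is 0 for non-toxic and 1 for toxic comments.
ds = load_dataset("OxAISH-AL-LLM/wiki_toxic", split="train")
toxic = ds.filter(lambda row: row["label"] == 1)
print(f"{len(toxic)} of {len(ds)} training comments are labelled toxic")
print(toxic[0]["comment_text"][:120])
```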
OxAISH-AL-LLM/wiki_toxic
[ "task_categories:text-classification", "task_ids:hate-speech-detection", "annotations_creators:crowdsourced", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|other", "language:en", "license:cc0-1.0", "wikipedia", "toxicity", "toxic comments", "region:us" ]
2022-08-25T11:59:12+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["en"], "license": ["cc0-1.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|other"], "task_categories": ["text-classification"], "task_ids": ["hate-speech-detection"], "pretty_name": "Toxic Wikipedia Comments", "tags": ["wikipedia", "toxicity", "toxic comments"]}
2022-09-19T14:53:19+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-hate-speech-detection #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|other #language-English #license-cc0-1.0 #wikipedia #toxicity #toxic comments #region-us
Dataset Card for Wiki Toxic =========================== Table of Contents ----------------- * Table of Contents * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: * Repository: * Paper: * Leaderboard: * Point of Contact: ### Dataset Summary The Wiki Toxic dataset is a modified, cleaned version of the dataset used in the Kaggle Toxic Comment Classification challenge from 2017/18. The dataset contains comments collected from Wikipedia forums and classifies them into two categories, 'toxic' and 'non-toxic'. The Kaggle dataset was cleaned using the included 'URL' file. ### Supported Tasks and Leaderboards * Text Classification: the dataset can be used for training a model to recognise toxicity in sentences and classify them accordingly. ### Languages The sole language used in the dataset is English. Dataset Structure ----------------- ### Data Instances For each data point, there is an id, the comment\_text itself, and a label (0 for non-toxic, 1 for toxic). ### Data Fields * 'id': A unique identifier string for each comment * 'comment\_text': A string containing the text of the comment * 'label': An integer, either 0 if the comment is non-toxic, or 1 if the comment is toxic ### Data Splits The Wiki Toxic dataset has three splits: *train*, *validation*, and *test*. The statistics for each split are below: Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @github-username for adding this dataset.
[ "### Dataset Summary\n\n\nThe Wiki Toxic dataset is a modified, cleaned version of the dataset used in the Kaggle Toxic Comment Classification challenge from 2017/18. The dataset contains comments collected from Wikipedia forums and classifies them into two categories, 'toxic' and 'non-toxic'.\n\n\nThe Kaggle dataset was cleaned using the included 'URL' file.", "### Supported Tasks and Leaderboards\n\n\n* Text Classification: the dataset can be used for training a model to recognise toxicity in sentences and classify them accordingly.", "### Languages\n\n\nThe sole language used in the dataset is English.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nFor each data point, there is an id, the comment\\_text itself, and a label (0 for non-toxic, 1 for toxic).", "### Data Fields\n\n\n* 'id': A unique identifier string for each comment\n* 'comment\\_text': A string containing the text of the comment\n* 'label': An integer, either 0 if the comment is non-toxic, or 1 if the comment is toxic", "### Data Splits\n\n\nThe Wiki Toxic dataset has three splits: *train*, *validation*, and *test*. The statistics for each split are below:\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @github-username for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-hate-speech-detection #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|other #language-English #license-cc0-1.0 #wikipedia #toxicity #toxic comments #region-us \n", "### Dataset Summary\n\n\nThe Wiki Toxic dataset is a modified, cleaned version of the dataset used in the Kaggle Toxic Comment Classification challenge from 2017/18. The dataset contains comments collected from Wikipedia forums and classifies them into two categories, 'toxic' and 'non-toxic'.\n\n\nThe Kaggle dataset was cleaned using the included 'URL' file.", "### Supported Tasks and Leaderboards\n\n\n* Text Classification: the dataset can be used for training a model to recognise toxicity in sentences and classify them accordingly.", "### Languages\n\n\nThe sole language used in the dataset is English.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nFor each data point, there is an id, the comment\\_text itself, and a label (0 for non-toxic, 1 for toxic).", "### Data Fields\n\n\n* 'id': A unique identifier string for each comment\n* 'comment\\_text': A string containing the text of the comment\n* 'label': An integer, either 0 if the comment is non-toxic, or 1 if the comment is toxic", "### Data Splits\n\n\nThe Wiki Toxic dataset has three splits: *train*, *validation*, and *test*. The statistics for each split are below:\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @github-username for adding this dataset." ]
[ 104, 88, 42, 22, 37, 64, 48, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 19 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-hate-speech-detection #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|other #language-English #license-cc0-1.0 #wikipedia #toxicity #toxic comments #region-us \n### Dataset Summary\n\n\nThe Wiki Toxic dataset is a modified, cleaned version of the dataset used in the Kaggle Toxic Comment Classification challenge from 2017/18. The dataset contains comments collected from Wikipedia forums and classifies them into two categories, 'toxic' and 'non-toxic'.\n\n\nThe Kaggle dataset was cleaned using the included 'URL' file.### Supported Tasks and Leaderboards\n\n\n* Text Classification: the dataset can be used for training a model to recognise toxicity in sentences and classify them accordingly.### Languages\n\n\nThe sole language used in the dataset is English.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nFor each data point, there is an id, the comment\\_text itself, and a label (0 for non-toxic, 1 for toxic).### Data Fields\n\n\n* 'id': A unique identifier string for each comment\n* 'comment\\_text': A string containing the text of the comment\n* 'label': An integer, either 0 if the comment is non-toxic, or 1 if the comment is toxic### Data Splits\n\n\nThe Wiki Toxic dataset has three splits: *train*, *validation*, and *test*. The statistics for each split are below:\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------" ]
54ee2d8c64d3d80a5e10ef6952a4466551834fc1
# Dataset Card for COYO-700M ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [COYO homepage](https://kakaobrain.com/contents/?contentId=7eca73e3-3089-43cb-b701-332e8a1743fd) - **Repository:** [COYO repository](https://github.com/kakaobrain/coyo-dataset) - **Paper:** - **Leaderboard:** - **Point of Contact:** [COYO email]([email protected]) ### Dataset Summary **COYO-700M** is a large-scale dataset that contains **747M image-text pairs** as well as many other **meta-attributes**, increasing its usability for training various models. Our dataset follows a similar strategy to previous vision-and-language datasets, collecting many informative pairs of alt-text and its associated image in HTML documents. We expect COYO to be used to train popular large-scale foundation models complementary to other similar datasets. For more details on the data acquisition process, please refer to the technical paper to be released later. ### Supported Tasks and Leaderboards We empirically validated the quality of the COYO dataset by re-implementing popular models such as [ALIGN](https://arxiv.org/abs/2102.05918), [unCLIP](https://arxiv.org/abs/2204.06125), and [ViT](https://arxiv.org/abs/2010.11929). We trained these models on COYO-700M or its subsets from scratch, achieving competitive performance to the reported numbers or generated samples in the original papers. Our pre-trained models and training codes will be released soon along with the technical paper. ### Languages The texts in the COYO-700M dataset consist of English.
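Since the corpus is far too large to download casually, here is a minimal sketch for peeking at a few pairs via the `datasets` streaming API; the repo id comes from this card, and the single `train` split is an assumption consistent with the Data Splits note below:

```python
from datasets import load_dataset

# A minimal sketch: with ~747M pairs, streaming avoids downloading the full
# dataset. The repo id comes from this card; the split name is an assumption.
ds = load_dataset("kakaobrain/coyo-700m", split="train", streaming=True)
for i, sample in enumerate(ds):
    print(sample["url"], "|", sample["text"])
    if i == 4:
        break
```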
## Dataset Structure ### Data Instances Each instance in COYO-700M represents a single image-text pair together with its meta-attributes: ``` { 'id': 841814333321, 'url': 'https://blog.dogsof.com/wp-content/uploads/2021/03/Image-from-iOS-5-e1614711641382.jpg', 'text': 'A Pomsky dog sitting and smiling in field of orange flowers', 'width': 1000, 'height': 988, 'image_phash': 'c9b6a7d8469c1959', 'text_length': 59, 'word_count': 11, 'num_tokens_bert': 13, 'num_tokens_gpt': 12, 'num_faces': 0, 'clip_similarity_vitb32': 0.4296875, 'clip_similarity_vitl14': 0.35205078125, 'nsfw_score_opennsfw2': 0.00031447410583496094, 'nsfw_score_gantman': 0.03298913687467575, 'watermark_score': 0.1014641746878624, 'aesthetic_score_laion_v2': 5.435476303100586 } ``` ### Data Fields | name | type | description | |---|---|---| | id | long | Unique 64-bit integer ID generated by [monotonically_increasing_id()](https://spark.apache.org/docs/3.1.3/api/python/reference/api/pyspark.sql.functions.monotonically_increasing_id.html) | | url | string | The image URL extracted from the `src` attribute of the `<img>` tag | | text | string | The text extracted from the `alt` attribute of the `<img>` tag | | width | integer | The width of the image | | height | integer | The height of the image | | image_phash | string | The [perceptual hash (pHash)](http://www.phash.org/) of the image | | text_length | integer | The length of the text | | word_count | integer | The number of words separated by spaces | | num_tokens_bert | integer | The number of tokens using [BertTokenizer](https://huggingface.co/docs/transformers/model_doc/bert#transformers.BertTokenizer) | | num_tokens_gpt | integer | The number of tokens using [GPT2TokenizerFast](https://huggingface.co/docs/transformers/model_doc/gpt2#transformers.GPT2TokenizerFast) | | num_faces | integer | The number of faces in the image detected by [SCRFD](https://insightface.ai/scrfd) | | clip_similarity_vitb32 | float | The cosine similarity between text and image (ViT-B/32) embeddings by [OpenAI CLIP](https://github.com/openai/CLIP) | | clip_similarity_vitl14 | float | The cosine similarity between text and image (ViT-L/14) embeddings by [OpenAI CLIP](https://github.com/openai/CLIP) | | nsfw_score_opennsfw2 | float | The NSFW score of the image by [OpenNSFW2](https://github.com/bhky/opennsfw2) | | nsfw_score_gantman | float | The NSFW score of the image by [GantMan/NSFW](https://github.com/GantMan/nsfw_model) | | watermark_score | float | The watermark probability of the image by our internal model | | aesthetic_score_laion_v2 | float | The aesthetic score of the image by [LAION-Aesthetics-Predictor-V2](https://github.com/christophschuhmann/improved-aesthetic-predictor) | ### Data Splits Data was not split, since the evaluation was expected to be performed on more widely used downstream task(s). ## Dataset Creation ### Curation Rationale Similar to most vision-and-language datasets, our primary goal in the data creation process is to collect many pairs of alt-text and image sources in HTML documents crawled from the web. Therefore, we attempted to eliminate uninformative images or texts with minimal cost and improve our dataset's usability by adding various meta-attributes.
Users can use these meta-attributes to sample a subset from COYO-700M and use it to train the desired model. For instance, the *num_faces* attribute could be used to make a subset like *COYO-Faces* and develop a privacy-preserving generative model. ### Source Data #### Initial Data Collection and Normalization We collected about 10 billion pairs of alt-text and image sources in HTML documents in [CommonCrawl](https://commoncrawl.org/) from Oct. 2020 to Aug. 2021, and eliminated uninformative pairs through the image and/or text level filtering process with minimal cost. **Image Level** * Included all image formats that [Pillow library](https://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html) can decode. (JPEG, WEBP, PNG, BMP, ...) * Removed images smaller than 5KB. * Removed images with an aspect ratio greater than 3.0. * Removed images with min(width, height) < 200. * Removed images with a score of [OpenNSFW2](https://github.com/bhky/opennsfw2) or [GantMan/NSFW](https://github.com/GantMan/nsfw_model) higher than 0.5. * Removed all duplicate images based on the image [pHash](http://www.phash.org/) value from external public datasets. * ImageNet-1K/21K, Flickr-30K, MS-COCO, CC-3M, CC-12M **Text Level** * Collected only English text using [cld3](https://github.com/google/cld3). * Replaced consecutive whitespace characters with a single whitespace and removed the whitespace before and after the sentence. (e.g. `"\n \n Load image into Gallery viewer, valentine&amp;#39;s day roses\n \n" → "Load image into Gallery viewer, valentine&amp;#39;s day roses"`) * Removed texts with a length of 5 or less. * Removed texts that do not have a noun form. * Removed texts with fewer than 3 words or more than 256 words, and texts over 1000 characters in length. * Removed texts appearing more than 10 times. (e.g. `“thumbnail for”, “image for”, “picture of”`) * Removed texts containing NSFW words collected from [profanity_filter](https://github.com/rominf/profanity-filter/blob/master/profanity_filter/data/en_profane_words.txt), [better_profanity](https://github.com/snguyenthanh/better_profanity/blob/master/better_profanity/profanity_wordlist.txt), and [google_twunter_lol](https://gist.github.com/ryanlewis/a37739d710ccdb4b406d). **Image-Text Level** * Removed duplicated samples based on (image_phash, text). (Different text may exist for the same image URL.) #### Who are the source language producers? [Common Crawl](https://commoncrawl.org/) is the data source for COYO-700M. ### Annotations #### Annotation process The dataset was built in a fully automated process that did not require human annotation. #### Who are the annotators? No human annotation ### Personal and Sensitive Information #### Disclaimer & Content Warning The COYO dataset is recommended to be used for research purposes. Kakao Brain tried to construct a "Safe" dataset when building the COYO dataset. (See [Data Filtering](#source-data) Section) Kakao Brain is constantly making efforts to create more "Safe" datasets. However, despite these efforts, this large-scale dataset could not be hand-screened by humans to eliminate every risk, due to its very large size (over 700M pairs). Keep in mind that the unscreened nature of the dataset means that the collected images can lead to strongly discomforting and disturbing content for humans. The COYO dataset may contain some inappropriate data, and any problems resulting from such data are the full responsibility of the user who used it.
Therefore, it is strongly recommended that this dataset be used only for research, keeping this in mind when using the dataset, and Kakao Brain does not recommend using this dataset as it is, without special processing to clear inappropriate data, to create commercial products. ## Considerations for Using the Data ### Social Impact of Dataset It will be described in a paper to be released soon. ### Discussion of Biases It will be described in a paper to be released soon. ### Other Known Limitations It will be described in a paper to be released soon. ## Additional Information ### Dataset Curators The COYO dataset was released as open source in the hope that it will be helpful to many research institutes and startups for research purposes. We look forward to hearing from anyone who wishes to cooperate with us. [[email protected]](mailto:[email protected]) ### Licensing Information #### License The COYO dataset of Kakao Brain is licensed under [CC-BY-4.0 License](https://creativecommons.org/licenses/by/4.0/). The full license can be found in the [LICENSE.cc-by-4.0 file](./coyo-700m/blob/main/LICENSE.cc-by-4.0). The dataset includes “Image URL” and “Text” collected from various sites by analyzing Common Crawl data, an open data web crawling project. The collected data (images and text) is subject to the license to which each content belongs. #### Obligation to use While Open Source may be free to use, that does not mean it is free of obligation. To determine whether your intended use of the COYO dataset is suitable for the CC-BY-4.0 license, please consider the license guide. If you violate the license, you may be subject to legal action such as the prohibition of use or a claim for damages, depending on the use. ### Citation Information If you apply this dataset to any project and research, please cite our code: ``` @misc{kakaobrain2022coyo-700m, title = {COYO-700M: Image-Text Pair Dataset}, author = {Minwoo Byeon and Beomhee Park and Haecheon Kim and Sungjun Lee and Woonhyuk Baek and Saehoon Kim}, year = {2022}, howpublished = {\url{https://github.com/kakaobrain/coyo-dataset}}, } ``` ### Contributions - Minwoo Byeon ([@mwbyeon](https://github.com/mwbyeon)) - Beomhee Park ([@beomheepark](https://github.com/beomheepark)) - Haecheon Kim ([@HaecheonKim](https://github.com/HaecheonKim)) - Sungjun Lee ([@justhungryman](https://github.com/justHungryMan)) - Woonhyuk Baek ([@wbaek](https://github.com/wbaek)) - Saehoon Kim ([@saehoonkim](https://github.com/saehoonkim)) - and Kakao Brain Large-Scale AI Studio
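As an illustration of the curation rationale above (sampling subsets via meta-attributes), here is a minimal, hypothetical sketch; the field names come from the Data Fields table, while the thresholds are arbitrary assumptions, not official recommendations:

```python
from datasets import load_dataset

# A minimal sketch of subsetting COYO via its meta-attributes; the thresholds
# below are arbitrary assumptions for illustration, not recommendations.
ds = load_dataset("kakaobrain/coyo-700m", split="train", streaming=True)

def keep(sample):
    return (
        sample["clip_similarity_vitb32"] >= 0.3    # keep well-aligned pairs
        and sample["nsfw_score_opennsfw2"] < 0.1   # conservatively safe images
        and sample["num_faces"] == 0               # e.g. a privacy-preserving subset
    )

for sample in filter(keep, ds):
    print(sample["text"])
    break
```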
kakaobrain/coyo-700m
[ "task_categories:text-to-image", "task_categories:image-to-text", "task_categories:zero-shot-classification", "task_ids:image-captioning", "annotations_creators:no-annotation", "language_creators:other", "multilinguality:monolingual", "size_categories:100M<n<1B", "source_datasets:original", "language:en", "license:cc-by-4.0", "image-text pairs", "arxiv:2102.05918", "arxiv:2204.06125", "arxiv:2010.11929", "region:us" ]
2022-08-25T14:54:43+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["other"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100M<n<1B"], "source_datasets": ["original"], "task_categories": ["text-to-image", "image-to-text", "zero-shot-classification"], "task_ids": ["image-captioning"], "pretty_name": "COYO-700M", "tags": ["image-text pairs"]}
2022-08-30T18:07:52+00:00
[ "2102.05918", "2204.06125", "2010.11929" ]
[ "en" ]
TAGS #task_categories-text-to-image #task_categories-image-to-text #task_categories-zero-shot-classification #task_ids-image-captioning #annotations_creators-no-annotation #language_creators-other #multilinguality-monolingual #size_categories-100M<n<1B #source_datasets-original #language-English #license-cc-by-4.0 #image-text pairs #arxiv-2102.05918 #arxiv-2204.06125 #arxiv-2010.11929 #region-us
Dataset Card for COYO-700M ========================== Table of Contents ----------------- * Table of Contents * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: COYO homepage * Repository: COYO repository * Paper: * Leaderboard: * Point of Contact: COYO email ### Dataset Summary COYO-700M is a large-scale dataset that contains 747M image-text pairs, along with many other meta-attributes that increase its usability for training various models. Our dataset follows a similar strategy to previous vision-and-language datasets, collecting many informative pairs of alt-text and its associated image in HTML documents. We expect COYO to be used to train popular large-scale foundation models complementary to other similar datasets. For more details on the data acquisition process, please refer to the technical paper to be released later. ### Supported Tasks and Leaderboards We empirically validated the quality of the COYO dataset by re-implementing popular models such as ALIGN, unCLIP, and ViT. We trained these models on COYO-700M or its subsets from scratch, achieving performance competitive with the reported numbers or generated samples in the original papers. Our pre-trained models and training code will be released soon along with the technical paper. ### Languages The texts in the COYO-700M dataset are in English. Dataset Structure ----------------- ### Data Instances Each instance in COYO-700M represents a single image-text pair with its meta-attributes: ### Data Fields name: id, type: long, description: Unique 64-bit integer ID generated by monotonically\_increasing\_id() name: url, type: string, description: The image URL extracted from the 'src' attribute of the '![]()' tag name: text, type: string, description: The text extracted from the 'alt' attribute of the '![]()' tag name: width, type: integer, description: The width of the image name: height, type: integer, description: The height of the image name: image\_phash, type: string, description: The perceptual hash(pHash) of the image name: text\_length, type: integer, description: The length of the text name: word\_count, type: integer, description: The number of words separated by spaces. 
name: num\_tokens\_bert, type: integer, description: The number of tokens using BertTokenizer name: num\_tokens\_gpt, type: integer, description: The number of tokens using GPT2TokenizerFast name: num\_faces, type: integer, description: The number of faces in the image detected by SCRFD name: clip\_similarity\_vitb32, type: float, description: The cosine similarity between text and image(ViT-B/32) embeddings by OpenAI CLIP name: clip\_similarity\_vitl14, type: float, description: The cosine similarity between text and image(ViT-L/14) embeddings by OpenAI CLIP name: nsfw\_score\_opennsfw2, type: float, description: The NSFW score of the image by OpenNSFW2 name: nsfw\_score\_gantman, type: float, description: The NSFW score of the image by GantMan/NSFW name: watermark\_score, type: float, description: The watermark probability of the image by our internal model name: aesthetic\_score\_laion\_v2, type: float, description: The aesthetic score of the image by LAION-Aesthetics-Predictor-V2 ### Data Splits Data was not split, since the evaluation was expected to be performed on more widely used downstream task(s). Dataset Creation ---------------- ### Curation Rationale Similar to most vision-and-language datasets, our primary goal in the data creation process is to collect many pairs of alt-text and image sources in HTML documents crawled from the web. Therefore, we attempted to eliminate uninformative images or texts at minimal cost and to improve our dataset's usability by adding various meta-attributes. Users can use these meta-attributes to sample a subset from COYO-700M and use it to train the desired model. For instance, the *num\_faces* attribute could be used to make a subset like *COYO-Faces* and develop a privacy-preserving generative model. ### Source Data #### Initial Data Collection and Normalization We collected about 10 billion pairs of alt-text and image sources in HTML documents in CommonCrawl from Oct. 2020 to Aug. 2021, and eliminated uninformative pairs through the image- and/or text-level filtering process with minimal cost. Image Level * Included all image formats that the Pillow library can decode. (JPEG, WEBP, PNG, BMP, ...) * Removed images less than 5KB in size. * Removed images with an aspect ratio greater than 3.0. * Removed images with min(width, height) < 200. * Removed images with a score of OpenNSFW2 or GantMan/NSFW higher than 0.5. * Removed all duplicate images based on the image pHash value from external public datasets. + ImageNet-1K/21K, Flickr-30K, MS-COCO, CC-3M, CC-12M Text Level * Collected only English text using cld3. * Replaced consecutive whitespace characters with a single whitespace and removed the whitespace before and after the sentence. (e.g. '"\n \n Load image into Gallery viewer, valentine&#39;s day roses\n \n" → "Load image into Gallery viewer, valentine&#39;s day roses"') * Removed texts with a length of 5 or less. * Removed texts that do not have a noun form. * Removed texts with less than 3 words or more than 256 words and texts over 1000 in length. * Removed texts appearing more than 10 times. (e.g. '"thumbnail for", "image for", "picture of"') * Removed texts containing NSFW words collected from profanity\_filter, better\_profanity, and google\_twunter\_lol. Image-Text Level * Removed duplicated samples based on (image\_phash, text). (Different text may exist for the same image URL.) #### Who are the source language producers? Common Crawl is the data source for COYO-700M. 
### Annotations #### Annotation process The dataset was built in a fully automated process that did not require human annotation. #### Who are the annotators? No human annotation ### Personal and Sensitive Information #### Disclaimer & Content Warning The COYO dataset is recommended to be used for research purposes. Kakao Brain tried to construct a "Safe" dataset when building the COYO dataset (see the Data Filtering section), and is constantly making efforts to create "Safer" datasets. However, despite these efforts, this large-scale dataset was not hand-picked by humans to avoid risk, because of its very large size (over 700M). Keep in mind that the unscreened nature of the dataset means that the collected images can include strongly discomforting and disturbing content. The COYO dataset may contain some inappropriate data, and any problems resulting from such data are the full responsibility of the user. Therefore, it is strongly recommended that this dataset be used only for research, keeping this in mind, and Kakao Brain does not recommend using this dataset as-is, without special processing to remove inappropriate data, to create commercial products. Considerations for Using the Data --------------------------------- ### Social Impact of Dataset It will be described in a paper to be released soon. ### Discussion of Biases It will be described in a paper to be released soon. ### Other Known Limitations It will be described in a paper to be released soon. Additional Information ---------------------- ### Dataset Curators The COYO dataset was released as open source in the hope that it will be helpful to many research institutes and startups for research purposes. We look forward to hearing from anyone who wishes to cooperate with us. coyo@URL ### Licensing Information #### License The COYO dataset of Kakao Brain is licensed under the CC-BY-4.0 License. The full license can be found in the URL-by-4.0 file. The dataset includes "Image URL" and "Text" collected from various sites by analyzing Common Crawl data, an open data web crawling project. The collected data (images and text) is subject to the license to which each content belongs. #### Obligation to use While Open Source may be free to use, that does not mean it is free of obligation. To determine whether your intended use of the COYO dataset is suitable for the CC-BY-4.0 license, please consider the license guide. If you violate the license, you may be subject to legal action such as the prohibition of use or claims for damages, depending on the use. If you apply this dataset to any project or research, please cite our code: ### Contributions * Minwoo Byeon (@mwbyeon) * Beomhee Park (@beomheepark) * Haecheon Kim (@HaecheonKim) * Sungjun Lee (@justhungryman) * Woonhyuk Baek (@wbaek) * Saehoon Kim (@saehoonkim) * and Kakao Brain Large-Scale AI Studio
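The meta-attribute columns above are what make subset curation practical: you can filter on them before touching a single image. Below is a minimal sketch of that workflow; the Hub id `kakaobrain/coyo-700m`, the field names, and the thresholds are assumptions for illustration, not official recommendations.

```python
# Sketch: stream COYO-700M and keep only pairs that pass a few meta-attribute
# gates (no faces, plausible text-image alignment, low watermark probability).
# The dataset id and the threshold values are assumed, not prescribed by the card.
from datasets import load_dataset

ds = load_dataset("kakaobrain/coyo-700m", split="train", streaming=True)

def keep(example):
    return (
        example["num_faces"] == 0                      # privacy-friendly subset
        and example["clip_similarity_vitb32"] >= 0.30  # text plausibly matches image
        and example["watermark_score"] < 0.5           # drop likely watermarked images
    )

for pair in ds.filter(keep).take(3):
    print(pair["url"], "|", pair["text"][:60])
```

Since COYO ships image URLs rather than image bytes, a metadata filter like this is cheap; the surviving `url` values can then be downloaded separately.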
[ "### Dataset Summary\n\n\nCOYO-700M is a large-scale dataset that contains 747M image-text pairs as well as many other meta-attributes to increase the usability to train various models. Our dataset follows a similar strategy to previous vision-and-language datasets, collecting many informative pairs of alt-text and its associated image in HTML documents. We expect COYO to be used to train popular large-scale foundation models\ncomplementary to other similar datasets. For more details on the data acquisition process, please refer to the technical paper to be released later.", "### Supported Tasks and Leaderboards\n\n\nWe empirically validated the quality of COYO dataset by re-implementing popular models such as ALIGN, unCLIP, and ViT.\nWe trained these models on COYO-700M or its subsets from scratch, achieving competitive performance to the reported numbers or generated samples in the original papers.\nOur pre-trained models and training codes will be released soon along with the technical paper.", "### Languages\n\n\nThe texts in the COYO-700M dataset consist of English.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nEach instance in COYO-700M represents single image-text pair information with meta-attributes:", "### Data Fields\n\n\nname: id, type: long, description: Unique 64-bit integer ID generated by monotonically\\_increasing\\_id()\nname: url, type: string, description: The image URL extracted from the 'src' attribute of the '![]()' tag\nname: text, type: string, description: The text extracted from the 'alt' attribute of the '![]()' tag\nname: width, type: integer, description: The width of the image\nname: height, type: integer, description: The height of the image\nname: image\\_phash, type: string, description: The perceptual hash(pHash) of the image\nname: text\\_length, type: integer, description: The length of the text\nname: word\\_count, type: integer, description: The number of words separated by spaces.\nname: num\\_tokens\\_bert, type: integer, description: The number of tokens using BertTokenizer\nname: num\\_tokens\\_gpt, type: integer, description: The number of tokens using GPT2TokenizerFast\nname: num\\_faces, type: integer, description: The number of faces in the image detected by SCRFD\nname: clip\\_similarity\\_vitb32, type: float, description: The cosine similarity between text and image(ViT-B/32) embeddings by OpenAI CLIP\nname: clip\\_similarity\\_vitl14, type: float, description: The cosine similarity between text and image(ViT-L/14) embeddings by OpenAI CLIP\nname: nsfw\\_score\\_opennsfw2, type: float, description: The NSFW score of the image by OpenNSFW2\nname: nsfw\\_score\\_gantman, type: float, description: The NSFW score of the image by GantMan/NSFW\nname: watermark\\_score, type: float, description: The watermark probability of the image by our internal model\nname: aesthetic\\_score\\_laion\\_v2, type: float, description: The aesthetic score of the image by LAION-Aesthetics-Predictor-V2", "### Data Splits\n\n\nData was not split, since the evaluation was expected to be performed on more widely used downstream task(s).\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nSimilar to most vision-and-language datasets, our primary goal in the data creation process is to collect many pairs of alt-text and image sources in HTML documents crawled from the web. Therefore, We attempted to eliminate uninformative images or texts with minimal cost and improve our dataset's usability by adding various meta-attributes. 
Users can use these meta-attributes to sample a subset from COYO-700M and use it to train the desired model. For instance, the *num\\_faces* attribute could be used to make a subset like *COYO-Faces* and develop a privacy-preserving generative model.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nWe collected about 10 billion pairs of alt-text and image sources in HTML documents in CommonCrawl from Oct. 2020 to Aug. 2021. and eliminated uninformative pairs through the image and/or text level filtering process with minimal cost.\n\n\nImage Level\n\n\n* Included all image formats that Pillow library can decode. (JPEG, WEBP, PNG, BMP, ...)\n* Removed images less than 5KB image size.\n* Removed images with an aspect ratio greater than 3.0.\n* Removed images with min(width, height) < 200.\n* Removed images with a score of OpenNSFW2 or GantMan/NSFW higher than 0.5.\n* Removed all duplicate images based on the image pHash value from external public datasets.\n\t+ ImageNet-1K/21K, Flickr-30K, MS-COCO, CC-3M, CC-12M\n\n\nText Level\n\n\n* Collected only English text using cld3.\n* Replaced consecutive whitespace characters with a single whitespace and removed the whitespace before and after the sentence.\n(e.g. '\"\\n \\n Load image into Gallery viewer, valentine&#39;s day roses\\n \\n\" โ†’ \"Load image into Gallery viewer, valentine&#39;s day roses\"')\n* Removed texts with a length of 5 or less.\n* Removed texts that do not have a noun form.\n* Removed texts with less than 3 words or more than 256 words and texts over 1000 in length.\n* Removed texts appearing more than 10 times.\n(e.g. 'โ€œthumbnail forโ€, โ€œimage forโ€, โ€œpicture ofโ€')\n* Removed texts containing NSFW words collected from profanity\\_filter, better\\_profanity, and google\\_twunter\\_lol.\n\n\nImage-Text Level\n\n\n* Removed duplicated samples based on (image\\_phash, text).\n(Different text may exist for the same image URL.)", "#### Who are the source language producers?\n\n\nCommon Crawl is the data source for COYO-700M.", "### Annotations", "#### Annotation process\n\n\nThe dataset was built in a fully automated process that did not require human annotation.", "#### Who are the annotators?\n\n\nNo human annotation", "### Personal and Sensitive Information", "#### Disclaimer & Content Warning\n\n\nThe COYO dataset is recommended to be used for research purposes.\nKakao Brain tried to construct a \"Safe\" dataset when building the COYO dataset. 
(See Data Filtering Section) Kakao Brain is constantly making efforts to create more \"Safe\" datasets.\nHowever, despite these efforts, this large-scale dataset was not hand-picked by humans to avoid the risk due to its very large size (over 700M).\nKeep in mind that the unscreened nature of the dataset means that the collected images can lead to strongly discomforting and disturbing content for humans.\nThe COYO dataset may contain some inappropriate data, and any problems resulting from such data are the full responsibility of the user who used it.\nTherefore, it is strongly recommended that this dataset be used only for research, keeping this in mind when using the dataset, and Kakao Brain does not recommend using this dataset as it is without special processing to clear inappropriate data to create commercial products.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nIt will be described in a paper to be released soon.", "### Discussion of Biases\n\n\nIt will be described in a paper to be released soon.", "### Other Known Limitations\n\n\nIt will be described in a paper to be released soon.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nCOYO dataset was released as an open source in the hope that it will be helpful to many research institutes and startups for research purposes. We look forward to contacting us from various places who wish to cooperate with us.\n\n\ncoyo@URL", "### Licensing Information", "#### License\n\n\nThe COYO dataset of Kakao Brain is licensed under CC-BY-4.0 License.\nThe full license can be found in the URL-by-4.0 file.\nThe dataset includes โ€œImage URLโ€ and โ€œTextโ€ collected from various sites by analyzing Common Crawl data, an open data web crawling project.\nThe collected data (images and text) is subject to the license to which each content belongs.", "#### Obligation to use\n\n\nWhile Open Source may be free to use, that does not mean it is free of obligation.\nTo determine whether your intended use of the COYO dataset is suitable for the CC-BY-4.0 license, please consider the license guide.\nIf you violate the license, you may be subject to legal action such as the prohibition of use or claim for damages depending on the use.\n\n\nIf you apply this dataset to any project and research, please cite our code:", "### Contributions\n\n\n* Minwoo Byeon (@mwbyeon)\n* Beomhee Park (@beomheepark)\n* Haecheon Kim (@HaecheonKim)\n* Sungjun Lee (@justhungryman)\n* Woonhyuk Baek (@wbaek)\n* Saehoon Kim (@saehoonkim)\n* and Kakao Brain Large-Scale AI Studio" ]
[ "TAGS\n#task_categories-text-to-image #task_categories-image-to-text #task_categories-zero-shot-classification #task_ids-image-captioning #annotations_creators-no-annotation #language_creators-other #multilinguality-monolingual #size_categories-100M<n<1B #source_datasets-original #language-English #license-cc-by-4.0 #image-text pairs #arxiv-2102.05918 #arxiv-2204.06125 #arxiv-2010.11929 #region-us \n", "### Dataset Summary\n\n\nCOYO-700M is a large-scale dataset that contains 747M image-text pairs as well as many other meta-attributes to increase the usability to train various models. Our dataset follows a similar strategy to previous vision-and-language datasets, collecting many informative pairs of alt-text and its associated image in HTML documents. We expect COYO to be used to train popular large-scale foundation models\ncomplementary to other similar datasets. For more details on the data acquisition process, please refer to the technical paper to be released later.", "### Supported Tasks and Leaderboards\n\n\nWe empirically validated the quality of COYO dataset by re-implementing popular models such as ALIGN, unCLIP, and ViT.\nWe trained these models on COYO-700M or its subsets from scratch, achieving competitive performance to the reported numbers or generated samples in the original papers.\nOur pre-trained models and training codes will be released soon along with the technical paper.", "### Languages\n\n\nThe texts in the COYO-700M dataset consist of English.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nEach instance in COYO-700M represents single image-text pair information with meta-attributes:", "### Data Fields\n\n\nname: id, type: long, description: Unique 64-bit integer ID generated by monotonically\\_increasing\\_id()\nname: url, type: string, description: The image URL extracted from the 'src' attribute of the '![]()' tag\nname: text, type: string, description: The text extracted from the 'alt' attribute of the '![]()' tag\nname: width, type: integer, description: The width of the image\nname: height, type: integer, description: The height of the image\nname: image\\_phash, type: string, description: The perceptual hash(pHash) of the image\nname: text\\_length, type: integer, description: The length of the text\nname: word\\_count, type: integer, description: The number of words separated by spaces.\nname: num\\_tokens\\_bert, type: integer, description: The number of tokens using BertTokenizer\nname: num\\_tokens\\_gpt, type: integer, description: The number of tokens using GPT2TokenizerFast\nname: num\\_faces, type: integer, description: The number of faces in the image detected by SCRFD\nname: clip\\_similarity\\_vitb32, type: float, description: The cosine similarity between text and image(ViT-B/32) embeddings by OpenAI CLIP\nname: clip\\_similarity\\_vitl14, type: float, description: The cosine similarity between text and image(ViT-L/14) embeddings by OpenAI CLIP\nname: nsfw\\_score\\_opennsfw2, type: float, description: The NSFW score of the image by OpenNSFW2\nname: nsfw\\_score\\_gantman, type: float, description: The NSFW score of the image by GantMan/NSFW\nname: watermark\\_score, type: float, description: The watermark probability of the image by our internal model\nname: aesthetic\\_score\\_laion\\_v2, type: float, description: The aesthetic score of the image by LAION-Aesthetics-Predictor-V2", "### Data Splits\n\n\nData was not split, since the evaluation was expected to be performed on more widely used downstream 
task(s).\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nSimilar to most vision-and-language datasets, our primary goal in the data creation process is to collect many pairs of alt-text and image sources in HTML documents crawled from the web. Therefore, We attempted to eliminate uninformative images or texts with minimal cost and improve our dataset's usability by adding various meta-attributes. Users can use these meta-attributes to sample a subset from COYO-700M and use it to train the desired model. For instance, the *num\\_faces* attribute could be used to make a subset like *COYO-Faces* and develop a privacy-preserving generative model.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nWe collected about 10 billion pairs of alt-text and image sources in HTML documents in CommonCrawl from Oct. 2020 to Aug. 2021. and eliminated uninformative pairs through the image and/or text level filtering process with minimal cost.\n\n\nImage Level\n\n\n* Included all image formats that Pillow library can decode. (JPEG, WEBP, PNG, BMP, ...)\n* Removed images less than 5KB image size.\n* Removed images with an aspect ratio greater than 3.0.\n* Removed images with min(width, height) < 200.\n* Removed images with a score of OpenNSFW2 or GantMan/NSFW higher than 0.5.\n* Removed all duplicate images based on the image pHash value from external public datasets.\n\t+ ImageNet-1K/21K, Flickr-30K, MS-COCO, CC-3M, CC-12M\n\n\nText Level\n\n\n* Collected only English text using cld3.\n* Replaced consecutive whitespace characters with a single whitespace and removed the whitespace before and after the sentence.\n(e.g. '\"\\n \\n Load image into Gallery viewer, valentine&#39;s day roses\\n \\n\" โ†’ \"Load image into Gallery viewer, valentine&#39;s day roses\"')\n* Removed texts with a length of 5 or less.\n* Removed texts that do not have a noun form.\n* Removed texts with less than 3 words or more than 256 words and texts over 1000 in length.\n* Removed texts appearing more than 10 times.\n(e.g. 'โ€œthumbnail forโ€, โ€œimage forโ€, โ€œpicture ofโ€')\n* Removed texts containing NSFW words collected from profanity\\_filter, better\\_profanity, and google\\_twunter\\_lol.\n\n\nImage-Text Level\n\n\n* Removed duplicated samples based on (image\\_phash, text).\n(Different text may exist for the same image URL.)", "#### Who are the source language producers?\n\n\nCommon Crawl is the data source for COYO-700M.", "### Annotations", "#### Annotation process\n\n\nThe dataset was built in a fully automated process that did not require human annotation.", "#### Who are the annotators?\n\n\nNo human annotation", "### Personal and Sensitive Information", "#### Disclaimer & Content Warning\n\n\nThe COYO dataset is recommended to be used for research purposes.\nKakao Brain tried to construct a \"Safe\" dataset when building the COYO dataset. 
(See Data Filtering Section) Kakao Brain is constantly making efforts to create more \"Safe\" datasets.\nHowever, despite these efforts, this large-scale dataset was not hand-picked by humans to avoid the risk due to its very large size (over 700M).\nKeep in mind that the unscreened nature of the dataset means that the collected images can lead to strongly discomforting and disturbing content for humans.\nThe COYO dataset may contain some inappropriate data, and any problems resulting from such data are the full responsibility of the user who used it.\nTherefore, it is strongly recommended that this dataset be used only for research, keeping this in mind when using the dataset, and Kakao Brain does not recommend using this dataset as it is without special processing to clear inappropriate data to create commercial products.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nIt will be described in a paper to be released soon.", "### Discussion of Biases\n\n\nIt will be described in a paper to be released soon.", "### Other Known Limitations\n\n\nIt will be described in a paper to be released soon.\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nCOYO dataset was released as an open source in the hope that it will be helpful to many research institutes and startups for research purposes. We look forward to contacting us from various places who wish to cooperate with us.\n\n\ncoyo@URL", "### Licensing Information", "#### License\n\n\nThe COYO dataset of Kakao Brain is licensed under CC-BY-4.0 License.\nThe full license can be found in the URL-by-4.0 file.\nThe dataset includes โ€œImage URLโ€ and โ€œTextโ€ collected from various sites by analyzing Common Crawl data, an open data web crawling project.\nThe collected data (images and text) is subject to the license to which each content belongs.", "#### Obligation to use\n\n\nWhile Open Source may be free to use, that does not mean it is free of obligation.\nTo determine whether your intended use of the COYO dataset is suitable for the CC-BY-4.0 license, please consider the license guide.\nIf you violate the license, you may be subject to legal action such as the prohibition of use or claim for damages depending on the use.\n\n\nIf you apply this dataset to any project and research, please cite our code:", "### Contributions\n\n\n* Minwoo Byeon (@mwbyeon)\n* Beomhee Park (@beomheepark)\n* Haecheon Kim (@HaecheonKim)\n* Sungjun Lee (@justhungryman)\n* Woonhyuk Baek (@wbaek)\n* Saehoon Kim (@saehoonkim)\n* and Kakao Brain Large-Scale AI Studio" ]
[ 147, 133, 102, 27, 29, 539, 36, 157, 4, 457, 24, 5, 24, 13, 8, 241, 19, 20, 26, 59, 6, 96, 105, 87 ]
[ "passage: TAGS\n#task_categories-text-to-image #task_categories-image-to-text #task_categories-zero-shot-classification #task_ids-image-captioning #annotations_creators-no-annotation #language_creators-other #multilinguality-monolingual #size_categories-100M<n<1B #source_datasets-original #language-English #license-cc-by-4.0 #image-text pairs #arxiv-2102.05918 #arxiv-2204.06125 #arxiv-2010.11929 #region-us \n### Dataset Summary\n\n\nCOYO-700M is a large-scale dataset that contains 747M image-text pairs as well as many other meta-attributes to increase the usability to train various models. Our dataset follows a similar strategy to previous vision-and-language datasets, collecting many informative pairs of alt-text and its associated image in HTML documents. We expect COYO to be used to train popular large-scale foundation models\ncomplementary to other similar datasets. For more details on the data acquisition process, please refer to the technical paper to be released later.### Supported Tasks and Leaderboards\n\n\nWe empirically validated the quality of COYO dataset by re-implementing popular models such as ALIGN, unCLIP, and ViT.\nWe trained these models on COYO-700M or its subsets from scratch, achieving competitive performance to the reported numbers or generated samples in the original papers.\nOur pre-trained models and training codes will be released soon along with the technical paper.### Languages\n\n\nThe texts in the COYO-700M dataset consist of English.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nEach instance in COYO-700M represents single image-text pair information with meta-attributes:", "passage: ### Data Fields\n\n\nname: id, type: long, description: Unique 64-bit integer ID generated by monotonically\\_increasing\\_id()\nname: url, type: string, description: The image URL extracted from the 'src' attribute of the '![]()' tag\nname: text, type: string, description: The text extracted from the 'alt' attribute of the '![]()' tag\nname: width, type: integer, description: The width of the image\nname: height, type: integer, description: The height of the image\nname: image\\_phash, type: string, description: The perceptual hash(pHash) of the image\nname: text\\_length, type: integer, description: The length of the text\nname: word\\_count, type: integer, description: The number of words separated by spaces.\nname: num\\_tokens\\_bert, type: integer, description: The number of tokens using BertTokenizer\nname: num\\_tokens\\_gpt, type: integer, description: The number of tokens using GPT2TokenizerFast\nname: num\\_faces, type: integer, description: The number of faces in the image detected by SCRFD\nname: clip\\_similarity\\_vitb32, type: float, description: The cosine similarity between text and image(ViT-B/32) embeddings by OpenAI CLIP\nname: clip\\_similarity\\_vitl14, type: float, description: The cosine similarity between text and image(ViT-L/14) embeddings by OpenAI CLIP\nname: nsfw\\_score\\_opennsfw2, type: float, description: The NSFW score of the image by OpenNSFW2\nname: nsfw\\_score\\_gantman, type: float, description: The NSFW score of the image by GantMan/NSFW\nname: watermark\\_score, type: float, description: The watermark probability of the image by our internal model\nname: aesthetic\\_score\\_laion\\_v2, type: float, description: The aesthetic score of the image by LAION-Aesthetics-Predictor-V2### Data Splits\n\n\nData was not split, since the evaluation was expected to be performed on more widely used downstream task(s).\n\n\nDataset 
Creation\n----------------### Curation Rationale\n\n\nSimilar to most vision-and-language datasets, our primary goal in the data creation process is to collect many pairs of alt-text and image sources in HTML documents crawled from the web. Therefore, We attempted to eliminate uninformative images or texts with minimal cost and improve our dataset's usability by adding various meta-attributes. Users can use these meta-attributes to sample a subset from COYO-700M and use it to train the desired model. For instance, the *num\\_faces* attribute could be used to make a subset like *COYO-Faces* and develop a privacy-preserving generative model.### Source Data", "passage: #### Initial Data Collection and Normalization\n\n\nWe collected about 10 billion pairs of alt-text and image sources in HTML documents in CommonCrawl from Oct. 2020 to Aug. 2021. and eliminated uninformative pairs through the image and/or text level filtering process with minimal cost.\n\n\nImage Level\n\n\n* Included all image formats that Pillow library can decode. (JPEG, WEBP, PNG, BMP, ...)\n* Removed images less than 5KB image size.\n* Removed images with an aspect ratio greater than 3.0.\n* Removed images with min(width, height) < 200.\n* Removed images with a score of OpenNSFW2 or GantMan/NSFW higher than 0.5.\n* Removed all duplicate images based on the image pHash value from external public datasets.\n\t+ ImageNet-1K/21K, Flickr-30K, MS-COCO, CC-3M, CC-12M\n\n\nText Level\n\n\n* Collected only English text using cld3.\n* Replaced consecutive whitespace characters with a single whitespace and removed the whitespace before and after the sentence.\n(e.g. '\"\\n \\n Load image into Gallery viewer, valentine&#39;s day roses\\n \\n\" โ†’ \"Load image into Gallery viewer, valentine&#39;s day roses\"')\n* Removed texts with a length of 5 or less.\n* Removed texts that do not have a noun form.\n* Removed texts with less than 3 words or more than 256 words and texts over 1000 in length.\n* Removed texts appearing more than 10 times.\n(e.g. 'โ€œthumbnail forโ€, โ€œimage forโ€, โ€œpicture ofโ€')\n* Removed texts containing NSFW words collected from profanity\\_filter, better\\_profanity, and google\\_twunter\\_lol.\n\n\nImage-Text Level\n\n\n* Removed duplicated samples based on (image\\_phash, text).\n(Different text may exist for the same image URL.)#### Who are the source language producers?\n\n\nCommon Crawl is the data source for COYO-700M.### Annotations#### Annotation process\n\n\nThe dataset was built in a fully automated process that did not require human annotation.#### Who are the annotators?\n\n\nNo human annotation### Personal and Sensitive Information#### Disclaimer & Content Warning\n\n\nThe COYO dataset is recommended to be used for research purposes.\nKakao Brain tried to construct a \"Safe\" dataset when building the COYO dataset. 
(See Data Filtering Section) Kakao Brain is constantly making efforts to create more \"Safe\" datasets.\nHowever, despite these efforts, this large-scale dataset was not hand-picked by humans to avoid the risk due to its very large size (over 700M).\nKeep in mind that the unscreened nature of the dataset means that the collected images can lead to strongly discomforting and disturbing content for humans.\nThe COYO dataset may contain some inappropriate data, and any problems resulting from such data are the full responsibility of the user who used it.\nTherefore, it is strongly recommended that this dataset be used only for research, keeping this in mind when using the dataset, and Kakao Brain does not recommend using this dataset as it is without special processing to clear inappropriate data to create commercial products.\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset\n\n\nIt will be described in a paper to be released soon.### Discussion of Biases\n\n\nIt will be described in a paper to be released soon.### Other Known Limitations\n\n\nIt will be described in a paper to be released soon.\n\n\nAdditional Information\n----------------------### Dataset Curators\n\n\nCOYO dataset was released as an open source in the hope that it will be helpful to many research institutes and startups for research purposes. We look forward to contacting us from various places who wish to cooperate with us.\n\n\ncoyo@URL### Licensing Information" ]
60eceef746f537c1efe46ffd2d5485d631a9c9d8
Over 20,000 256x256 mel spectrograms of 5-second samples of music from my Spotify liked playlist. The code to convert from audio to spectrogram and vice versa can be found in https://github.com/teticio/audio-diffusion, along with scripts to train and run inference using Denoising Diffusion Probabilistic Models. ``` x_res = 256 y_res = 256 sample_rate = 22050 n_fft = 2048 hop_length = 512 ```
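Given those parameters, a 256-frame spectrogram covers 256 × 512 = 131,072 samples at 22,050 Hz, i.e. roughly 5.9 seconds of audio. A plausible reconstruction of the audio-to-image step with librosa is sketched below; the exact normalization used in the linked repository may differ, so treat the dB range and the one-sample trim as assumptions.

```python
# Sketch: convert ~6 s of audio into a 256x256 8-bit mel spectrogram image
# using the parameters listed in this card. The [-80, 0] dB clipping range
# is an assumed normalization; check the linked repo for the exact pipeline.
import librosa
import numpy as np
from PIL import Image

x_res, y_res = 256, 256
sample_rate, n_fft, hop_length = 22050, 2048, 512

y, _ = librosa.load("clip.wav", sr=sample_rate)
y = y[: x_res * hop_length - 1]  # centered STFT then yields exactly x_res frames

S = librosa.feature.melspectrogram(
    y=y, sr=sample_rate, n_fft=n_fft, hop_length=hop_length, n_mels=y_res
)
S_db = librosa.power_to_db(S, ref=np.max)      # default top_db=80 -> values in [-80, 0]
img = ((S_db + 80.0) * (255.0 / 80.0)).astype(np.uint8)
Image.fromarray(img[::-1]).save("mel.png")     # flip so low frequencies sit at the bottom
```

Inverting the image back to audio is the lossy direction (phase is discarded), which is why the repository ships dedicated reconstruction code.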
teticio/audio-diffusion-256
[ "task_categories:image-to-image", "size_categories:10K<n<100K", "audio", "spectrograms", "region:us" ]
2022-08-25T16:32:42+00:00
{"annotations_creators": [], "language_creators": [], "language": [], "license": [], "multilinguality": [], "size_categories": ["10K<n<100K"], "source_datasets": [], "task_categories": ["image-to-image"], "task_ids": [], "pretty_name": "Mel spectrograms of music", "tags": ["audio", "spectrograms"]}
2022-11-09T10:49:48+00:00
[]
[]
TAGS #task_categories-image-to-image #size_categories-10K<n<100K #audio #spectrograms #region-us
Over 20,000 256x256 mel spectrograms of 5-second samples of music from my Spotify liked playlist. The code to convert from audio to spectrogram and vice versa can be found in URL, along with scripts to train and run inference using Denoising Diffusion Probabilistic Models.
[]
[ "TAGS\n#task_categories-image-to-image #size_categories-10K<n<100K #audio #spectrograms #region-us \n" ]
[ 38 ]
[ "passage: TAGS\n#task_categories-image-to-image #size_categories-10K<n<100K #audio #spectrograms #region-us \n" ]
7e3aa1657134d5747ab9a1ab21afaf0666d811e9
This is a copy of the [Multi-XScience](https://huggingface.co/datasets/multi_x_science_sum) dataset, except the input source documents of its `test` split have been replaced by documents retrieved with a __sparse__ retriever. The retrieval pipeline used: - __query__: The `related_work` field of each example - __corpus__: The union of all documents in the `train`, `validation`, and `test` splits - __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings - __top-k strategy__: `"mean"`, i.e. the number of documents retrieved, `k`, is set as the mean number of documents seen across examples in this dataset, in this case `k==4` Retrieval results on the `train` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.5482 | 0.2243 | 0.1578 | 0.2689 | Retrieval results on the `validation` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.5476 | 0.2209 | 0.1592 | 0.2650 | Retrieval results on the `test` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.5480 | 0.2272 | 0.1611 | 0.2704 |
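The pipeline maps almost line-for-line onto PyTerrier. The sketch below indexes a pooled corpus and retrieves the top `k==4` documents per `related_work` query; the `docs()` generator and the punctuation cleanup are illustrative assumptions, not the exact code used to build this dataset.

```python
# Sketch: BM25 with PyTerrier defaults, keeping k = 4 results per query
# (the "mean" top-k strategy described above). docs() is a placeholder for
# pooling every source document from the train/validation/test splits.
import os
import re
import pyterrier as pt

if not pt.started():
    pt.init()

def docs():
    yield {"docno": "d0", "text": "An example reference abstract ..."}
    yield {"docno": "d1", "text": "Another pooled document ..."}

index_ref = pt.IterDictIndexer(os.path.abspath("./mxs_index")).index(docs())
bm25 = pt.BatchRetrieve(index_ref, wmodel="BM25", num_results=4)

query = re.sub(r"[^\w\s]", " ", "graph neural networks for citation recommendation")
print(bm25.search(query)[["docno", "rank", "score"]])
```

Terrier's query parser rejects most punctuation, hence the crude `re.sub` cleanup before searching.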
allenai/multixscience_sparse_mean
[ "task_categories:summarization", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown", "region:us" ]
2022-08-25T21:58:26+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["summarization"], "paperswithcode_id": "multi-xscience", "pretty_name": "Multi-XScience"}
2022-11-24T16:48:30+00:00
[]
[ "en" ]
TAGS #task_categories-summarization #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #region-us
This is a copy of the Multi-XScience dataset, except the input source documents of its 'test' split have been replaced by documents retrieved with a **sparse** retriever. The retrieval pipeline used: * **query**: The 'related\_work' field of each example * **corpus**: The union of all documents in the 'train', 'validation', and 'test' splits * **retriever**: BM25 via PyTerrier with default settings * **top-k strategy**: '"mean"', i.e. the number of documents retrieved, 'k', is set as the mean number of documents seen across examples in this dataset, in this case 'k==4' Retrieval results on the 'train' set: Retrieval results on the 'validation' set: Retrieval results on the 'test' set:
[]
[ "TAGS\n#task_categories-summarization #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #region-us \n" ]
[ 73 ]
[ "passage: TAGS\n#task_categories-summarization #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #region-us \n" ]
59efc38ee73602367aa6f642820990b0175cb90f
This is a copy of the [Multi-XScience](https://huggingface.co/datasets/multi_x_science_sum) dataset, except the input source documents of its `test` split have been replaced by documents retrieved with a __sparse__ retriever. The retrieval pipeline used: - __query__: The `related_work` field of each example - __corpus__: The union of all documents in the `train`, `validation`, and `test` splits - __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings - __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==20` Retrieval results on the `train` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.5482 | 0.2243 | 0.0547 | 0.4063 | Retrieval results on the `validation` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.5476 | 0.2209 | 0.0553 | 0.4026 | Retrieval results on the `test` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.5480 | 0.2272 | 0.0550 | 0.4039 |
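The only difference from the `mean` variant above is the top-k rule. Assuming the source `multi_x_science_sum` dataset exposes its reference abstracts under a `ref_abstract` field (an assumption about the Hub copy's schema), both cut-offs can be recovered directly:

```python
# Sketch: derive k under the "mean" and "max" strategies from per-example
# reference counts. The ref_abstract field name is an assumption about the
# multi_x_science_sum schema on the Hub.
from statistics import mean
from datasets import load_dataset

ds = load_dataset("multi_x_science_sum", split="train")
counts = [len(ex["ref_abstract"]["abstract"]) for ex in ds]

print(round(mean(counts)))  # "mean" strategy; the cards above report k == 4
print(max(counts))          # "max" strategy; the cards above report k == 20
```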
allenai/multixscience_sparse_max
[ "task_categories:summarization", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:unknown", "region:us" ]
2022-08-25T22:00:00+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["summarization"], "paperswithcode_id": "multi-xscience", "pretty_name": "Multi-XScience"}
2022-11-24T16:36:31+00:00
[]
[ "en" ]
TAGS #task_categories-summarization #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #region-us
This is a copy of the Multi-XScience dataset, except the input source documents of its 'test' split have been replaced by documents retrieved with a **sparse** retriever. The retrieval pipeline used: * **query**: The 'related\_work' field of each example * **corpus**: The union of all documents in the 'train', 'validation', and 'test' splits * **retriever**: BM25 via PyTerrier with default settings * **top-k strategy**: '"max"', i.e. the number of documents retrieved, 'k', is set as the maximum number of documents seen across examples in this dataset, in this case 'k==20' Retrieval results on the 'train' set: Retrieval results on the 'validation' set: Retrieval results on the 'test' set:
[]
[ "TAGS\n#task_categories-summarization #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #region-us \n" ]
[ 73 ]
[ "passage: TAGS\n#task_categories-summarization #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-unknown #region-us \n" ]
6b16a554b543b30d49252e1b64b736716a107cd3
# Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@angelolab](https://github.com/angelolab) for adding this dataset.
angelolab/ark_example
[ "task_categories:image-segmentation", "task_ids:instance-segmentation", "annotations_creators:no-annotation", "size_categories:n<1K", "source_datasets:original", "license:apache-2.0", "MIBI", "Multiplexed-Imaging", "region:us" ]
2022-08-25T22:15:17+00:00
{"annotations_creators": ["no-annotation"], "language_creators": [], "language": [], "license": ["apache-2.0"], "multilinguality": [], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["image-segmentation"], "task_ids": ["instance-segmentation"], "pretty_name": "An example dataset for analyzing multiplexed imaging data.", "tags": ["MIBI", "Multiplexed-Imaging"]}
2023-11-28T20:05:52+00:00
[]
[]
TAGS #task_categories-image-segmentation #task_ids-instance-segmentation #annotations_creators-no-annotation #size_categories-n<1K #source_datasets-original #license-apache-2.0 #MIBI #Multiplexed-Imaging #region-us
# Dataset Card for [Dataset Name] ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @angelolab for adding this dataset.
[ "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @angelolab for adding this dataset." ]
[ "TAGS\n#task_categories-image-segmentation #task_ids-instance-segmentation #annotations_creators-no-annotation #size_categories-n<1K #source_datasets-original #license-apache-2.0 #MIBI #Multiplexed-Imaging #region-us \n", "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @angelolab for adding this dataset." ]
[ 80, 10, 125, 24, 6, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 17 ]
[ "passage: TAGS\n#task_categories-image-segmentation #task_ids-instance-segmentation #annotations_creators-no-annotation #size_categories-n<1K #source_datasets-original #license-apache-2.0 #MIBI #Multiplexed-Imaging #region-us \n# Dataset Card for [Dataset Name]## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:### Dataset Summary### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions\n\nThanks to @angelolab for adding this dataset." ]
f9ad319b1eb78b0af0b1c8f5dc951c3092d6ee9c
# Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
merkalo-ziri/qa_shreded
[ "task_categories:question-answering", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:1K<n<10K", "source_datasets:original", "language:rus", "license:other", "region:us" ]
2022-08-26T00:25:51+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["rus"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": [], "pretty_name": "qa_main", "tags": []}
2022-08-26T00:27:18+00:00
[]
[ "rus" ]
TAGS #task_categories-question-answering #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Russian #license-other #region-us
# Dataset Card for [Dataset Name] ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @github-username for adding this dataset.
[ "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\nThanks to @github-username for adding this dataset." ]
[ "TAGS\n#task_categories-question-answering #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Russian #license-other #region-us \n", "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\nThanks to @github-username for adding this dataset." ]
[ 74, 10, 125, 24, 6, 10, 4, 6, 6, 5, 5, 5, 7, 4, 10, 10, 5, 5, 9, 8, 8, 7, 8, 7, 5, 6, 6, 19 ]
[ "passage: TAGS\n#task_categories-question-answering #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Russian #license-other #region-us \n# Dataset Card for [Dataset Name]## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:### Dataset Summary### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions\nThanks to @github-username for adding this dataset." ]
227e4266899d746172ebd46f90e26af2d370f799
# Gameplay Images ## Dataset Description - **Homepage:** [kaggle](https://www.kaggle.com/datasets/aditmagotra/gameplay-images) - **Download Size** 2.50 GiB - **Generated Size** 1.68 GiB - **Total Size** 4.19 GiB A dataset from [kaggle](https://www.kaggle.com/datasets/aditmagotra/gameplay-images). This is a dataset of gameplay images from 10 very famous video games. These include - Among Us - Apex Legends - Fortnite - Forza Horizon - Free Fire - Genshin Impact - God of War - Minecraft - Roblox - Terraria There are 1000 images per class and all are sized `640 x 360`. They are in the `.png` format. This dataset was made by saving frames every few seconds from famous gameplay videos on YouTube. ※ This dataset was uploaded in January 2022. Game content updated after that will not be included. ### License CC-BY-4.0 ## Dataset Structure ### Data Instance ```python >>> from datasets import load_dataset >>> dataset = load_dataset("Bingsu/Gameplay_Images") DatasetDict({ train: Dataset({ features: ['image', 'label'], num_rows: 10000 }) }) ``` ```python >>> dataset["train"].features {'image': Image(decode=True, id=None), 'label': ClassLabel(num_classes=10, names=['Among Us', 'Apex Legends', 'Fortnite', 'Forza Horizon', 'Free Fire', 'Genshin Impact', 'God of War', 'Minecraft', 'Roblox', 'Terraria'], id=None)} ``` ### Data Size download: 2.50 GiB<br> generated: 1.68 GiB<br> total: 4.19 GiB ### Data Fields - image: `Image` - A `PIL.Image.Image object` containing the image. size=640x360 - Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the "image" column, i.e. `dataset[0]["image"]` should always be preferred over `dataset["image"][0]`. - label: an int classification label. Class Label Mappings: ```json { "Among Us": 0, "Apex Legends": 1, "Fortnite": 2, "Forza Horizon": 3, "Free Fire": 4, "Genshin Impact": 5, "God of War": 6, "Minecraft": 7, "Roblox": 8, "Terraria": 9 } ``` ```python >>> dataset["train"][0] {'image': <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=640x360>, 'label': 0} ``` ### Data Splits | | train | | ---------- | -------- | | # of data | 10000 | ### Note #### train_test_split ```python >>> ds_new = dataset["train"].train_test_split(0.2, seed=42, stratify_by_column="label") >>> ds_new DatasetDict({ train: Dataset({ features: ['image', 'label'], num_rows: 8000 }) test: Dataset({ features: ['image', 'label'], num_rows: 2000 }) }) ```
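A small convenience worth noting: because `label` is a `ClassLabel`, the integer-to-name mapping above never has to be hard-coded — the feature object converts in both directions:

```python
>>> labels = dataset["train"].features["label"]
>>> labels.int2str(0)
'Among Us'
>>> labels.str2int("Minecraft")
7
```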
Bingsu/Gameplay_Images
[ "task_categories:image-classification", "multilinguality:monolingual", "size_categories:1K<n<10K", "language:en", "license:cc-by-4.0", "region:us" ]
2022-08-26T03:42:10+00:00
{"language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "task_categories": ["image-classification"], "pretty_name": "Gameplay Images"}
2022-08-26T04:31:58+00:00
[]
[ "en" ]
TAGS #task_categories-image-classification #multilinguality-monolingual #size_categories-1K<n<10K #language-English #license-cc-by-4.0 #region-us
Gameplay Images
===============

Dataset Description
-------------------

* Homepage: kaggle
* Download Size 2.50 GiB
* Generated Size 1.68 GiB
* Total Size 4.19 GiB

A dataset from kaggle. This is a dataset of 10 very famous video games in the world.
These include

* Among Us
* Apex Legends
* Fortnite
* Forza Horizon
* Free Fire
* Genshin Impact
* God of War
* Minecraft
* Roblox
* Terraria

There are 1000 images per class and all are sized '640 x 360'. They are in the '.png' format.

This Dataset was made by saving frames every few seconds from famous gameplay videos on Youtube.

※ This dataset was uploaded in January 2022. Game content updated after that will not be included.

### License

CC-BY-4.0

Dataset Structure
-----------------

### Data Instance

### Data Size

download: 2.50 GiB
generated: 1.68 GiB
total: 4.19 GiB

### Data Fields

* image: 'Image'
	+ A 'PIL.Image.Image object' containing the image. size=640x360
	+ Note that when accessing the image column: 'dataset[0]["image"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the "image" column, i.e. 'dataset[0]["image"]' should always be preferred over 'dataset["image"][0]'.
* label: an int classification label.

Class Label Mappings:

### Data Splits

### Note

#### train\_test\_split
[ "### License\n\n\nCC-BY-4.0\n\n\nDataset Structure\n-----------------", "### Data Instance", "### Data Size\n\n\ndownload: 2.50 GiB \n\ngenerated: 1.68 GiB \n\ntotal: 4.19 GiB", "### Data Fields\n\n\n* image: 'Image'\n\t+ A 'PIL.Image.Image object' containing the image. size=640x360\n\t+ Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the \"image\" column, i.e. 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'.\n* label: an int classification label.\n\n\nClass Label Mappings:", "### Data Splits", "### Note", "#### train\\_test\\_split" ]
[ "TAGS\n#task_categories-image-classification #multilinguality-monolingual #size_categories-1K<n<10K #language-English #license-cc-by-4.0 #region-us \n", "### License\n\n\nCC-BY-4.0\n\n\nDataset Structure\n-----------------", "### Data Instance", "### Data Size\n\n\ndownload: 2.50 GiB \n\ngenerated: 1.68 GiB \n\ntotal: 4.19 GiB", "### Data Fields\n\n\n* image: 'Image'\n\t+ A 'PIL.Image.Image object' containing the image. size=640x360\n\t+ Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the \"image\" column, i.e. 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'.\n* label: an int classification label.\n\n\nClass Label Mappings:", "### Data Splits", "### Note", "#### train\\_test\\_split" ]
[ 50, 15, 5, 23, 151, 5, 3, 10 ]
[ "passage: TAGS\n#task_categories-image-classification #multilinguality-monolingual #size_categories-1K<n<10K #language-English #license-cc-by-4.0 #region-us \n### License\n\n\nCC-BY-4.0\n\n\nDataset Structure\n-----------------### Data Instance### Data Size\n\n\ndownload: 2.50 GiB \n\ngenerated: 1.68 GiB \n\ntotal: 4.19 GiB### Data Fields\n\n\n* image: 'Image'\n\t+ A 'PIL.Image.Image object' containing the image. size=640x360\n\t+ Note that when accessing the image column: 'dataset[0][\"image\"]' the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the \"image\" column, i.e. 'dataset[0][\"image\"]' should always be preferred over 'dataset[\"image\"][0]'.\n* label: an int classification label.\n\n\nClass Label Mappings:### Data Splits### Note#### train\\_test\\_split" ]
863991fde636390a0678f092906ca0bbabdd8566
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-xsum * Dataset: samsum * Config: samsum * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@hgoyal194](https://huggingface.co/hgoyal194) for evaluating this model.
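For reference, a minimal sketch of reproducing a single prediction from this evaluation (the AutoTrain job's exact generation settings are not recorded here, so outputs may differ):

```python
from datasets import load_dataset
from transformers import pipeline

# samsum's loader needs the py7zr package; this repo's metadata maps "text" -> "dialogue".
sample = load_dataset("samsum", split="test")[0]
summarizer = pipeline("summarization", model="facebook/bart-large-xsum")
print(summarizer(sample["dialogue"])[0]["summary_text"])
```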
autoevaluate/autoeval-eval-project-samsum-61336320-1319050351
[ "autotrain", "evaluation", "region:us" ]
2022-08-26T06:15:36+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "facebook/bart-large-xsum", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-08-26T06:18:03+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: facebook/bart-large-xsum * Dataset: samsum * Config: samsum * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @hgoyal194 for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: facebook/bart-large-xsum\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @hgoyal194 for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: facebook/bart-large-xsum\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @hgoyal194 for evaluating this model." ]
[ 13, 85, 18 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: facebook/bart-large-xsum\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @hgoyal194 for evaluating this model." ]
804e9f8472494d582f9f6abd3c95ca92036513a5
## MEDIQA2021-MAS task

Source data is available [here](https://github.com/abachaa/MEDIQA2021/tree/main/Task2).

Description:

1. Data features

   Multiple Answer Summarization with:

   * key: key of each question
   * question: question
   * text: a merge of the text of all answers (for the train split, a merge of the article and section parts)
   * sum\_abs: abstractive multiple-answer summary
   * sum\_ext: extractive multiple-answer summary

2. train\_article / train\_sec

   Same structure as train, but:

   * train: text is a merge of the text of all answers (a merge of the article and section parts)
   * train\_article: text is a merge of all subanswers' articles
   * train\_sec: text is a merge of all subanswers' sections
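A minimal sketch of inspecting the fields described above (the `train` split name is an assumption; adjust it if the repository uses different split or config names):

```python
from datasets import load_dataset

ds = load_dataset("nbtpj/bionlp2021MAS", split="train")  # split name assumed

example = ds[0]
print(example["question"])
print(example["sum_abs"][:200])  # abstractive multi-answer summary
print(example["sum_ext"][:200])  # extractive multi-answer summary
```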
nbtpj/bionlp2021MAS
[ "license:afl-3.0", "region:us" ]
2022-08-26T07:52:54+00:00
{"license": "afl-3.0"}
2022-08-27T14:37:33+00:00
[]
[]
TAGS #license-afl-3.0 #region-us
## MEDIQA2021-MAS task

Source data is available here.

Description:

1. Data features

Multiple Answer Summarization with:

* key: key of each question
* question: question
* text: a merge of the text of all answers (for the train split, a merge of the article and section parts)
* sum\_abs: abstractive multiple-answer summary
* sum\_ext: extractive multiple-answer summary

2. train\_article / train\_sec

Same structure as train, but:

* train: text is a merge of the text of all answers (a merge of the article and section parts)
* train\_article: text is a merge of all subanswers' articles
* train\_sec: text is a merge of all subanswers' sections
[ "## MEDIQUA2012-MAS task\n\nsource data is available here\n\ndes:\n\n1. data features\n\nMultiple Answer Summarization with:\n\n* key: key of each question\n* question: question\n* text: merge all text of all answers (if it is train-split, a merge of article and section part)\n* sum\\_abs: abstractive multiple answer summarization \n* sum\\_ext: extractive multiple answer summarization \n\n2. train\\_article / train\\_sec\n\nSame structure with train, but:\n\n* train: text: merge all text of all answers (if it is train-split, a merge of article and section part)\n* train\\_article: text is a merge of all subanswers 's articles\n* train\\_sec: text is a merge of all subanswers 's sections" ]
[ "TAGS\n#license-afl-3.0 #region-us \n", "## MEDIQUA2012-MAS task\n\nsource data is available here\n\ndes:\n\n1. data features\n\nMultiple Answer Summarization with:\n\n* key: key of each question\n* question: question\n* text: merge all text of all answers (if it is train-split, a merge of article and section part)\n* sum\\_abs: abstractive multiple answer summarization \n* sum\\_ext: extractive multiple answer summarization \n\n2. train\\_article / train\\_sec\n\nSame structure with train, but:\n\n* train: text: merge all text of all answers (if it is train-split, a merge of article and section part)\n* train\\_article: text is a merge of all subanswers 's articles\n* train\\_sec: text is a merge of all subanswers 's sections" ]
[ 14, 174 ]
[ "passage: TAGS\n#license-afl-3.0 #region-us \n## MEDIQUA2012-MAS task\n\nsource data is available here\n\ndes:\n\n1. data features\n\nMultiple Answer Summarization with:\n\n* key: key of each question\n* question: question\n* text: merge all text of all answers (if it is train-split, a merge of article and section part)\n* sum\\_abs: abstractive multiple answer summarization \n* sum\\_ext: extractive multiple answer summarization \n\n2. train\\_article / train\\_sec\n\nSame structure with train, but:\n\n* train: text: merge all text of all answers (if it is train-split, a merge of article and section part)\n* train\\_article: text is a merge of all subanswers 's articles\n* train\\_sec: text is a merge of all subanswers 's sections" ]
1aa5ac59eca5b4a5922cd999d83188ee40237277
# CLIP-BERT training data This data was used to train the CLIP-BERT model first described in [this paper](https://arxiv.org/abs/2109.11321). The dataset is based on text and images from MS COCO, SBU Captions, Visual Genome QA and Conceptual Captions. The image features have been extracted using the CLIP model [openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) available on Huggingface.
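As an illustration, features of this kind can be produced with the Hugging Face CLIP classes; the exact preprocessing used when building this dataset is not specified here, so treat this as a sketch:

```python
import requests
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Any RGB image works; this URL is just a placeholder example image.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    features = model.get_image_features(**inputs)
print(features.shape)  # torch.Size([1, 512]) for ViT-B/32
```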
Lo/clip-bert-data
[ "multilinguality:monolingual", "language:en", "license:cc-by-4.0", "arxiv:2109.11321", "region:us" ]
2022-08-26T07:57:24+00:00
{"language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"]}
2022-08-29T06:51:51+00:00
[ "2109.11321" ]
[ "en" ]
TAGS #multilinguality-monolingual #language-English #license-cc-by-4.0 #arxiv-2109.11321 #region-us
# CLIP-BERT training data This data was used to train the CLIP-BERT model first described in this paper. The dataset is based on text and images from MS COCO, SBU Captions, Visual Genome QA and Conceptual Captions. The image features have been extracted using the CLIP model openai/clip-vit-base-patch32 available on Huggingface.
[ "# CLIP-BERT training data\n\nThis data was used to train the CLIP-BERT model first described in this paper. \n\nThe dataset is based on text and images from MS COCO, SBU Captions, Visual Genome QA and Conceptual Captions.\n\nThe image features have been extracted using the CLIP model openai/clip-vit-base-patch32 available on Huggingface." ]
[ "TAGS\n#multilinguality-monolingual #language-English #license-cc-by-4.0 #arxiv-2109.11321 #region-us \n", "# CLIP-BERT training data\n\nThis data was used to train the CLIP-BERT model first described in this paper. \n\nThe dataset is based on text and images from MS COCO, SBU Captions, Visual Genome QA and Conceptual Captions.\n\nThe image features have been extracted using the CLIP model openai/clip-vit-base-patch32 available on Huggingface." ]
[ 35, 87 ]
[ "passage: TAGS\n#multilinguality-monolingual #language-English #license-cc-by-4.0 #arxiv-2109.11321 #region-us \n# CLIP-BERT training data\n\nThis data was used to train the CLIP-BERT model first described in this paper. \n\nThe dataset is based on text and images from MS COCO, SBU Captions, Visual Genome QA and Conceptual Captions.\n\nThe image features have been extracted using the CLIP model openai/clip-vit-base-patch32 available on Huggingface." ]
bc8abd0b59c26ab913464fb535e080c27dce15ff
This repository contains the Wikipedia training data used to train BERT-base baselines and to adapt vision-and-language models to text-only tasks in the paper "How to Adapt Pre-trained Vision-and-Language Models to a Text-only Input?". The data has been created from the "20200501.en" revision of the [wikipedia dataset](https://huggingface.co/datasets/wikipedia) on Huggingface.
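A minimal sketch of loading the source corpus named above (note that the full English dump is large):

```python
from datasets import load_dataset

# "20200501.en" is one of the pre-processed configurations of the wikipedia dataset.
wiki = load_dataset("wikipedia", "20200501.en", split="train")
print(wiki[0]["title"])
print(wiki[0]["text"][:200])
```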
Lo/adapt-pre-trained-VL-models-to-text-data-Wikipedia
[ "multilinguality:monolingual", "language:en", "license:cc-by-sa-3.0", "region:us" ]
2022-08-26T08:06:59+00:00
{"language": ["en"], "license": ["cc-by-sa-3.0"], "multilinguality": ["monolingual"]}
2022-08-29T07:26:22+00:00
[]
[ "en" ]
TAGS #multilinguality-monolingual #language-English #license-cc-by-sa-3.0 #region-us
This repository contains the Wikipedia training data used to train BERT-base baselines and to adapt vision-and-language models to text-only tasks in the paper "How to Adapt Pre-trained Vision-and-Language Models to a Text-only Input?". The data has been created from the "URL" revision of the wikipedia dataset on Huggingface.
[]
[ "TAGS\n#multilinguality-monolingual #language-English #license-cc-by-sa-3.0 #region-us \n" ]
[ 29 ]
[ "passage: TAGS\n#multilinguality-monolingual #language-English #license-cc-by-sa-3.0 #region-us \n" ]
9006ce5811a9c44f8435dd489af9d18205f98a1d
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: autoevaluate/multi-class-classification * Dataset: emotion * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
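For reference, a minimal sketch of scoring one test example with this model (the AutoTrain job's exact settings are not recorded here):

```python
from datasets import load_dataset
from transformers import pipeline

# The repo metadata maps "text" -> "text" and "target" -> "label".
sample = load_dataset("emotion", split="test")[0]
clf = pipeline("text-classification", model="autoevaluate/multi-class-classification")
print(clf(sample["text"]), "| gold label id:", sample["label"])
```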
autoevaluate/autoeval-staging-eval-project-emotion-2d469b4f-13675887
[ "autotrain", "evaluation", "region:us" ]
2022-08-26T08:18:16+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "autoevaluate/multi-class-classification", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}}
2022-08-26T08:18:42+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Text Classification * Model: autoevaluate/multi-class-classification * Dataset: emotion * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: autoevaluate/multi-class-classification\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: autoevaluate/multi-class-classification\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 87, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: autoevaluate/multi-class-classification\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
78443d7167a2047753c11a3c595f95eeb0503c0d
This repository contains archives (zip files) for ShapeNetSem, a subset of [ShapeNet](https://shapenet.org) richly annotated with physical attributes. Please see [DATA.md](DATA.md) for details about the data.

If you use ShapeNet data, you agree to abide by the [ShapeNet terms of use](https://shapenet.org/terms). You are only allowed to redistribute the data to your research associates and colleagues provided that they first agree to be bound by these terms and conditions.

If you use this data, please cite the main ShapeNet technical report and the "Semantically-enriched 3D Models for Common-sense Knowledge" workshop paper.

```
@techreport{shapenet2015,
  title       = {{ShapeNet: An Information-Rich 3D Model Repository}},
  author      = {Chang, Angel X. and Funkhouser, Thomas and Guibas, Leonidas and Hanrahan, Pat and Huang, Qixing and Li, Zimo and Savarese, Silvio and Savva, Manolis and Song, Shuran and Su, Hao and Xiao, Jianxiong and Yi, Li and Yu, Fisher},
  number      = {arXiv:1512.03012 [cs.GR]},
  institution = {Stanford University --- Princeton University --- Toyota Technological Institute at Chicago},
  year        = {2015}
}

@article{savva2015semgeo,
  title   = {{Semantically-Enriched 3D Models for Common-sense Knowledge}},
  author  = {Manolis Savva and Angel X. Chang and Pat Hanrahan},
  journal = {CVPR 2015 Workshop on Functionality, Physics, Intentionality and Causality},
  year    = {2015}
}
```

For more information, please contact us at [email protected] and indicate ShapeNetSem in the title of your email.
ShapeNet/ShapeNetSem-archive
[ "language:en", "license:other", "3D shapes", "region:us" ]
2022-08-26T08:34:36+00:00
{"language": ["en"], "license": "other", "pretty_name": "ShapeNetSem", "tags": ["3D shapes"], "extra_gated_heading": "Acknowledge license to accept the repository", "extra_gated_prompt": "To request access to this ShapeNet repo, you will need to provide your **full name** (please provide both your first and last name), the name of your **advisor or the principal investigator (PI)** of your lab (in the PI/Advisor) fields, and the name of the **school or company** that you are affiliated with (the **Affiliation** field). After requesting access to this ShapeNet repo, you will be considered for access approval. \n\nAfter access approval, you (the \"Researcher\") receive permission to use the ShapeNet database (the \"Database\") at Princeton University and Stanford University. In exchange for being able to join the ShapeNet community and receive such permission, Researcher hereby agrees to the following terms and conditions: Researcher shall use the Database only for non-commercial research and educational purposes. Princeton University and Stanford University make no representations or warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose. Researcher accepts full responsibility for his or her use of the Database and shall defend and indemnify Princeton University and Stanford University, including their employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the Database, including but not limited to Researcher's use of any copies of copyrighted 3D models that he or she may create from the Database. Researcher may provide research associates and colleagues with access to the Database provided that they first agree to be bound by these terms and conditions. Princeton University and Stanford University reserve the right to terminate Researcher's access to the Database at any time. If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer. The law of the State of New Jersey shall apply to all disputes under this agreement.\n\nFor access to the data, please fill in your **full name** (both first and last name), the name of your **advisor or principal investigator (PI)**, and the name of the **school or company** you are affliated with. Please actually fill out the fields (DO NOT put the word \"Advisor\" for PI/Advisor and the word \"School\" for \"Affiliation\", please specify the name of your advisor and the name of your school).", "extra_gated_fields": {"Name": "text", "PI/Advisor": "text", "Affiliation": "text", "Purpose": "text", "Country": "text", "I agree to use this dataset for non-commercial use ONLY": "checkbox"}}
2023-09-20T13:59:59+00:00
[]
[ "en" ]
TAGS #language-English #license-other #3D shapes #region-us
This repository contains archives (zip files) for ShapeNetSem, a subset of ShapeNet richly annotated with physical attributes. Please see URL for details about the data. If you use ShapeNet data, you agree to abide by the ShapeNet terms of use. You are only allowed to redistribute the data to your research associates and colleagues provided that they first agree to be bound by these terms and conditions. If you use this data, please cite the main ShapeNet technical report and the "Semantically-enriched 3D Models for Common-sense Knowledge" workshop paper. For more information, please contact us at shapenetwebmaster@URL and indicate ShapeNetSem in the title of your email.
[]
[ "TAGS\n#language-English #license-other #3D shapes #region-us \n" ]
[ 19 ]
[ "passage: TAGS\n#language-English #license-other #3D shapes #region-us \n" ]
0efb24cbe6828a85771a28335c5f7b5626514d9b
This repository contains ShapeNetCore (v2), a subset of [ShapeNet](https://shapenet.org). ShapeNetCore is a densely annotated subset of ShapeNet covering 55 common object categories with ~51,300 unique 3D models. Each model in ShapeNetCore is linked to an appropriate synset in [WordNet 3.0](https://wordnet.princeton.edu/). Please see [DATA.md](DATA.md) for details about the data.

If you use ShapeNet data, you agree to abide by the [ShapeNet terms of use](https://shapenet.org/terms). You are only allowed to redistribute the data to your research associates and colleagues provided that they first agree to be bound by these terms and conditions.

If you use this data, please cite the main ShapeNet technical report.

```
@techreport{shapenet2015,
  title       = {{ShapeNet: An Information-Rich 3D Model Repository}},
  author      = {Chang, Angel X. and Funkhouser, Thomas and Guibas, Leonidas and Hanrahan, Pat and Huang, Qixing and Li, Zimo and Savarese, Silvio and Savva, Manolis and Song, Shuran and Su, Hao and Xiao, Jianxiong and Yi, Li and Yu, Fisher},
  number      = {arXiv:1512.03012 [cs.GR]},
  institution = {Stanford University --- Princeton University --- Toyota Technological Institute at Chicago},
  year        = {2015}
}
```

For more information, please contact us at [email protected] and indicate ShapeNetCore v2 in the title of your email.
ShapeNet/ShapeNetCore
[ "language:en", "license:other", "3D shapes", "region:us" ]
2022-08-26T08:34:57+00:00
{"language": ["en"], "license": "other", "pretty_name": "ShapeNetCore", "tags": ["3D shapes"], "extra_gated_heading": "Acknowledge license to accept the repository", "extra_gated_prompt": "To request access to this ShapeNet repo, you will need to provide your **full name** (please provide both your first and last name), the name of your **advisor or the principal investigator (PI)** of your lab (in the PI/Advisor) fields, and the **school or company** that you are affiliated with (the **Affiliation** field). After requesting access to this ShapeNet repo, you will be considered for access approval. \n\nAfter access approval, you (the \"Researcher\") receive permission to use the ShapeNet database (the \"Database\") at Princeton University and Stanford University. In exchange for being able to join the ShapeNet community and receive such permission, Researcher hereby agrees to the following terms and conditions: Researcher shall use the Database only for non-commercial research and educational purposes. Princeton University and Stanford University make no representations or warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose. Researcher accepts full responsibility for his or her use of the Database and shall defend and indemnify Princeton University and Stanford University, including their employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the Database, including but not limited to Researcher's use of any copies of copyrighted 3D models that he or she may create from the Database. Researcher may provide research associates and colleagues with access to the Database provided that they first agree to be bound by these terms and conditions. Princeton University and Stanford University reserve the right to terminate Researcher's access to the Database at any time. If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer. The law of the State of New Jersey shall apply to all disputes under this agreement.\n\nFor access to the data, please fill in your **full name** (both first and last name), the name of your **advisor or principal investigator (PI)**, and the name of the **school or company** you are affliated with. Please actually fill out the fields (DO NOT put the word \"Advisor\" for PI/Advisor and the word \"School\" for \"Affiliation\", please specify the name of your advisor and the name of your school).", "extra_gated_fields": {"Name": "text", "PI/Advisor": "text", "Affiliation": "text", "Purpose": "text", "Country": "text", "I agree to use this dataset for non-commercial use ONLY": "checkbox"}}
2023-09-20T14:05:48+00:00
[]
[ "en" ]
TAGS #language-English #license-other #3D shapes #region-us
This repository contains ShapeNetCore (v2), a subset of ShapeNet. ShapeNetCore is a densely annotated subset of ShapeNet covering 55 common object categories with ~51,300 unique 3D models. Each model in ShapeNetCore is linked to an appropriate synset in WordNet 3.0. Please see URL for details about the data. If you use ShapeNet data, you agree to abide by the ShapeNet terms of use. You are only allowed to redistribute the data to your research associates and colleagues provided that they first agree to be bound by these terms and conditions. If you use this data, please cite the main ShapeNet technical report. For more information, please contact us at shapenetwebmaster@URL and indicate ShapeNetCore v2 in the title of your email.
[]
[ "TAGS\n#language-English #license-other #3D shapes #region-us \n" ]
[ 19 ]
[ "passage: TAGS\n#language-English #license-other #3D shapes #region-us \n" ]
161773d6bbc56e44575c2c3fe2eb367531843818
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: autoevaluate/multi-class-classification * Dataset: emotion * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-emotion-ed9fef1a-13685888
[ "autotrain", "evaluation", "region:us" ]
2022-08-26T08:37:48+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "autoevaluate/multi-class-classification", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}}
2022-08-26T08:38:16+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Text Classification * Model: autoevaluate/multi-class-classification * Dataset: emotion * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: autoevaluate/multi-class-classification\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: autoevaluate/multi-class-classification\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 87, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: autoevaluate/multi-class-classification\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
9d1adbcfd839d250e57ba00f5626c2a9bc2ba7b6
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: autoevaluate/multi-class-classification * Dataset: emotion * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-emotion-a7ced70d-13715889
[ "autotrain", "evaluation", "region:us" ]
2022-08-26T08:52:03+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "autoevaluate/multi-class-classification", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}}
2022-08-26T08:52:29+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Text Classification * Model: autoevaluate/multi-class-classification * Dataset: emotion * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: autoevaluate/multi-class-classification\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: autoevaluate/multi-class-classification\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 87, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: autoevaluate/multi-class-classification\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
41b13853d318d8f2aac4db268055ab7c99d27d9f
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: autoevaluate/multi-class-classification * Dataset: emotion * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-emotion-1d3a2bc7-13735890
[ "autotrain", "evaluation", "region:us" ]
2022-08-26T09:08:22+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "autoevaluate/multi-class-classification", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}}
2022-08-26T09:08:48+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Text Classification * Model: autoevaluate/multi-class-classification * Dataset: emotion * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: autoevaluate/multi-class-classification\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: autoevaluate/multi-class-classification\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 87, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: autoevaluate/multi-class-classification\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
6ab186192e317f65fb9f28127827c3b6a5001f30
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: josmunpen/mt5-small-spanish-summarization * Dataset: LeoCordoba/CC-NEWS-ES-titles * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@LeoCordoba](https://huggingface.co/LeoCordoba) for evaluating this model.
autoevaluate/autoeval-eval-project-LeoCordoba__CC-NEWS-ES-titles-0e1ed2c7-1320150403
[ "autotrain", "evaluation", "region:us" ]
2022-08-26T10:35:30+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["LeoCordoba/CC-NEWS-ES-titles"], "eval_info": {"task": "summarization", "model": "josmunpen/mt5-small-spanish-summarization", "metrics": [], "dataset_name": "LeoCordoba/CC-NEWS-ES-titles", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "output_text"}}}
2022-08-26T10:42:03+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: josmunpen/mt5-small-spanish-summarization * Dataset: LeoCordoba/CC-NEWS-ES-titles * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @LeoCordoba for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: josmunpen/mt5-small-spanish-summarization\n* Dataset: LeoCordoba/CC-NEWS-ES-titles\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @LeoCordoba for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: josmunpen/mt5-small-spanish-summarization\n* Dataset: LeoCordoba/CC-NEWS-ES-titles\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @LeoCordoba for evaluating this model." ]
[ 13, 102, 18 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: josmunpen/mt5-small-spanish-summarization\n* Dataset: LeoCordoba/CC-NEWS-ES-titles\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @LeoCordoba for evaluating this model." ]
f8135894035cb2881d24390353fbf528fe3dc906
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: LeoCordoba/mt5-small-cc-news-es-titles * Dataset: LeoCordoba/CC-NEWS-ES-titles * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@LeoCordoba](https://huggingface.co/LeoCordoba) for evaluating this model.
autoevaluate/autoeval-eval-project-LeoCordoba__CC-NEWS-ES-titles-0e1ed2c7-1320150404
[ "autotrain", "evaluation", "region:us" ]
2022-08-26T10:35:36+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["LeoCordoba/CC-NEWS-ES-titles"], "eval_info": {"task": "summarization", "model": "LeoCordoba/mt5-small-cc-news-es-titles", "metrics": [], "dataset_name": "LeoCordoba/CC-NEWS-ES-titles", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "output_text"}}}
2022-08-26T10:42:07+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: LeoCordoba/mt5-small-cc-news-es-titles * Dataset: LeoCordoba/CC-NEWS-ES-titles * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @LeoCordoba for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: LeoCordoba/mt5-small-cc-news-es-titles\n* Dataset: LeoCordoba/CC-NEWS-ES-titles\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @LeoCordoba for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: LeoCordoba/mt5-small-cc-news-es-titles\n* Dataset: LeoCordoba/CC-NEWS-ES-titles\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @LeoCordoba for evaluating this model." ]
[ 13, 105, 18 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: LeoCordoba/mt5-small-cc-news-es-titles\n* Dataset: LeoCordoba/CC-NEWS-ES-titles\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @LeoCordoba for evaluating this model." ]
6d228ace568d2c1de21d663452f1c25938774286
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: autoevaluate/extractive-question-answering * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
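For reference, a minimal sketch of reproducing one prediction from this evaluation (the AutoTrain job's exact settings are not recorded here):

```python
from datasets import load_dataset
from transformers import pipeline

sample = load_dataset("squad", split="validation")[0]
qa = pipeline("question-answering", model="autoevaluate/extractive-question-answering")
print(qa(question=sample["question"], context=sample["context"]))
```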
autoevaluate/autoeval-staging-eval-project-squad-b541c518-13705892
[ "autotrain", "evaluation", "region:us" ]
2022-08-26T12:01:23+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/extractive-question-answering", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-08-26T12:03:38+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Question Answering * Model: autoevaluate/extractive-question-answering * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/extractive-question-answering\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/extractive-question-answering\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 91, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/extractive-question-answering\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
5261fdbd27f9caf2abd70fdb48963c829ef7c00e
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: autoevaluate/extractive-question-answering * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-squad-30a8951e-13725893
[ "autotrain", "evaluation", "region:us" ]
2022-08-26T12:01:26+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/extractive-question-answering", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-08-26T12:03:44+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Question Answering * Model: autoevaluate/extractive-question-answering * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/extractive-question-answering\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/extractive-question-answering\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 91, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/extractive-question-answering\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
9cc1c7b8d9200c633fb1fdb3870ee18a43bcbc26
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: autoevaluate/extractive-question-answering * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-squad-08ca88d1-13695891
[ "autotrain", "evaluation", "region:us" ]
2022-08-26T12:01:44+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/extractive-question-answering", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-08-26T12:04:02+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Question Answering * Model: autoevaluate/extractive-question-answering * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/extractive-question-answering\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/extractive-question-answering\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 91, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/extractive-question-answering\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
300aa70d0b8680b78f26487f34738c3ad25d20de
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Token Classification * Model: autoevaluate/entity-extraction * Dataset: conll2003 * Config: conll2003 * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
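For reference, a minimal sketch of tagging one test sentence with this model (the AutoTrain job's exact settings are not recorded here):

```python
from datasets import load_dataset
from transformers import pipeline

sample = load_dataset("conll2003", split="test")[0]
ner = pipeline("token-classification",
               model="autoevaluate/entity-extraction",
               aggregation_strategy="simple")

# conll2003 stores pre-tokenized words; join them into a string for the pipeline.
print(ner(" ".join(sample["tokens"])))
```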
autoevaluate/autoeval-staging-eval-project-conll2003-90a08c43-13745894
[ "autotrain", "evaluation", "region:us" ]
2022-08-26T12:01:58+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["conll2003"], "eval_info": {"task": "entity_extraction", "model": "autoevaluate/entity-extraction", "metrics": [], "dataset_name": "conll2003", "dataset_config": "conll2003", "dataset_split": "test", "col_mapping": {"tokens": "tokens", "tags": "ner_tags"}}}
2022-08-26T12:03:03+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Token Classification * Model: autoevaluate/entity-extraction * Dataset: conll2003 * Config: conll2003 * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: autoevaluate/entity-extraction\n* Dataset: conll2003\n* Config: conll2003\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: autoevaluate/entity-extraction\n* Dataset: conll2003\n* Config: conll2003\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 88, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Token Classification\n* Model: autoevaluate/entity-extraction\n* Dataset: conll2003\n* Config: conll2003\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
e897197576f659a384e06cdf1586482fa76efc87
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: autoevaluate/extractive-question-answering * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-squad-884b60f3-13755895
[ "autotrain", "evaluation", "region:us" ]
2022-08-26T12:13:37+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/extractive-question-answering", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-08-26T12:15:55+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by AutoTrain for the following task and dataset:

* Task: Question Answering
* Model: autoevaluate/extractive-question-answering
* Dataset: squad
* Config: plain_text
* Split: validation

To run new evaluation jobs, visit Hugging Face's automatic model evaluator.

## Contributions

Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/extractive-question-answering\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/extractive-question-answering\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 91, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/extractive-question-answering\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
2e4b287dda99722789449ed901e31a6b153d7739
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Binary Image Classification
* Model: abhishek/autotrain-dog-vs-food
* Dataset: sasha/dog-food
* Config: sasha--dog-food
* Split: train

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@ahmetgunduz](https://huggingface.co/ahmetgunduz) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-sasha__dog-food-8a6c4abe-13775897
[ "autotrain", "evaluation", "region:us" ]
2022-08-26T13:54:51+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["sasha/dog-food"], "eval_info": {"task": "image_binary_classification", "model": "abhishek/autotrain-dog-vs-food", "metrics": ["matthews_correlation"], "dataset_name": "sasha/dog-food", "dataset_config": "sasha--dog-food", "dataset_split": "train", "col_mapping": {"image": "image", "target": "label"}}}
2022-08-26T13:55:53+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by AutoTrain for the following task and dataset:

* Task: Binary Image Classification
* Model: abhishek/autotrain-dog-vs-food
* Dataset: sasha/dog-food
* Config: sasha--dog-food
* Split: train

To run new evaluation jobs, visit Hugging Face's automatic model evaluator.

## Contributions

Thanks to @ahmetgunduz for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Image Classification\n* Model: abhishek/autotrain-dog-vs-food\n* Dataset: sasha/dog-food\n* Config: sasha--dog-food\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ahmetgunduz for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Image Classification\n* Model: abhishek/autotrain-dog-vs-food\n* Dataset: sasha/dog-food\n* Config: sasha--dog-food\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ahmetgunduz for evaluating this model." ]
[ 13, 101, 18 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Image Classification\n* Model: abhishek/autotrain-dog-vs-food\n* Dataset: sasha/dog-food\n* Config: sasha--dog-food\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @ahmetgunduz for evaluating this model." ]
5cdc512c0c73bde43a077497e24fc006f149b377
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Binary Image Classification
* Model: sasha/dog-food-swin-tiny-patch4-window7-224
* Dataset: sasha/dog-food
* Config: sasha--dog-food
* Split: train

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@ahmetgunduz](https://huggingface.co/ahmetgunduz) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-sasha__dog-food-8a6c4abe-13775898
[ "autotrain", "evaluation", "region:us" ]
2022-08-26T13:54:55+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["sasha/dog-food"], "eval_info": {"task": "image_binary_classification", "model": "sasha/dog-food-swin-tiny-patch4-window7-224", "metrics": ["matthews_correlation"], "dataset_name": "sasha/dog-food", "dataset_config": "sasha--dog-food", "dataset_split": "train", "col_mapping": {"image": "image", "target": "label"}}}
2022-08-26T13:55:52+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by AutoTrain for the following task and dataset:

* Task: Binary Image Classification
* Model: sasha/dog-food-swin-tiny-patch4-window7-224
* Dataset: sasha/dog-food
* Config: sasha--dog-food
* Split: train

To run new evaluation jobs, visit Hugging Face's automatic model evaluator.

## Contributions

Thanks to @ahmetgunduz for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Image Classification\n* Model: sasha/dog-food-swin-tiny-patch4-window7-224\n* Dataset: sasha/dog-food\n* Config: sasha--dog-food\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ahmetgunduz for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Image Classification\n* Model: sasha/dog-food-swin-tiny-patch4-window7-224\n* Dataset: sasha/dog-food\n* Config: sasha--dog-food\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ahmetgunduz for evaluating this model." ]
[ 13, 107, 18 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Image Classification\n* Model: sasha/dog-food-swin-tiny-patch4-window7-224\n* Dataset: sasha/dog-food\n* Config: sasha--dog-food\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @ahmetgunduz for evaluating this model." ]
f3ce6b224624d2dbb8fc7ba79ddddc4eb102c89e
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Binary Image Classification
* Model: sasha/dog-food-convnext-tiny-224
* Dataset: sasha/dog-food
* Config: sasha--dog-food
* Split: train

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@ahmetgunduz](https://huggingface.co/ahmetgunduz) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-sasha__dog-food-8a6c4abe-13775899
[ "autotrain", "evaluation", "region:us" ]
2022-08-26T13:55:02+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["sasha/dog-food"], "eval_info": {"task": "image_binary_classification", "model": "sasha/dog-food-convnext-tiny-224", "metrics": ["matthews_correlation"], "dataset_name": "sasha/dog-food", "dataset_config": "sasha--dog-food", "dataset_split": "train", "col_mapping": {"image": "image", "target": "label"}}}
2022-08-26T13:55:56+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by AutoTrain for the following task and dataset:

* Task: Binary Image Classification
* Model: sasha/dog-food-convnext-tiny-224
* Dataset: sasha/dog-food
* Config: sasha--dog-food
* Split: train

To run new evaluation jobs, visit Hugging Face's automatic model evaluator.

## Contributions

Thanks to @ahmetgunduz for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Image Classification\n* Model: sasha/dog-food-convnext-tiny-224\n* Dataset: sasha/dog-food\n* Config: sasha--dog-food\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ahmetgunduz for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Image Classification\n* Model: sasha/dog-food-convnext-tiny-224\n* Dataset: sasha/dog-food\n* Config: sasha--dog-food\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ahmetgunduz for evaluating this model." ]
[ 13, 101, 18 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Image Classification\n* Model: sasha/dog-food-convnext-tiny-224\n* Dataset: sasha/dog-food\n* Config: sasha--dog-food\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @ahmetgunduz for evaluating this model." ]
5348159e41b3268f6acbd0fb8f548e2fcaa81dca
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Binary Image Classification
* Model: sasha/dog-food-vit-base-patch16-224-in21k
* Dataset: sasha/dog-food
* Config: sasha--dog-food
* Split: train

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@ahmetgunduz](https://huggingface.co/ahmetgunduz) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-sasha__dog-food-8a6c4abe-13775900
[ "autotrain", "evaluation", "region:us" ]
2022-08-26T13:55:08+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["sasha/dog-food"], "eval_info": {"task": "image_binary_classification", "model": "sasha/dog-food-vit-base-patch16-224-in21k", "metrics": ["matthews_correlation"], "dataset_name": "sasha/dog-food", "dataset_config": "sasha--dog-food", "dataset_split": "train", "col_mapping": {"image": "image", "target": "label"}}}
2022-08-26T13:56:07+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by AutoTrain for the following task and dataset:

* Task: Binary Image Classification
* Model: sasha/dog-food-vit-base-patch16-224-in21k
* Dataset: sasha/dog-food
* Config: sasha--dog-food
* Split: train

To run new evaluation jobs, visit Hugging Face's automatic model evaluator.

## Contributions

Thanks to @ahmetgunduz for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Image Classification\n* Model: sasha/dog-food-vit-base-patch16-224-in21k\n* Dataset: sasha/dog-food\n* Config: sasha--dog-food\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ahmetgunduz for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Image Classification\n* Model: sasha/dog-food-vit-base-patch16-224-in21k\n* Dataset: sasha/dog-food\n* Config: sasha--dog-food\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ahmetgunduz for evaluating this model." ]
[ 13, 106, 18 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Image Classification\n* Model: sasha/dog-food-vit-base-patch16-224-in21k\n* Dataset: sasha/dog-food\n* Config: sasha--dog-food\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @ahmetgunduz for evaluating this model." ]
113d1a02c1000ed7d2fc83ea05b793aedf45ed04
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Multi-class Text Classification
* Model: Abdelrahman-Rezk/distilbert-base-uncased-finetuned-emotion
* Dataset: emotion
* Config: default
* Split: test

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@ahmetgunduz](https://huggingface.co/ahmetgunduz) for evaluating this model.
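The evaluated checkpoint can also be tried directly; a hedged sketch with the `transformers` text-classification pipeline, assuming the fine-tuned model is publicly hosted on the Hub (the input sentence is illustrative):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Abdelrahman-Rezk/distilbert-base-uncased-finetuned-emotion",
)
# Returns the top emotion label and its score for the illustrative input.
print(classifier("I am thrilled with how this turned out!"))
```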
autoevaluate/autoeval-staging-eval-project-emotion-8f618256-13785901
[ "autotrain", "evaluation", "region:us" ]
2022-08-26T13:55:13+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "Abdelrahman-Rezk/distilbert-base-uncased-finetuned-emotion", "metrics": ["matthews_correlation"], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}}
2022-08-26T13:55:39+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by AutoTrain for the following task and dataset:

* Task: Multi-class Text Classification
* Model: Abdelrahman-Rezk/distilbert-base-uncased-finetuned-emotion
* Dataset: emotion
* Config: default
* Split: test

To run new evaluation jobs, visit Hugging Face's automatic model evaluator.

## Contributions

Thanks to @ahmetgunduz for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: Abdelrahman-Rezk/distilbert-base-uncased-finetuned-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ahmetgunduz for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: Abdelrahman-Rezk/distilbert-base-uncased-finetuned-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ahmetgunduz for evaluating this model." ]
[ 13, 101, 18 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: Abdelrahman-Rezk/distilbert-base-uncased-finetuned-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @ahmetgunduz for evaluating this model." ]
7b656d3d66a90c5f20d5c39934ffdc4a7fca1b66
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Multi-class Text Classification
* Model: Ahmed007/distilbert-base-uncased-finetuned-emotion
* Dataset: emotion
* Config: default
* Split: test

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@ahmetgunduz](https://huggingface.co/ahmetgunduz) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-emotion-8f618256-13785902
[ "autotrain", "evaluation", "region:us" ]
2022-08-26T13:55:19+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "Ahmed007/distilbert-base-uncased-finetuned-emotion", "metrics": ["matthews_correlation"], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}}
2022-08-26T13:55:44+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by AutoTrain for the following task and dataset:

* Task: Multi-class Text Classification
* Model: Ahmed007/distilbert-base-uncased-finetuned-emotion
* Dataset: emotion
* Config: default
* Split: test

To run new evaluation jobs, visit Hugging Face's automatic model evaluator.

## Contributions

Thanks to @ahmetgunduz for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: Ahmed007/distilbert-base-uncased-finetuned-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ahmetgunduz for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: Ahmed007/distilbert-base-uncased-finetuned-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ahmetgunduz for evaluating this model." ]
[ 13, 96, 18 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: Ahmed007/distilbert-base-uncased-finetuned-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @ahmetgunduz for evaluating this model." ]
f806a9562420f08f3ac7be388014a057449722f5
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Multi-class Text Classification
* Model: tbasic5/distilbert-base-uncased-finetuned-emotion
* Dataset: emotion
* Config: default
* Split: test

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-emotion-04ae905d-13795904
[ "autotrain", "evaluation", "region:us" ]
2022-08-26T14:05:11+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "tbasic5/distilbert-base-uncased-finetuned-emotion", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}}
2022-08-26T14:05:37+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by AutoTrain for the following task and dataset:

* Task: Multi-class Text Classification
* Model: tbasic5/distilbert-base-uncased-finetuned-emotion
* Dataset: emotion
* Config: default
* Split: test

To run new evaluation jobs, visit Hugging Face's automatic model evaluator.

## Contributions

Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: tbasic5/distilbert-base-uncased-finetuned-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: tbasic5/distilbert-base-uncased-finetuned-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 98, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: tbasic5/distilbert-base-uncased-finetuned-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
aacf079fc5d248f979e4a1c7dedf1fcdc07a2b69
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Question Answering
* Model: 123tarunanand/roberta-base-finetuned
* Dataset: squad_v2
* Config: squad_v2
* Split: validation

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-squad_v2-bddd30a5-13805905
[ "autotrain", "evaluation", "region:us" ]
2022-08-26T14:24:24+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "123tarunanand/roberta-base-finetuned", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-08-26T14:27:25+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by AutoTrain for the following task and dataset:

* Task: Question Answering
* Model: 123tarunanand/roberta-base-finetuned
* Dataset: squad_v2
* Config: squad_v2
* Split: validation

To run new evaluation jobs, visit Hugging Face's automatic model evaluator.

## Contributions

Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: 123tarunanand/roberta-base-finetuned\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: 123tarunanand/roberta-base-finetuned\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 96, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: 123tarunanand/roberta-base-finetuned\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
86cb54e837d8bd67b8432be7b4a7a4e73f64535f
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Natural Language Inference
* Model: autoevaluate/glue-mrpc
* Dataset: glue
* Config: mrpc
* Split: test

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-glue-fa8727be-13825907
[ "autotrain", "evaluation", "region:us" ]
2022-08-26T15:43:01+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "natural_language_inference", "model": "autoevaluate/glue-mrpc", "metrics": [], "dataset_name": "glue", "dataset_config": "mrpc", "dataset_split": "test", "col_mapping": {"text1": "sentence1", "text2": "sentence2", "target": "label"}}}
2022-08-26T15:43:30+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by AutoTrain for the following task and dataset:

* Task: Natural Language Inference
* Model: autoevaluate/glue-mrpc
* Dataset: glue
* Config: mrpc
* Split: test

To run new evaluation jobs, visit Hugging Face's automatic model evaluator.

## Contributions

Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: autoevaluate/glue-mrpc\n* Dataset: glue\n* Config: mrpc\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: autoevaluate/glue-mrpc\n* Dataset: glue\n* Config: mrpc\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 87, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: autoevaluate/glue-mrpc\n* Dataset: glue\n* Config: mrpc\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
5eb65ec3e766cf83f00e4bd20d7f214dfee652da
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:

* Task: Zero-Shot Text Classification
* Model: autoevaluate/zero-shot-classification
* Dataset: autoevaluate/zero-shot-classification-sample
* Config: autoevaluate--zero-shot-classification-sample
* Split: test

To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).

## Contributions

Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model.
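The evaluated model can be exercised with the standard zero-shot pipeline; a sketch assuming the checkpoint is publicly hosted (the text and candidate labels below are illustrative, not drawn from the sample dataset):

```python
from transformers import pipeline

zsc = pipeline("zero-shot-classification", model="autoevaluate/zero-shot-classification")
out = zsc(
    "The new phone ships with a much better camera.",
    candidate_labels=["technology", "sports", "politics"],
)
print(out["labels"][0], out["scores"][0])  # best label and its score
```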
autoevaluate/autoeval-staging-eval-project-autoevaluate__zero-shot-classification-sample-c8bb9099-11
[ "autotrain", "evaluation", "region:us" ]
2022-08-26T18:53:30+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["autoevaluate/zero-shot-classification-sample"], "eval_info": {"task": "zero_shot_classification", "model": "autoevaluate/zero-shot-classification", "metrics": [], "dataset_name": "autoevaluate/zero-shot-classification-sample", "dataset_config": "autoevaluate--zero-shot-classification-sample", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}
2022-08-26T18:54:42+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator

This repository contains model predictions generated by AutoTrain for the following task and dataset:

* Task: Zero-Shot Text Classification
* Model: autoevaluate/zero-shot-classification
* Dataset: autoevaluate/zero-shot-classification-sample
* Config: autoevaluate--zero-shot-classification-sample
* Split: test

To run new evaluation jobs, visit Hugging Face's automatic model evaluator.

## Contributions

Thanks to @mathemakitten for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: autoevaluate/zero-shot-classification\n* Dataset: autoevaluate/zero-shot-classification-sample\n* Config: autoevaluate--zero-shot-classification-sample\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @mathemakitten for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: autoevaluate/zero-shot-classification\n* Dataset: autoevaluate/zero-shot-classification-sample\n* Config: autoevaluate--zero-shot-classification-sample\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @mathemakitten for evaluating this model." ]
[ 13, 113, 17 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: autoevaluate/zero-shot-classification\n* Dataset: autoevaluate/zero-shot-classification-sample\n* Config: autoevaluate--zero-shot-classification-sample\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @mathemakitten for evaluating this model." ]
322604b436887a56f8cbcdd4ed3ecf2e60a2a488
# Dataset Card for "ArabicNLPDataset" ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Dataset Preprocessing](#dataset-preprocessing) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/BihterDass/ArabicTextClassificationDataset] - **Repository:** [https://github.com/BihterDass/ArabicTextClassificationDataset] - **Size of downloaded dataset files:** 23.5 MB - **Size of the generated dataset:** 23.5 MB ### Dataset Summary The dataset was compiled from user comments from e-commerce sites. It consists of 10,000 validations, 10,000 tests and 80000 train data. Data were classified into 3 classes (positive(pos), negative(neg) and natural(nor). The data is available to you on github. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] #### arabic-dataset-v1 - **Size of downloaded dataset files:** 23.5 MB - **Size of the generated dataset:** 23.5 MB ### Data Fields The data fields are the same among all splits. #### arabic-dataset-v-v1 - `text`: a `string` feature. - `label`: a classification label, with possible values including `positive` (2), `natural` (1), `negative` (0). ### Data Splits | |train |validation|test | |----|--------:|---------:|---------:| |Data| 80000 | 10000 | 10000 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@PnrSvc](https://github.com/PnrSvc) for adding this dataset.
BDas/ArabicNLPDataset
[ "task_categories:text-classification", "task_ids:multi-class-classification", "task_ids:multi-label-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:ar", "license:other", "region:us" ]
2022-08-26T20:33:24+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["ar"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification", "multi-label-classification"], "pretty_name": "ArabicNLPDataset"}
2022-09-26T17:52:01+00:00
[]
[ "ar" ]
TAGS #task_categories-text-classification #task_ids-multi-class-classification #task_ids-multi-label-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Arabic #license-other #region-us
Dataset Card for "ArabicNLPDataset" =================================== Table of Contents ----------------- * Table of Contents * Dataset Description + Dataset Summary + Dataset Preprocessing + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: [URL * Repository: [URL * Size of downloaded dataset files: 23.5 MB * Size of the generated dataset: 23.5 MB ### Dataset Summary The dataset was compiled from user comments from e-commerce sites. It consists of 10,000 validations, 10,000 tests and 80000 train data. Data were classified into 3 classes (positive(pos), negative(neg) and natural(nor). The data is available to you on github. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### arabic-dataset-v1 * Size of downloaded dataset files: 23.5 MB * Size of the generated dataset: 23.5 MB ### Data Fields The data fields are the same among all splits. #### arabic-dataset-v-v1 * 'text': a 'string' feature. * 'label': a classification label, with possible values including 'positive' (2), 'natural' (1), 'negative' (0). ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @PnrSvc for adding this dataset.
[ "### Dataset Summary\n\n\nThe dataset was compiled from user comments from e-commerce sites. It consists of 10,000 validations, 10,000 tests and 80000 train data. Data were classified into 3 classes (positive(pos), negative(neg) and natural(nor). The data is available to you on github.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### arabic-dataset-v1\n\n\n* Size of downloaded dataset files: 23.5 MB\n* Size of the generated dataset: 23.5 MB", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### arabic-dataset-v-v1\n\n\n* 'text': a 'string' feature.\n* 'label': a classification label, with possible values including 'positive' (2), 'natural' (1), 'negative' (0).", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @PnrSvc for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #task_ids-multi-label-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Arabic #license-other #region-us \n", "### Dataset Summary\n\n\nThe dataset was compiled from user comments from e-commerce sites. It consists of 10,000 validations, 10,000 tests and 80000 train data. Data were classified into 3 classes (positive(pos), negative(neg) and natural(nor). The data is available to you on github.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### arabic-dataset-v1\n\n\n* Size of downloaded dataset files: 23.5 MB\n* Size of the generated dataset: 23.5 MB", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### arabic-dataset-v-v1\n\n\n* 'text': a 'string' feature.\n* 'label': a classification label, with possible values including 'positive' (2), 'natural' (1), 'negative' (0).", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @PnrSvc for adding this dataset." ]
[ 103, 71, 10, 11, 6, 34, 17, 54, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 18 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #task_ids-multi-label-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Arabic #license-other #region-us \n### Dataset Summary\n\n\nThe dataset was compiled from user comments from e-commerce sites. It consists of 10,000 validations, 10,000 tests and 80000 train data. Data were classified into 3 classes (positive(pos), negative(neg) and natural(nor). The data is available to you on github.### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### arabic-dataset-v1\n\n\n* Size of downloaded dataset files: 23.5 MB\n* Size of the generated dataset: 23.5 MB### Data Fields\n\n\nThe data fields are the same among all splits.#### arabic-dataset-v-v1\n\n\n* 'text': a 'string' feature.\n* 'label': a classification label, with possible values including 'positive' (2), 'natural' (1), 'negative' (0).### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators### Licensing Information### Contributions\n\n\nThanks to @PnrSvc for adding this dataset." ]
5cd0772a7dcaeb16cf7ddf6fc845cc35cf5428a9
This is a copy of the [MS^2](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `validation` split have been replaced by a __sparse__ retriever.

The retrieval pipeline used:

- __query__: The `background` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`.
- __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==25`

Retrieval results on the `train` set:

| Recall@100 | Rprec  | Precision@k | Recall@k |
| ---------- | ------ | ----------- | -------- |
| 0.4333     | 0.2163 | 0.1746      | 0.2636   |

Retrieval results on the `validation` set:

| Recall@100 | Rprec  | Precision@k | Recall@k |
| ---------- | ------ | ----------- | -------- |
| 0.378      | 0.1827 | 0.1559      | 0.2188   |

Retrieval results on the `test` set:

| Recall@100 | Rprec  | Precision@k | Recall@k |
| ---------- | ------ | ----------- | -------- |
| 0.3928     | 0.1898 | 0.1672      | 0.2208   |
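A rough sketch of the pipeline described above, using PyTerrier's BM25 with default settings; the documents, index path, and query here are all hypothetical stand-ins, not the actual MS^2 corpus:

```python
import os
import pyterrier as pt

if not pt.started():
    pt.init()

# Hypothetical corpus: each entry is a title concatenated with its abstract,
# keyed by a string docno, mirroring the corpus construction described above.
docs = [
    {"docno": "1", "text": "Title of study one. Abstract of study one."},
    {"docno": "2", "text": "Title of study two. Abstract of study two."},
]

indexref = pt.IterDictIndexer(os.path.abspath("./ms2_index")).index(iter(docs))
bm25 = pt.BatchRetrieve(indexref, wmodel="BM25")  # default settings

# Query with an example's background field and keep the top k=25 hits.
hits = bm25.search("effect of treatment on patient outcomes").head(25)
print(hits[["docno", "score"]])
```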
allenai/ms2_sparse_max
[ "task_categories:summarization", "task_categories:text2text-generation", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|other-MS^2", "source_datasets:extended|other-Cochrane", "language:en", "license:apache-2.0", "region:us" ]
2022-08-26T20:40:42+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|other-MS^2", "extended|other-Cochrane"], "task_categories": ["summarization", "text2text-generation"], "paperswithcode_id": "multi-document-summarization", "pretty_name": "MSLR Shared Task"}
2022-11-24T16:27:49+00:00
[]
[ "en" ]
TAGS #task_categories-summarization #task_categories-text2text-generation #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other-MS^2 #source_datasets-extended|other-Cochrane #language-English #license-apache-2.0 #region-us
This is a copy of the MS^2 dataset, except the input source documents of its 'validation' split have been replaced by a **sparse** retriever.

The retrieval pipeline used:

* **query**: The 'background' field of each example
* **corpus**: The union of all documents in the 'train', 'validation' and 'test' splits. A document is the concatenation of the 'title' and 'abstract'.
* **retriever**: BM25 via PyTerrier with default settings
* **top-k strategy**: '"max"', i.e. the number of documents retrieved, 'k', is set as the maximum number of documents seen across examples in this dataset, in this case 'k==25'

Retrieval results on the 'train' set:

Retrieval results on the 'validation' set:

Retrieval results on the 'test' set:
[]
[ "TAGS\n#task_categories-summarization #task_categories-text2text-generation #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other-MS^2 #source_datasets-extended|other-Cochrane #language-English #license-apache-2.0 #region-us \n" ]
[ 117 ]
[ "passage: TAGS\n#task_categories-summarization #task_categories-text2text-generation #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other-MS^2 #source_datasets-extended|other-Cochrane #language-English #license-apache-2.0 #region-us \n" ]
23755f1da3b2378649c7259cdb111bf6985dcbf4
This is a copy of the [Multi-News](https://huggingface.co/datasets/multi_news) dataset, except the input source documents of its `test` split have been replaced by a __sparse__ retriever.

The retrieval pipeline used:

- __query__: The `summary` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits
- __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"max"`, i.e. the number of documents retrieved, `k`, is set as the maximum number of documents seen across examples in this dataset, in this case `k==10`

Retrieval results on the `train` set:

| Recall@100 | Rprec  | Precision@k | Recall@k |
| ---------- | ------ | ----------- | -------- |
| 0.8793     | 0.7460 | 0.2213      | 0.8264   |

Retrieval results on the `validation` set:

| Recall@100 | Rprec  | Precision@k | Recall@k |
| ---------- | ------ | ----------- | -------- |
| 0.8748     | 0.7453 | 0.2173      | 0.8232   |

Retrieval results on the `test` set:

| Recall@100 | Rprec  | Precision@k | Recall@k |
| ---------- | ------ | ----------- | -------- |
| 0.8775     | 0.7480 | 0.2187      | 0.8250   |
allenai/multinews_sparse_max
[ "task_categories:summarization", "task_ids:news-articles-summarization", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:other", "region:us" ]
2022-08-26T20:41:47+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["summarization"], "task_ids": ["news-articles-summarization"], "paperswithcode_id": "multi-news", "pretty_name": "Multi-News", "train-eval-index": [{"config": "default", "task": "summarization", "task_id": "summarization", "splits": {"train_split": "train", "eval_split": "test"}, "col_mapping": {"document": "text", "summary": "target"}, "metrics": [{"type": "rouge", "name": "Rouge"}]}]}
2022-11-24T21:34:53+00:00
[]
[ "en" ]
TAGS #task_categories-summarization #task_ids-news-articles-summarization #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-other #region-us
This is a copy of the Multi-News dataset, except the input source documents of its 'test' split have been replaced by a **sparse** retriever.

The retrieval pipeline used:

* **query**: The 'summary' field of each example
* **corpus**: The union of all documents in the 'train', 'validation' and 'test' splits
* **retriever**: BM25 via PyTerrier with default settings
* **top-k strategy**: '"max"', i.e. the number of documents retrieved, 'k', is set as the maximum number of documents seen across examples in this dataset, in this case 'k==10'

Retrieval results on the 'train' set:

Retrieval results on the 'validation' set:

Retrieval results on the 'test' set:
[]
[ "TAGS\n#task_categories-summarization #task_ids-news-articles-summarization #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-other #region-us \n" ]
[ 91 ]
[ "passage: TAGS\n#task_categories-summarization #task_ids-news-articles-summarization #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-other #region-us \n" ]
ff35b25f752f55aa21076b843b81eceaf7720700
This is a copy of the [MS^2](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `validation` split have been replaced by a __sparse__ retriever.

The retrieval pipeline used:

- __query__: The `background` field of each example
- __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`.
- __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings
- __top-k strategy__: `"mean"`, i.e. the number of documents retrieved, `k`, is set as the mean number of documents seen across examples in this dataset, in this case `k==17`

Retrieval results on the `train` set:

| Recall@100 | Rprec  | Precision@k | Recall@k |
| ---------- | ------ | ----------- | -------- |
| 0.4333     | 0.2163 | 0.2051      | 0.2197   |

Retrieval results on the `validation` set:

| Recall@100 | Rprec  | Precision@k | Recall@k |
| ---------- | ------ | ----------- | -------- |
| 0.3780     | 0.1827 | 0.1815      | 0.1792   |

Retrieval results on the `test` set:

| Recall@100 | Rprec  | Precision@k | Recall@k |
| ---------- | ------ | ----------- | -------- |
| 0.3928     | 0.1898 | 0.1951      | 0.1820   |
allenai/ms2_sparse_mean
[ "task_categories:summarization", "task_categories:text2text-generation", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|other-MS^2", "source_datasets:extended|other-Cochrane", "language:en", "license:apache-2.0", "region:us" ]
2022-08-26T20:41:58+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|other-MS^2", "extended|other-Cochrane"], "task_categories": ["summarization", "text2text-generation"], "paperswithcode_id": "multi-document-summarization", "pretty_name": "MSLR Shared Task"}
2022-11-24T16:29:28+00:00
[]
[ "en" ]
TAGS #task_categories-summarization #task_categories-text2text-generation #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other-MS^2 #source_datasets-extended|other-Cochrane #language-English #license-apache-2.0 #region-us
This is a copy of the MS^2 dataset, except the input source documents of its 'validation' split have been replaced by a **sparse** retriever.

The retrieval pipeline used:

* **query**: The 'background' field of each example
* **corpus**: The union of all documents in the 'train', 'validation' and 'test' splits. A document is the concatenation of the 'title' and 'abstract'.
* **retriever**: BM25 via PyTerrier with default settings
* **top-k strategy**: '"mean"', i.e. the number of documents retrieved, 'k', is set as the mean number of documents seen across examples in this dataset, in this case 'k==17'

Retrieval results on the 'train' set:

Retrieval results on the 'validation' set:

Retrieval results on the 'test' set:
[]
[ "TAGS\n#task_categories-summarization #task_categories-text2text-generation #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other-MS^2 #source_datasets-extended|other-Cochrane #language-English #license-apache-2.0 #region-us \n" ]
[ 117 ]
[ "passage: TAGS\n#task_categories-summarization #task_categories-text2text-generation #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other-MS^2 #source_datasets-extended|other-Cochrane #language-English #license-apache-2.0 #region-us \n" ]
54ddf91b0bbb3c820729e5b4a3c993edbe22a591
This is a copy of the [MS^2](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `validation` split have been replaced by documents retrieved with a __sparse__ retriever. The retrieval pipeline used: - __query__: The `background` field of each example - __corpus__: The union of all documents in the `train`, `validation` and `test` splits. A document is the concatenation of the `title` and `abstract`. - __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings - __top-k strategy__: `"oracle"`, i.e. the number of documents retrieved, `k`, is set as the original number of input documents for each example Retrieval results on the `train` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.4333 | 0.2163 | 0.2163 | 0.2163 | Retrieval results on the `validation` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.3780 | 0.1827 | 0.1827 | 0.1827 | Retrieval results on the `test` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.3928 | 0.1898 | 0.1898 | 0.1898 |
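The `"oracle"` variant differs from the `"mean"` variant only in how many retrieved documents are kept per example. A sketch, reusing the hypothetical `ms2` dataset object and `bm25` retriever from the sketch under the `"mean"` variant above:

```python
# "oracle" top-k: keep as many documents as the example originally had.
for ex in ms2["validation"]:
    k = len(ex["pmid"])  # original number of input documents (field name assumed)
    query = "".join(c if c.isalnum() else " " for c in ex["background"])
    retrieved = bm25.search(query).head(k)
```

Because `k` then equals the number of relevant documents for each example, Rprec, Precision@k and Recall@k coincide by construction, which is why those three columns are identical in the tables above.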
allenai/ms2_sparse_oracle
[ "task_categories:summarization", "task_categories:text2text-generation", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|other-MS^2", "source_datasets:extended|other-Cochrane", "language:en", "license:apache-2.0", "region:us" ]
2022-08-26T20:42:33+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|other-MS^2", "extended|other-Cochrane"], "task_categories": ["summarization", "text2text-generation"], "paperswithcode_id": "multi-document-summarization", "pretty_name": "MSLR Shared Task"}
2022-11-24T16:34:37+00:00
[]
[ "en" ]
TAGS #task_categories-summarization #task_categories-text2text-generation #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other-MS^2 #source_datasets-extended|other-Cochrane #language-English #license-apache-2.0 #region-us
This is a copy of the MS^2 dataset, except the input source documents of its 'validation' split have been replaced by documents retrieved with a **sparse** retriever. The retrieval pipeline used: * **query**: The 'background' field of each example * **corpus**: The union of all documents in the 'train', 'validation' and 'test' splits. A document is the concatenation of the 'title' and 'abstract'. * **retriever**: BM25 via PyTerrier with default settings * **top-k strategy**: '"oracle"', i.e. the number of documents retrieved, 'k', is set as the original number of input documents for each example Retrieval results on the 'train' set: Retrieval results on the 'validation' set: Retrieval results on the 'test' set:
[]
[ "TAGS\n#task_categories-summarization #task_categories-text2text-generation #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other-MS^2 #source_datasets-extended|other-Cochrane #language-English #license-apache-2.0 #region-us \n" ]
[ 117 ]
[ "passage: TAGS\n#task_categories-summarization #task_categories-text2text-generation #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|other-MS^2 #source_datasets-extended|other-Cochrane #language-English #license-apache-2.0 #region-us \n" ]
6c3e377d049a087ca6c116e91de57e8a7673a367
This is a copy of the [Multi-News](https://huggingface.co/datasets/multi_news) dataset, except the input source documents of its `test` split have been replaced by documents retrieved with a __sparse__ retriever. The retrieval pipeline used: - __query__: The `summary` field of each example - __corpus__: The union of all documents in the `train`, `validation` and `test` splits - __retriever__: BM25 via [PyTerrier](https://pyterrier.readthedocs.io/en/latest/) with default settings - __top-k strategy__: `"mean"`, i.e. the number of documents retrieved, `k`, is set as the mean number of documents seen across examples in this dataset, in this case `k==3` Retrieval results on the `train` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.8793 | 0.7460 | 0.6403 | 0.7417 | Retrieval results on the `validation` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.8748 | 0.7453 | 0.6361 | 0.7442 | Retrieval results on the `test` set: | Recall@100 | Rprec | Precision@k | Recall@k | | ----------- | ----------- | ----------- | ----------- | | 0.8775 | 0.7480 | 0.6370 | 0.7443 |
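A sketch of how the `"mean"` cutoff could be derived for Multi-News. It assumes the `"|||||"` separator that the Hugging Face `multi_news` dataset uses between source articles in its `document` field:

```python
# Estimate k as the mean number of source documents per example (sketch).
from statistics import mean
from datasets import load_dataset

mn = load_dataset("multi_news")
counts = [len(ex["document"].split("|||||"))  # separator assumed
          for split in ("train", "validation", "test")
          for ex in mn[split]]
k = round(mean(counts))  # ~3 for Multi-News, matching the card
```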
allenai/multinews_sparse_mean
[ "task_categories:summarization", "task_ids:news-articles-summarization", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:en", "license:other", "region:us" ]
2022-08-26T20:42:59+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["summarization"], "task_ids": ["news-articles-summarization"], "paperswithcode_id": "multi-news", "pretty_name": "Multi-News", "train-eval-index": [{"config": "default", "task": "summarization", "task_id": "summarization", "splits": {"train_split": "train", "eval_split": "test"}, "col_mapping": {"document": "text", "summary": "target"}, "metrics": [{"type": "rouge", "name": "Rouge"}]}]}
2022-11-24T21:37:31+00:00
[]
[ "en" ]
TAGS #task_categories-summarization #task_ids-news-articles-summarization #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-other #region-us
This is a copy of the Multi-News dataset, except the input source documents of its 'test' split have been replaced by documents retrieved with a **sparse** retriever. The retrieval pipeline used: * **query**: The 'summary' field of each example * **corpus**: The union of all documents in the 'train', 'validation' and 'test' splits * **retriever**: BM25 via PyTerrier with default settings * **top-k strategy**: '"mean"', i.e. the number of documents retrieved, 'k', is set as the mean number of documents seen across examples in this dataset, in this case 'k==3' Retrieval results on the 'train' set: Retrieval results on the 'validation' set: Retrieval results on the 'test' set:
[]
[ "TAGS\n#task_categories-summarization #task_ids-news-articles-summarization #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-other #region-us \n" ]
[ 91 ]
[ "passage: TAGS\n#task_categories-summarization #task_ids-news-articles-summarization #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-other #region-us \n" ]
21dbd148b6f8581ce774fbe1a84d225aa0dd5a06
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Zero-Shot Text Classification * Model: autoevaluate/zero-shot-classification * Dataset: autoevaluate/zero-shot-classification-sample * Config: autoevaluate--zero-shot-classification-sample * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-autoevaluate__zero-shot-classification-sample-18ef74e8-21
[ "autotrain", "evaluation", "region:us" ]
2022-08-26T23:13:02+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["autoevaluate/zero-shot-classification-sample"], "eval_info": {"task": "text_zero_shot_classification", "model": "autoevaluate/zero-shot-classification", "metrics": [], "dataset_name": "autoevaluate/zero-shot-classification-sample", "dataset_config": "autoevaluate--zero-shot-classification-sample", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}
2022-08-26T23:14:03+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Zero-Shot Text Classification * Model: autoevaluate/zero-shot-classification * Dataset: autoevaluate/zero-shot-classification-sample * Config: autoevaluate--zero-shot-classification-sample * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @mathemakitten for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: autoevaluate/zero-shot-classification\n* Dataset: autoevaluate/zero-shot-classification-sample\n* Config: autoevaluate--zero-shot-classification-sample\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @mathemakitten for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: autoevaluate/zero-shot-classification\n* Dataset: autoevaluate/zero-shot-classification-sample\n* Config: autoevaluate--zero-shot-classification-sample\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @mathemakitten for evaluating this model." ]
[ 13, 113, 17 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: autoevaluate/zero-shot-classification\n* Dataset: autoevaluate/zero-shot-classification-sample\n* Config: autoevaluate--zero-shot-classification-sample\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @mathemakitten for evaluating this model." ]
a3692ff6d4f7958e6eea80025ac7ae9f4472cfe0
# Dataset Card for "EnglishNLPDataset" ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Dataset Preprocessing](#dataset-preprocessing) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/BihterDass/EnglishTextClassificationDataset - **Repository:** https://github.com/BihterDass/EnglishTextClassificationDataset - **Size of downloaded dataset files:** 8.71 MB - **Size of the generated dataset:** 8.71 MB ### Dataset Summary The dataset was compiled from user comments on e-commerce sites. It consists of 80,000 training, 10,000 validation, and 10,000 test examples. The data were classified into 3 classes: positive (pos), negative (neg), and natural (nor). The data is available on GitHub. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] #### english-dataset-v1 - **Size of downloaded dataset files:** 8.71 MB - **Size of the generated dataset:** 8.71 MB ### Data Fields The data fields are the same among all splits. #### english-dataset-v-v1 - `text`: a `string` feature. - `label`: a classification label, with possible values including `positive` (2), `natural` (1), `negative` (0). ### Data Splits | |train |validation|test | |----|--------:|---------:|---------:| |Data| 80000 | 10000 | 10000 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@PnrSvc](https://github.com/PnrSvc) for adding this dataset.
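Since the usage sections above are still placeholders, here is a minimal loading sketch; the repository id and the label mapping come from this card, but neither has been verified against the hosted files:

```python
# Minimal loading sketch; feature names taken from the Data Fields section.
from datasets import load_dataset

ds = load_dataset("BDas/EnglishNLPDataset")
id2label = {0: "negative", 1: "natural", 2: "positive"}
example = ds["train"][0]
print(example["text"], "->", id2label[example["label"]])
```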
BDas/EnglishNLPDataset
[ "task_categories:text-classification", "task_ids:multi-class-classification", "task_ids:multi-label-classification", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:other", "region:us" ]
2022-08-27T09:58:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification", "multi-label-classification"], "pretty_name": "EnglishNLPDataset"}
2022-08-27T10:13:01+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-multi-class-classification #task_ids-multi-label-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-other #region-us
Dataset Card for "EnglishNLPDataset" ==================================== Table of Contents ----------------- * Table of Contents * Dataset Description + Dataset Summary + Dataset Preprocessing + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Size of downloaded dataset files: 8.71 MB * Size of the generated dataset: 8.71 MB ### Dataset Summary The dataset was compiled from user comments on e-commerce sites. It consists of 80,000 training, 10,000 validation, and 10,000 test examples. The data were classified into 3 classes: positive (pos), negative (neg), and natural (nor). The data is available on GitHub. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### english-dataset-v1 * Size of downloaded dataset files: 8.71 MB * Size of the generated dataset: 8.71 MB ### Data Fields The data fields are the same among all splits. #### english-dataset-v-v1 * 'text': a 'string' feature. * 'label': a classification label, with possible values including 'positive' (2), 'natural' (1), 'negative' (0). ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information ### Contributions Thanks to @PnrSvc for adding this dataset.
[ "### Dataset Summary\n\n\nThe dataset was compiled from user comments on e-commerce sites. It consists of 80,000 training, 10,000 validation, and 10,000 test examples. The data were classified into 3 classes: positive (pos), negative (neg), and natural (nor). The data is available on GitHub.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### english-dataset-v1\n\n\n* Size of downloaded dataset files: 8.71 MB\n* Size of the generated dataset: 8.71 MB", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### english-dataset-v-v1\n\n\n* 'text': a 'string' feature.\n* 'label': a classification label, with possible values including 'positive' (2), 'natural' (1), 'negative' (0).", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @PnrSvc for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #task_ids-multi-label-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-other #region-us \n", "### Dataset Summary\n\n\nThe dataset was compiled from user comments on e-commerce sites. It consists of 80,000 training, 10,000 validation, and 10,000 test examples. The data were classified into 3 classes: positive (pos), negative (neg), and natural (nor). The data is available on GitHub.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### english-dataset-v1\n\n\n* Size of downloaded dataset files: 8.71 MB\n* Size of the generated dataset: 8.71 MB", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### english-dataset-v-v1\n\n\n* 'text': a 'string' feature.\n* 'label': a classification label, with possible values including 'positive' (2), 'natural' (1), 'negative' (0).", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information", "### Contributions\n\n\nThanks to @PnrSvc for adding this dataset." ]
[ 102, 71, 10, 11, 6, 33, 17, 53, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 6, 18 ]
[ "passage: TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #task_ids-multi-label-classification #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-other #region-us \n### Dataset Summary\n\n\nThe dataset was compiled from user comments on e-commerce sites. It consists of 80,000 training, 10,000 validation, and 10,000 test examples. The data were classified into 3 classes: positive (pos), negative (neg), and natural (nor). The data is available on GitHub.### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### english-dataset-v1\n\n\n* Size of downloaded dataset files: 8.71 MB\n* Size of the generated dataset: 8.71 MB### Data Fields\n\n\nThe data fields are the same among all splits.#### english-dataset-v-v1\n\n\n* 'text': a 'string' feature.\n* 'label': a classification label, with possible values including 'positive' (2), 'natural' (1), 'negative' (0).### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators### Licensing Information### Contributions\n\n\nThanks to @PnrSvc for adding this dataset." ]
2a76ba3097a5386ab779d20e6a9f86c14de143e0
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: deepset/xlm-roberta-base-squad2 * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@sakamoto](https://huggingface.co/sakamoto) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-squad-d38f255e-13865909
[ "autotrain", "evaluation", "region:us" ]
2022-08-27T12:12:48+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/xlm-roberta-base-squad2", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-08-27T12:15:49+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Question Answering * Model: deepset/xlm-roberta-base-squad2 * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @sakamoto for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/xlm-roberta-base-squad2\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @sakamoto for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/xlm-roberta-base-squad2\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @sakamoto for evaluating this model." ]
[ 13, 92, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/xlm-roberta-base-squad2\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @sakamoto for evaluating this model." ]
04f6537e418eeb88863d617eb27817cc496522d7
This dataset was prepared from the Scanned Receipts OCR and Information Extraction (SROIE) dataset. The SROIE dataset contains 973 scanned receipts in English. Cropping the bounding boxes from each receipt to generate this text-recognition dataset resulted in 33,626 images for the train set and 18,704 images for the test set. The text annotations for all the images inside a split are stored in a metadata.jsonl file. Usage: from datasets import load_dataset; data = load_dataset("priyank-m/SROIE_2019_text_recognition") Source of the raw SROIE dataset: https://www.kaggle.com/datasets/urbikn/sroie-datasetv2
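Given the metadata.jsonl layout described above, the splits presumably follow the Hugging Face `imagefolder` convention, in which case each example exposes an `image` and a `text` feature. A hedged sketch (feature names assumed, not verified against the hosted files):

```python
# Iterate image/transcription pairs (feature names assumed from the imagefolder layout).
from datasets import load_dataset

ds = load_dataset("priyank-m/SROIE_2019_text_recognition", split="train")
sample = ds[0]
sample["image"].save("crop.png")  # PIL image of one cropped text region
print(sample["text"])             # its transcription
```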
priyank-m/SROIE_2019_text_recognition
[ "task_categories:image-to-text", "task_ids:image-captioning", "multilinguality:monolingual", "size_categories:10K<n<100K", "language:en", "text-recognition", "recognition", "region:us" ]
2022-08-27T19:56:31+00:00
{"annotations_creators": [], "language_creators": [], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": [], "task_categories": ["image-to-text"], "task_ids": ["image-captioning"], "pretty_name": "SROIE_2019_text_recognition", "tags": ["text-recognition", "recognition"]}
2022-08-27T20:38:24+00:00
[]
[ "en" ]
TAGS #task_categories-image-to-text #task_ids-image-captioning #multilinguality-monolingual #size_categories-10K<n<100K #language-English #text-recognition #recognition #region-us
This dataset was prepared from the Scanned Receipts OCR and Information Extraction (SROIE) dataset. The SROIE dataset contains 973 scanned receipts in English. Cropping the bounding boxes from each receipt to generate this text-recognition dataset resulted in 33,626 images for the train set and 18,704 images for the test set. The text annotations for all the images inside a split are stored in a URL file. Usage: from datasets import load_dataset; data = load_dataset("priyank-m/SROIE_2019_text_recognition") Source of the raw SROIE dataset: URL
[]
[ "TAGS\n#task_categories-image-to-text #task_ids-image-captioning #multilinguality-monolingual #size_categories-10K<n<100K #language-English #text-recognition #recognition #region-us \n" ]
[ 63 ]
[ "passage: TAGS\n#task_categories-image-to-text #task_ids-image-captioning #multilinguality-monolingual #size_categories-10K<n<100K #language-English #text-recognition #recognition #region-us \n" ]
ae9e759dd31d60479354cc06e4f4291c0c27bbca
# Unsplash Lite Dataset Photos This dataset is linked to the Unsplash Lite dataset containing data on 25K images from Unsplash. The dataset here only includes data from a single file `photos.tsv000`. The dataset builder script streams this data directly from the Unsplash 25K dataset source. For full details, please see the [Unsplash Dataset GitHub repo](https://github.com/unsplash/datasets), or read the preview (copied from the repo) below. --- # The Unsplash Dataset ![](https://unsplash.com/blog/content/images/2020/08/dataheader.jpg) The Unsplash Dataset is made up of over 250,000+ contributing global photographers and data sourced from hundreds of millions of searches across a nearly unlimited number of uses and contexts. Due to the breadth of intent and semantics contained within the Unsplash dataset, it enables new opportunities for research and learning. The Unsplash Dataset is offered in two datasets: - the Lite dataset: available for commercial and noncommercial usage, containing 25k nature-themed Unsplash photos, 25k keywords, and 1M searches - the Full dataset: available for noncommercial usage, containing 3M+ high-quality Unsplash photos, 5M keywords, and over 250M searches As the Unsplash library continues to grow, we'll release updates to the dataset with new fields and new images, with each subsequent release being [semantically versioned](https://semver.org/). We welcome any feedback regarding the content of the datasets or their format. With your input, we hope to close the gap between the data we provide and the data that you would like to leverage. You can [open an issue](https://github.com/unsplash/datasets/issues/new/choose) to report a problem or to let us know what you would like to see in the next release of the datasets. For more on the Unsplash Dataset, see [our announcement](https://unsplash.com/blog/the-unsplash-dataset/) and [site](https://unsplash.com/data). ## Download ### Lite Dataset The Lite dataset contains all of the same fields as the Full dataset, but is limited to ~25,000 photos. It can be used for both commercial and non-commercial usage, provided you abide by [the terms](https://github.com/unsplash/datasets/blob/master/TERMS.md). [⬇️ Download the Lite dataset](https://unsplash.com/data/lite/latest) [~650MB compressed, ~1.4GB raw] ### Full Dataset The Full dataset is available for non-commercial usage and all uses must abide by [the terms](https://github.com/unsplash/datasets/blob/master/TERMS.md). To access, please go to [unsplash.com/data](https://unsplash.com/data) and request access. The dataset weighs ~20 GB compressed (~43GB raw). ## Documentation See the [documentation for a complete list of tables and fields](https://github.com/unsplash/datasets/blob/master/DOCS.md). ## Usage You can follow these examples to load the dataset in these common formats: - [Load the dataset in a PostgreSQL database](https://github.com/unsplash/datasets/tree/master/how-to/psql) - [Load the dataset in a Python environment](https://github.com/unsplash/datasets/tree/master/how-to/python) - [Submit an example doc](https://github.com/unsplash/datasets/blob/master/how-to/README.md#submit-an-example) ## Share your work We're making this data open and available with the hopes of enabling researchers and developers to discover interesting and useful connections in the data. We'd love to see what you create, whether that's a research paper, a machine learning model, a blog post, or just an interesting discovery in the data. Send us an email at [[email protected]](mailto:[email protected]). If you're using the dataset in a research paper, you can attribute the dataset as `Unsplash Lite Dataset 1.2.0` or `Unsplash Full Dataset 1.2.0` and link to the permalink [`unsplash.com/data`](https://unsplash.com/data). ---- The Unsplash Dataset is made available for research purposes. [It cannot be used to redistribute the images contained within](https://github.com/unsplash/datasets/blob/master/TERMS.md). To use the Unsplash library in a product, see [the Unsplash API](https://unsplash.com/developers). ![](https://unsplash.com/blog/content/images/2020/08/footer-alt.jpg)
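Back to this mirror specifically: since the builder streams only `photos.tsv000`, the same file can also be inspected locally with a plain pandas read. The column names are whatever the Unsplash DOCS.md defines, so the ones shown in the comment below are illustrative only:

```python
# Inspect the raw TSV that the dataset builder streams (sketch).
import pandas as pd

photos = pd.read_csv("photos.tsv000", sep="\t")
print(len(photos))               # ~25k rows in the Lite release
print(list(photos.columns)[:5])  # e.g. photo_id, photo_url, ... (illustrative)
```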
jamescalam/unsplash-25k-photos
[ "task_categories:image-to-image", "task_categories:image-classification", "task_categories:image-to-text", "task_categories:text-to-image", "task_categories:zero-shot-image-classification", "annotations_creators:found", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "language:en", "images", "unsplash", "photos", "region:us" ]
2022-08-27T21:01:09+00:00
{"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": [], "task_categories": ["image-to-image", "image-classification", "image-to-text", "text-to-image", "zero-shot-image-classification"], "task_ids": [], "pretty_name": "Unsplash Lite 25K Photos", "tags": ["images", "unsplash", "photos"]}
2022-09-13T12:02:46+00:00
[]
[ "en" ]
TAGS #task_categories-image-to-image #task_categories-image-classification #task_categories-image-to-text #task_categories-text-to-image #task_categories-zero-shot-image-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #language-English #images #unsplash #photos #region-us
# Unsplash Lite Dataset Photos This dataset is linked to the Unsplash Lite dataset containing data on 25K images from Unsplash. The dataset here only includes data from a single file 'photos.tsv000'. The dataset builder script streams this data directly from the Unsplash 25K dataset source. For full details, please see the Unsplash Dataset GitHub repo, or read the preview (copied from the repo) below. --- # The Unsplash Dataset ![](URL The Unsplash Dataset is made up of over 250,000+ contributing global photographers and data sourced from hundreds of millions of searches across a nearly unlimited number of uses and contexts. Due to the breadth of intent and semantics contained within the Unsplash dataset, it enables new opportunities for research and learning. The Unsplash Dataset is offered in two datasets: - the Lite dataset: available for commercial and noncommercial usage, containing 25k nature-themed Unsplash photos, 25k keywords, and 1M searches - the Full dataset: available for noncommercial usage, containing 3M+ high-quality Unsplash photos, 5M keywords, and over 250M searches As the Unsplash library continues to grow, we'll release updates to the dataset with new fields and new images, with each subsequent release being semantically versioned. We welcome any feedback regarding the content of the datasets or their format. With your input, we hope to close the gap between the data we provide and the data that you would like to leverage. You can open an issue to report a problem or to let us know what you would like to see in the next release of the datasets. For more on the Unsplash Dataset, see our announcement and site. ## Download ### Lite Dataset The Lite dataset contains all of the same fields as the Full dataset, but is limited to ~25,000 photos. It can be used for both commercial and non-commercial usage, provided you abide by the terms. ⬇️ Download the Lite dataset [~650MB compressed, ~1.4GB raw] ### Full Dataset The Full dataset is available for non-commercial usage and all uses must abide by the terms. To access, please go to URL and request access. The dataset weighs ~20 GB compressed (~43GB raw). ## Documentation See the documentation for a complete list of tables and fields. ## Usage You can follow these examples to load the dataset in these common formats: - Load the dataset in a PostgreSQL database - Load the dataset in a Python environment - Submit an example doc ## Share your work We're making this data open and available with the hopes of enabling researchers and developers to discover interesting and useful connections in the data. We'd love to see what you create, whether that's a research paper, a machine learning model, a blog post, or just an interesting discovery in the data. Send us an email at data@URL. If you're using the dataset in a research paper, you can attribute the dataset as 'Unsplash Lite Dataset 1.2.0' or 'Unsplash Full Dataset 1.2.0' and link to the permalink 'URL ---- The Unsplash Dataset is made available for research purposes. It cannot be used to redistribute the images contained within. To use the Unsplash library in a product, see the Unsplash API. ![](URL
[ "# Unsplash Lite Dataset Photos\n\nThis dataset is linked to the Unsplash Lite dataset containing data on 25K images from Unsplash. The dataset here only includes data from a single file 'photos.tsv000'. The dataset builder script streams this data directly from the Unsplash 25K dataset source.\n\nFor full details, please see the Unsplash Dataset GitHub repo, or read the preview (copied from the repo) below.\n\n---", "# The Unsplash Dataset\n\n![](URL\n\nThe Unsplash Dataset is made up of over 250,000+ contributing global photographers and data sourced from hundreds of millions of searches across a nearly unlimited number of uses and contexts. Due to the breadth of intent and semantics contained within the Unsplash dataset, it enables new opportunities for research and learning.\n\nThe Unsplash Dataset is offered in two datasets:\n\n- the Lite dataset: available for commercial and noncommercial usage, containing 25k nature-themed Unsplash photos, 25k keywords, and 1M searches\n- the Full dataset: available for noncommercial usage, containing 3M+ high-quality Unsplash photos, 5M keywords, and over 250M searches\n\nAs the Unsplash library continues to grow, we'll release updates to the dataset with new fields and new images, with each subsequent release being semantically versioned.\n\nWe welcome any feedback regarding the content of the datasets or their format. With your input, we hope to close the gap between the data we provide and the data that you would like to leverage. You can open an issue to report a problem or to let us know what you would like to see in the next release of the datasets.\n\nFor more on the Unsplash Dataset, see our announcement and site.", "## Download", "### Lite Dataset\n\nThe Lite dataset contains all of the same fields as the Full dataset, but is limited to ~25,000 photos. It can be used for both commercial and non-commercial usage, provided you abide by the terms.\n\n⬇️ Download the Lite dataset [~650MB compressed, ~1.4GB raw]", "### Full Dataset\n\nThe Full dataset is available for non-commercial usage and all uses must abide by the terms. To access, please go to URL and request access. The dataset weighs ~20 GB compressed (~43GB raw).", "## Documentation\n\nSee the documentation for a complete list of tables and fields.", "## Usage\n\nYou can follow these examples to load the dataset in these common formats:\n\n- Load the dataset in a PostgreSQL database\n- Load the dataset in a Python environment\n- Submit an example doc", "## Share your work\n\nWe're making this data open and available with the hopes of enabling researchers and developers to discover interesting and useful connections in the data.\n\nWe'd love to see what you create, whether that's a research paper, a machine learning model, a blog post, or just an interesting discovery in the data. Send us an email at data@URL.\n\nIf you're using the dataset in a research paper, you can attribute the dataset as 'Unsplash Lite Dataset 1.2.0' or 'Unsplash Full Dataset 1.2.0' and link to the permalink 'URL\n\n----\n\nThe Unsplash Dataset is made available for research purposes. It cannot be used to redistribute the images contained within. To use the Unsplash library in a product, see the Unsplash API.\n\n![](URL" ]
[ "TAGS\n#task_categories-image-to-image #task_categories-image-classification #task_categories-image-to-text #task_categories-text-to-image #task_categories-zero-shot-image-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #language-English #images #unsplash #photos #region-us \n", "# Unsplash Lite Dataset Photos\n\nThis dataset is linked to the Unsplash Lite dataset containing data on 25K images from Unsplash. The dataset here only includes data from a single file 'photos.tsv000'. The dataset builder script streams this data directly from the Unsplash 25K dataset source.\n\nFor full details, please see the Unsplash Dataset GitHub repo, or read the preview (copied from the repo) below.\n\n---", "# The Unsplash Dataset\n\n![](URL\n\nThe Unsplash Dataset is made up of over 250,000+ contributing global photographers and data sourced from hundreds of millions of searches across a nearly unlimited number of uses and contexts. Due to the breadth of intent and semantics contained within the Unsplash dataset, it enables new opportunities for research and learning.\n\nThe Unsplash Dataset is offered in two datasets:\n\n- the Lite dataset: available for commercial and noncommercial usage, containing 25k nature-themed Unsplash photos, 25k keywords, and 1M searches\n- the Full dataset: available for noncommercial usage, containing 3M+ high-quality Unsplash photos, 5M keywords, and over 250M searches\n\nAs the Unsplash library continues to grow, we'll release updates to the dataset with new fields and new images, with each subsequent release being semantically versioned.\n\nWe welcome any feedback regarding the content of the datasets or their format. With your input, we hope to close the gap between the data we provide and the data that you would like to leverage. You can open an issue to report a problem or to let us know what you would like to see in the next release of the datasets.\n\nFor more on the Unsplash Dataset, see our announcement and site.", "## Download", "### Lite Dataset\n\nThe Lite dataset contains all of the same fields as the Full dataset, but is limited to ~25,000 photos. It can be used for both commercial and non-commercial usage, provided you abide by the terms.\n\n⬇️ Download the Lite dataset [~650MB compressed, ~1.4GB raw]", "### Full Dataset\n\nThe Full dataset is available for non-commercial usage and all uses must abide by the terms. To access, please go to URL and request access. The dataset weighs ~20 GB compressed (~43GB raw).", "## Documentation\n\nSee the documentation for a complete list of tables and fields.", "## Usage\n\nYou can follow these examples to load the dataset in these common formats:\n\n- Load the dataset in a PostgreSQL database\n- Load the dataset in a Python environment\n- Submit an example doc", "## Share your work\n\nWe're making this data open and available with the hopes of enabling researchers and developers to discover interesting and useful connections in the data.\n\nWe'd love to see what you create, whether that's a research paper, a machine learning model, a blog post, or just an interesting discovery in the data. Send us an email at data@URL.\n\nIf you're using the dataset in a research paper, you can attribute the dataset as 'Unsplash Lite Dataset 1.2.0' or 'Unsplash Full Dataset 1.2.0' and link to the permalink 'URL\n\n----\n\nThe Unsplash Dataset is made available for research purposes. It cannot be used to redistribute the images contained within. To use the Unsplash library in a product, see the Unsplash API.\n\n![](URL" ]
[ 120, 108, 307, 2, 76, 59, 17, 48, 184 ]
[ "passage: TAGS\n#task_categories-image-to-image #task_categories-image-classification #task_categories-image-to-text #task_categories-text-to-image #task_categories-zero-shot-image-classification #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #language-English #images #unsplash #photos #region-us \n# Unsplash Lite Dataset Photos\n\nThis dataset is linked to the Unsplash Lite dataset containing data on 25K images from Unsplash. The dataset here only includes data from a single file 'photos.tsv000'. The dataset builder script streams this data directly from the Unsplash 25K dataset source.\n\nFor full details, please see the Unsplash Dataset GitHub repo, or read the preview (copied from the repo) below.\n\n---" ]
82e568dfe8ee3e016c18290dbbbddd010479eb87
30,000 256x256 mel spectrograms of 5-second samples that have been used in music, sourced from [WhoSampled](https://whosampled.com) and [YouTube](https://youtube.com). The code to convert from audio to spectrogram and vice versa can be found in https://github.com/teticio/audio-diffusion along with scripts to train and run inference using De-noising Diffusion Probabilistic Models. ``` x_res = 256 y_res = 256 sample_rate = 22050 n_fft = 2048 hop_length = 512 ```
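The parameters above map directly onto a standard librosa mel-spectrogram call. The sketch below covers only the audio-to-spectrogram direction (the canonical, invertible conversion lives in the linked repository), and the input filename is a placeholder:

```python
# Audio -> log-mel spectrogram with the parameters above (sketch).
import librosa
import numpy as np

y, _ = librosa.load("sample.wav", sr=22050)  # placeholder input file
mel = librosa.feature.melspectrogram(
    y=y, sr=22050, n_fft=2048, hop_length=512, n_mels=256)  # y_res = 256 mel bins
log_mel = librosa.power_to_db(mel, ref=np.max)
# x_res = 256 frames at hop 512 and 22050 Hz span 256*512/22050 ~= 5.9 s,
# i.e. roughly the 5-second samples described above, mapped to a 256x256 image.
```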
teticio/audio-diffusion-breaks-256
[ "task_categories:image-to-image", "size_categories:10K<n<100K", "audio", "spectrograms", "region:us" ]
2022-08-27T21:11:40+00:00
{"annotations_creators": [], "language_creators": [], "language": [], "license": [], "multilinguality": [], "size_categories": ["10K<n<100K"], "source_datasets": [], "task_categories": ["image-to-image"], "task_ids": [], "pretty_name": "Mel spectrograms of sampled music", "tags": ["audio", "spectrograms"]}
2022-11-09T10:50:38+00:00
[]
[]
TAGS #task_categories-image-to-image #size_categories-10K<n<100K #audio #spectrograms #region-us
30,000 256x256 mel spectrograms of 5-second samples that have been used in music, sourced from WhoSampled and YouTube. The code to convert from audio to spectrogram and vice versa can be found in URL along with scripts to train and run inference using De-noising Diffusion Probabilistic Models.
[]
[ "TAGS\n#task_categories-image-to-image #size_categories-10K<n<100K #audio #spectrograms #region-us \n" ]
[ 38 ]
[ "passage: TAGS\n#task_categories-image-to-image #size_categories-10K<n<100K #audio #spectrograms #region-us \n" ]
5cadc7b30860162ea82aa2729102c02485d152b3
The CC12M captions from flax-community/conceptual-captions-12, translated from English to Korean.
QuoQA-NLP/KoCC12M
[ "region:us" ]
2022-08-28T05:30:31+00:00
{}
2022-08-28T05:44:47+00:00
[]
[]
TAGS #region-us
The CC12M captions from flax-community/conceptual-captions-12, translated from English to Korean.
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
71d5c298b9dc85f34b468eb393301fa436405bbb
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: nlpconnect/deberta-v3-xsmall-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@ankur310974](https://huggingface.co/ankur310974) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-squad_v2-c78baf7d-13885910
[ "autotrain", "evaluation", "region:us" ]
2022-08-28T09:49:42+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "nlpconnect/deberta-v3-xsmall-squad2", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-08-28T09:52:35+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Question Answering * Model: nlpconnect/deberta-v3-xsmall-squad2 * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @ankur310974 for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: nlpconnect/deberta-v3-xsmall-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ankur310974 for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: nlpconnect/deberta-v3-xsmall-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ankur310974 for evaluating this model." ]
[ 13, 100, 18 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: nlpconnect/deberta-v3-xsmall-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @ankur310974 for evaluating this model." ]
7ad42c0cbd4e102579d6323231e05a87c739318b
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: nlpconnect/deberta-v3-xsmall-squad2 * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@ankur310794](https://huggingface.co/ankur310794) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-squad-4690f1f9-13895911
[ "autotrain", "evaluation", "region:us" ]
2022-08-28T09:49:47+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "nlpconnect/deberta-v3-xsmall-squad2", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-08-28T09:52:24+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Question Answering * Model: nlpconnect/deberta-v3-xsmall-squad2 * Dataset: squad * Config: plain_text * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @ankur310794 for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: nlpconnect/deberta-v3-xsmall-squad2\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ankur310794 for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: nlpconnect/deberta-v3-xsmall-squad2\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @ankur310794 for evaluating this model." ]
[ 13, 95, 17 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: nlpconnect/deberta-v3-xsmall-squad2\n* Dataset: squad\n* Config: plain_text\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @ankur310794 for evaluating this model." ]
2afbf37414683a8ad881fe0dc8913b1f246b9aa7
English Debate Motions gathered by the University of Tokyo Debate Society. @misc{english-debate-motions-utds, title={english-debate-motions-utds}, author={members of the University of Tokyo Debate Society}, year={2022}, }
kokhayas/english-debate-motions-utds
[ "region:us" ]
2022-08-28T11:54:21+00:00
{}
2022-08-30T02:18:43+00:00
[]
[]
TAGS #region-us
English Debate Motions gathered by the University of Tokyo Debate Society. @misc{english-debate-motions-utds, title={english-debate-motions-utds}, author={members of the University of Tokyo Debate Society}, year={2022}, }
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
72aa912bbf09c96c6cf38bb76bec24e8d8a82367
# Dataset Card for "UnpredicTable-full" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Repository:** https://github.com/AnonCodeShare/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. * [UnpredicTable-unique](https://huggingface.co/datasets/unpredictable/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/unpredictable/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/unpredictable/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-support-google-com](https://huggingface.co/datasets/unpredictable/unpredictable_support-google-com) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000s of tasks, while each task has only a few examples, compared to most current NLP datasets, which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a JSON Lines file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field.
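As an illustration of this structure, here is one hypothetical example and a simple way to render several of them into a single few-shot prompt; the field names come from this card, while the values and the row serialization are invented:

```python
# One hypothetical few-shot example in the format described above.
example = {
    "task": "capital_of_country",                   # task identifier
    "input": "country: Spain | continent: Europe",  # row cells (invented serialization)
    "options": [],                                  # non-empty only for multiple choice
    "output": "Madrid",                             # target column of the same row
    "pageTitle": "List of capitals",
    "outputColName": "capital",
}

def to_prompt(examples):
    # Concatenate examples into one few-shot prompt; the final output is left blank.
    shots = [f"Input: {e['input']}\nOutput: {e['output']}" for e in examples[:-1]]
    shots.append(f"Input: {examples[-1]['input']}\nOutput:")
    return "\n\n".join(shots)
```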
The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. 
### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Licensing Information Apache 2.0
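To make the schema above concrete, here is a minimal inspection sketch. It is illustrative, not canonical: the `train` split name is an assumption (the card states no additional splits are provided), and field values such as 'input' may be lists rather than plain strings.

```python
from datasets import load_dataset

# Assumption: the data is exposed as a single "train" split, since the card
# states that no additional data splits are provided. The full variant is
# large (413,299 tasks), so streaming avoids downloading everything up front.
ds = load_dataset("unpredictable/unpredictable_full", split="train", streaming=True)

example = next(iter(ds))

# Core few-shot fields documented under "Data Fields".
print(example["task"])     # task identifier
print(example["input"])    # column elements of one table row
print(example["options"])  # candidate classes (multiple-choice tasks only)
print(example["output"])   # target column element of the same row

# Additional metadata fields.
for key in ("pageTitle", "title", "outputColName", "url", "wdcFile"):
    print(key, "->", example.get(key))
```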
unpredictable/unpredictable_full
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "region:us" ]
2022-08-28T15:35:07+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-full"}
2022-08-28T17:42:31+00:00
[]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #region-us
# Dataset Card for "UnpredicTable-full" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on the website of origin: * UnpredicTable-support-google-com ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. 
## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from WDC Web Table Corpora. ### Personal and Sensitive Information The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Licensing Information Apache 2.0
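The card's stated intended use is fine-tuning/pre-training to improve few-shot performance. A hedged sketch of such a fine-tuning loop with the `transformers` Trainer follows; the base model (`gpt2`), the prompt serialization in `to_text`, the split name, and all hyperparameters are illustrative assumptions, not the setup from the paper.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# The smaller 5k variant keeps this sketch cheap; "train" split is assumed.
ds = load_dataset("unpredictable/unpredictable_5k", split="train")

tok = AutoTokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token  # GPT-2 has no pad token by default

def to_text(ex):
    # Hypothetical serialization of one example into a training string;
    # 'input' may be a list of column elements, which the f-string renders as-is.
    return {"text": f"Task: {ex['task']}\nInput: {ex['input']}\nOutput: {ex['output']}"}

def tokenize(batch):
    return tok(batch["text"], truncation=True, max_length=512)

tokenized = ds.map(to_text)
tokenized = tokenized.map(tokenize, batched=True,
                          remove_columns=tokenized.column_names)

trainer = Trainer(
    model=AutoModelForCausalLM.from_pretrained("gpt2"),
    args=TrainingArguments(output_dir="unpredictable-ft",
                           num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```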
[ "# Dataset Card for \"UnpredicTable-full\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-support-google-com", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. 
\n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Licensing Information\nApache 2.0" ]
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #region-us \n", "# Dataset Card for \"UnpredicTable-full\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-support-google-com", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. 
Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. 
This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Licensing Information\nApache 2.0" ]
[ 316, 27, 112, 28, 227, 169, 5, 6, 205, 122, 23, 5, 141, 4, 109, 23, 130, 8, 97, 112, 13, 5, 9 ]
[ "passage: TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #region-us \n# Dataset Card for \"UnpredicTable-full\" - Dataset of Few-shot Tasks from Tables## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information## Dataset Description\n\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data", "passage: ### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-support-google-com### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.### Languages\n\nEnglish## Dataset Structure### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. 
Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "passage: ### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.## Dataset Creation### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.### Source Data#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.## Considerations for Using the Data" ]
ec38db9a85ca5dca7ef9211bbb73cc27e1a47208
# Dataset Card for "UnpredicTable-5k" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Repository:** https://github.com/AnonCodeShare/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. * [UnpredicTable-unique](https://huggingface.co/datasets/unpredictable/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/unpredictable/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/unpredictable/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-support-google-com](https://huggingface.co/datasets/unpredictable/unpredictable_support-google-com) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. 
The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. 
### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Licensing Information Apache 2.0
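The card notes that the several examples sharing a 'task' identifier "can be concatenated as a few-shot task". The sketch below shows one hedged way to do that; the prompt template, the `train` split name, and the assumption that at least one task carries four or more examples are all illustrative choices, not part of the dataset specification.

```python
from collections import defaultdict
from datasets import load_dataset

# Assumption: a single "train" split, as with the other UnpredicTable variants.
ds = load_dataset("unpredictable/unpredictable_5k", split="train")

# Group examples by their 'task' identifier.
by_task = defaultdict(list)
for ex in ds:
    by_task[ex["task"]].append(ex)

# Pick a task with enough examples for a 3-shot prompt (assumed to exist).
task_id, examples = next((t, xs) for t, xs in by_task.items() if len(xs) >= 4)

def render(ex, with_answer=True):
    # Hypothetical prompt template; the dataset does not prescribe one.
    lines = [f"Input: {ex['input']}"]
    if ex.get("options"):  # present for multiple-choice classification tasks
        lines.append("Options: " + ", ".join(map(str, ex["options"])))
    lines.append(f"Output: {ex['output'] if with_answer else ''}".rstrip())
    return "\n".join(lines)

shots, query = examples[:3], examples[3]
prompt = "\n\n".join([render(s) for s in shots] + [render(query, with_answer=False)])
print(f"# few-shot prompt for task {task_id}\n{prompt}")
```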
unpredictable/unpredictable_5k
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "region:us" ]
2022-08-28T16:37:14+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-5k"}
2022-08-28T17:13:41+00:00
[]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #region-us
# Dataset Card for "UnpredicTable-5k" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on the website of origin: * UnpredicTable-support-google-com ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. 
## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from WDC Web Table Corpora. ### Personal and Sensitive Information The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. ### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Licensing Information Apache 2.0
[ "# Dataset Card for \"UnpredicTable-5k\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-support-google-com", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. 
\n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Licensing Information\nApache 2.0" ]
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #region-us \n", "# Dataset Card for \"UnpredicTable-5k\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-support-google-com", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. 
Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. 
This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Licensing Information\nApache 2.0" ]
[ 316, 27, 112, 28, 227, 169, 5, 6, 205, 122, 23, 5, 141, 4, 109, 23, 130, 8, 97, 112, 13, 5, 9 ]
[ "passage: TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #region-us \n# Dataset Card for \"UnpredicTable-5k\" - Dataset of Few-shot Tasks from Tables## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information## Dataset Description\n\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data", "passage: ### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-support-google-com### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.### Languages\n\nEnglish## Dataset Structure### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. 
Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "passage: ### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.## Dataset Creation### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.### Source Data#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.## Considerations for Using the Data" ]
76db35834d995d0bd5d14d1352277461fe3f225f
# Dataset Card for "UnpredicTable-support-google-com" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Repository:** https://github.com/AnonCodeShare/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. * [UnpredicTable-unique](https://huggingface.co/datasets/unpredictable/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/unpredictable/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/unpredictable/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-support-google-com](https://huggingface.co/datasets/unpredictable/unpredictable_support-google-com) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. 
The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. 
### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Licensing Information Apache 2.0
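As a quick usage sketch for the card above: the Hub id below is the one given for this subset, while the `"train"` split name is an assumption on my part, since the card states that no additional data splits are provided, and the printed field values are whatever the first example happens to contain.

```python
from datasets import load_dataset

# Minimal loading sketch for this subset. The Hub id comes from the card above;
# the "train" split name is an assumption, since the card defines no extra splits.
ds = load_dataset("unpredictable/unpredictable_support-google-com", split="train")

# Inspect one few-shot example; the field names follow the Data Fields section.
example = ds[0]
for field in ("task", "input", "options", "output",
              "pageTitle", "outputColName", "url", "wdcFile"):
    print(field, "->", example.get(field))
```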
unpredictable/unpredictable_support-google-com
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "region:us" ]
2022-08-28T17:12:13+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-support-google-com"}
2022-08-28T17:25:26+00:00
[]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #region-us
# Dataset Card for "UnpredicTable-support-google-com" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on the website of origin: * UnpredicTable-support-google-com ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. 
## Dataset Creation

### Curation Rationale

Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.

### Source Data

#### Initial Data Collection and Normalization

We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.

#### Who are the source language producers?

The dataset is extracted from WDC Web Table Corpora.

### Personal and Sensitive Information

The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.

### Discussion of Biases

Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.

### Other Known Limitations

No additional known limitations.

## Additional Information

### Licensing Information
Apache 2.0
[ "# Dataset Card for \"UnpredicTable-support-google-com\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-support-google-com", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. 
\n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Licensing Information\nApache 2.0" ]
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #region-us \n", "# Dataset Card for \"UnpredicTable-support-google-com\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-support-google-com", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. 
Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. 
This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Licensing Information\nApache 2.0" ]
[ 316, 31, 112, 28, 227, 169, 5, 6, 205, 122, 23, 5, 141, 4, 109, 23, 130, 8, 97, 112, 13, 5, 9 ]
[ "passage: TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #region-us \n# Dataset Card for \"UnpredicTable-support-google-com\" - Dataset of Few-shot Tasks from Tables## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information## Dataset Description\n\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data", "passage: ### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-support-google-com### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.### Languages\n\nEnglish## Dataset Structure### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. 
Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "passage: ### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.## Dataset Creation### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.### Source Data#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.## Considerations for Using the Data" ]
7b0b1a6c2c61cc1f9304725ceb54c826be65816f
# Dataset Card for "UnpredicTable-unique" - Dataset of Few-shot Tasks from Tables ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Repository:** https://github.com/AnonCodeShare/few-shot-adaptation - **Paper:** Few-shot Adaptation Works with UnpredicTable Data ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full): Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full), which comprises 413,299 tasks from 23,744 unique websites. * [UnpredicTable-unique](https://huggingface.co/datasets/unpredictable/unpredictable_unique): This is the same as [UnpredicTable-full](https://huggingface.co/datasets/unpredictable/unpredictable_full) but filtered to have a maximum of one task per website. [UnpredicTable-unique](https://huggingface.co/datasets/unpredictable/unpredictable_unique) contains exactly 23,744 tasks from 23,744 websites. * [UnpredicTable-5k](https://huggingface.co/datasets/unpredictable/unpredictable_5k): This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on the website of origin: * [UnpredicTable-support-google-com](https://huggingface.co/datasets/unpredictable/unpredictable_support-google-com) ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. 
The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. ## Dataset Creation ### Curation Rationale Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning. ### Source Data #### Initial Data Collection and Normalization We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (http://webdatacommons.org/webtables/2015/EnglishStatistics.html). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline. #### Who are the source language producers? The dataset is extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/). ### Personal and Sensitive Information The data was extracted from [WDC Web Table Corpora](http://webdatacommons.org/webtables/), which in turn extracted tables from the [Common Crawl](https://commoncrawl.org/). We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset. ## Considerations for Using the Data ### Social Impact of Dataset This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations. 
### Discussion of Biases Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset. ### Other Known Limitations No additional known limitations. ## Additional Information ### Licensing Information Apache 2.0
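To make the "concatenated as a few-shot task" step above concrete, here is a small sketch. Only the 'input'/'options'/'output' field names come from the card; the prompt template (the `Input:`/`Options:`/`Output:` labels and separators) and the demo rows are illustrative choices, not a format prescribed by the dataset.

```python
# Sketch of turning one task's examples into a few-shot prompt, using the
# 'input' / 'options' / 'output' fields described in the card. The template
# below (labels and separators) is illustrative, not prescribed.
def build_few_shot_prompt(examples):
    blocks = []
    for ex in examples:
        lines = [f"Input: {ex['input']}"]
        if ex.get("options"):  # present for multiple-choice classification
            lines.append("Options: " + " | ".join(ex["options"]))
        lines.append(f"Output: {ex['output']}")
        blocks.append("\n".join(lines))
    return "\n\n".join(blocks)

# Hypothetical rows, for illustration only.
demo = [
    {"input": "[Country] France [Capital] ?", "options": [], "output": "Paris"},
    {"input": "[Country] Japan [Capital] ?", "options": [], "output": "Tokyo"},
]
print(build_few_shot_prompt(demo))
```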
unpredictable/unpredictable_unique
[ "task_categories:multiple-choice", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:text2text-generation", "task_categories:table-question-answering", "task_categories:text-generation", "task_categories:text-classification", "task_categories:tabular-classification", "task_ids:multiple-choice-qa", "task_ids:extractive-qa", "task_ids:open-domain-qa", "task_ids:closed-domain-qa", "task_ids:closed-book-qa", "task_ids:open-book-qa", "task_ids:language-modeling", "task_ids:multi-class-classification", "task_ids:natural-language-inference", "task_ids:topic-classification", "task_ids:multi-label-classification", "task_ids:tabular-multi-class-classification", "task_ids:tabular-multi-label-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "region:us" ]
2022-08-28T17:12:33+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["multiple-choice", "question-answering", "zero-shot-classification", "text2text-generation", "table-question-answering", "text-generation", "text-classification", "tabular-classification"], "task_ids": ["multiple-choice-qa", "extractive-qa", "open-domain-qa", "closed-domain-qa", "closed-book-qa", "open-book-qa", "language-modeling", "multi-class-classification", "natural-language-inference", "topic-classification", "multi-label-classification", "tabular-multi-class-classification", "tabular-multi-label-classification"], "pretty_name": "UnpredicTable-unique"}
2022-08-28T17:26:18+00:00
[]
[ "en" ]
TAGS #task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #region-us
# Dataset Card for "UnpredicTable-unique" - Dataset of Few-shot Tasks from Tables ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Repository: URL - Paper: Few-shot Adaptation Works with UnpredicTable Data ### Dataset Summary The UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance. There are several dataset versions available: * UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites. * UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites. * UnpredicTable-5k: This dataset contains 5k random tables from the full dataset. * UnpredicTable data subsets based on the website of origin: * UnpredicTable-support-google-com ### Supported Tasks and Leaderboards Since the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc. The intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset. ### Languages English ## Dataset Structure ### Data Instances Each task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from. There are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'. ### Data Fields 'task': task identifier 'input': column elements of a specific row in the table. 'options': for multiple choice classification, it provides the options to choose from. 'output': target column element of the same row as input. 'pageTitle': the title of the page containing the table. 'outputColName': output column name 'url': url to the website containing the table 'wdcFile': WDC Web Table Corpus file ### Data Splits The UnpredicTable datasets do not come with additional data splits. 
## Dataset Creation

### Curation Rationale

Few-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.

### Source Data

#### Initial Data Collection and Normalization

We use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL). The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks (a rough illustrative sketch is given at the end of this card); please see our publication for more details on the data collection and conversion pipeline.

#### Who are the source language producers?

The dataset is extracted from WDC Web Table Corpora.

### Personal and Sensitive Information

The data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.

### Discussion of Biases

Since our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets, nor have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.

### Other Known Limitations

No additional known limitations.

## Additional Information

### Licensing Information

Apache 2.0
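As a rough illustration of the tables-to-tasks conversion described under "Initial Data Collection and Normalization" above — a minimal sketch under assumed names, not the authors' published pipeline — the snippet below turns one chosen table column into prediction targets and the remaining column values of each row into inputs:

```python
from typing import Dict, List

def table_to_task(rows: List[Dict[str, str]], output_col: str) -> List[dict]:
    """Turn a web table into few-shot examples: for every row, the chosen
    output column becomes the target and the remaining column values become
    the input. A rough sketch, not the published conversion pipeline."""
    options = sorted({row[output_col] for row in rows})  # candidate classes
    examples = []
    for row in rows:
        input_text = " ".join(
            f"[{col}] {val}" for col, val in row.items() if col != output_col
        )
        examples.append(
            {"input": input_text, "options": options, "output": row[output_col]}
        )
    return examples

# Toy usage on a two-row table:
rows = [
    {"City": "Oslo", "Country": "Norway"},
    {"City": "Lyon", "Country": "France"},
]
print(table_to_task(rows, output_col="Country"))
```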
[ "# Dataset Card for \"UnpredicTable-unique\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-support-google-com", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. 
\n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Licensing Information\nApache 2.0" ]
[ "TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #region-us \n", "# Dataset Card for \"UnpredicTable-unique\" - Dataset of Few-shot Tasks from Tables", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data", "### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-support-google-com", "### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. 
Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.", "### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.", "## Dataset Creation", "### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.", "### Source Data", "#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.", "#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.", "### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThis dataset is intended for use as a research resource to investigate the relationship between training data and few-shot learning. As such, it contains high- and low-quality data, as well as diverse content that may be untruthful or inappropriate. Without careful investigation, it should not be used for training models that will be deployed for use in decision-critical or user-facing situations.", "### Discussion of Biases\n\nSince our dataset contains tables that are scraped from the web, it will also contain many toxic, racist, sexist, and otherwise harmful biases and texts. We have not run any analysis on the biases prevalent in our datasets. Neither have we explicitly filtered the content. 
This implies that a model trained on our dataset may potentially reflect harmful biases and toxic text that exist in our dataset.", "### Other Known Limitations\n\nNo additional known limitations.", "## Additional Information", "### Licensing Information\nApache 2.0" ]
[ 316, 28, 112, 28, 227, 169, 5, 6, 205, 122, 23, 5, 141, 4, 109, 23, 130, 8, 97, 112, 13, 5, 9 ]
[ "passage: TAGS\n#task_categories-multiple-choice #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text2text-generation #task_categories-table-question-answering #task_categories-text-generation #task_categories-text-classification #task_categories-tabular-classification #task_ids-multiple-choice-qa #task_ids-extractive-qa #task_ids-open-domain-qa #task_ids-closed-domain-qa #task_ids-closed-book-qa #task_ids-open-book-qa #task_ids-language-modeling #task_ids-multi-class-classification #task_ids-natural-language-inference #task_ids-topic-classification #task_ids-multi-label-classification #task_ids-tabular-multi-class-classification #task_ids-tabular-multi-label-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-English #license-apache-2.0 #region-us \n# Dataset Card for \"UnpredicTable-unique\" - Dataset of Few-shot Tasks from Tables## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information## Dataset Description\n\n- Repository: URL\n- Paper: Few-shot Adaptation Works with UnpredicTable Data", "passage: ### Dataset Summary\n\nThe UnpredicTable dataset consists of web tables formatted as few-shot tasks for fine-tuning language models to improve their few-shot performance.\n\nThere are several dataset versions available:\n\n* UnpredicTable-full: Starting from the initial WTC corpus of 50M tables, we apply our tables-to-tasks procedure to produce our resulting dataset, UnpredicTable-full, which comprises 413,299 tasks from 23,744 unique websites.\n\n* UnpredicTable-unique: This is the same as UnpredicTable-full but filtered to have a maximum of one task per website. UnpredicTable-unique contains exactly 23,744 tasks from 23,744 websites.\n\n* UnpredicTable-5k: This dataset contains 5k random tables from the full dataset.\n\n* UnpredicTable data subsets based on the website of origin:\n * UnpredicTable-support-google-com### Supported Tasks and Leaderboards\n\nSince the tables come from the web, the distribution of tasks and topics is very broad. The shape of our dataset is very wide, i.e., we have 1000's of tasks, while each task has only a few examples, compared to most current NLP datasets which are very deep, i.e., 10s of tasks with many examples. This implies that our dataset covers a broad range of potential tasks, e.g., multiple-choice, question-answering, table-question-answering, text-classification, etc.\n\nThe intended use of this dataset is to improve few-shot performance by fine-tuning/pre-training on our dataset.### Languages\n\nEnglish## Dataset Structure### Data Instances\n\nEach task is represented as a jsonline file and consists of several few-shot examples. Each example is a dictionary containing a field 'task', which identifies the task, followed by an 'input', 'options', and 'output' field. The 'input' field contains several column elements of the same row in the table, while the 'output' field is a target which represents an individual column of the same row. 
Each task contains several such examples which can be concatenated as a few-shot task. In the case of multiple choice classification, the 'options' field contains the possible classes that a model needs to choose from.\n\nThere are also additional meta-data fields such as 'pageTitle', 'title', 'outputColName', 'url', 'wdcFile'.### Data Fields\n\n'task': task identifier\n\n'input': column elements of a specific row in the table. \n\n'options': for multiple choice classification, it provides the options to choose from.\n\n'output': target column element of the same row as input.\n\n'pageTitle': the title of the page containing the table. \n\n'outputColName': output column name\n\n'url': url to the website containing the table\n\n'wdcFile': WDC Web Table Corpus file", "passage: ### Data Splits\n\nThe UnpredicTable datasets do not come with additional data splits.## Dataset Creation### Curation Rationale\n\nFew-shot training on multi-task datasets has been demonstrated to improve language models' few-shot learning (FSL) performance on new tasks, but it is unclear which training tasks lead to effective downstream task adaptation. Few-shot learning datasets are typically produced with expensive human curation, limiting the scale and diversity of the training tasks available to study. As an alternative source of few-shot data, we automatically extract 413,299 tasks from diverse internet tables. We provide this as a research resource to investigate the relationship between training data and few-shot learning.### Source Data#### Initial Data Collection and Normalization\n\nWe use internet tables from the English-language Relational Subset of the WDC Web Table Corpus 2015 (WTC). The WTC dataset tables were extracted from the July 2015 Common Crawl web corpus (URL The dataset contains 50,820,165 tables from 323,160 web domains. We then convert the tables into few-shot learning tasks. Please see our publication for more details on the data collection and conversion pipeline.#### Who are the source language producers?\n\nThe dataset is extracted from WDC Web Table Corpora.### Personal and Sensitive Information\n\nThe data was extracted from WDC Web Table Corpora, which in turn extracted tables from the Common Crawl. We did not filter the data in any way. Thus any user identities or otherwise sensitive information (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history, etc.) might be contained in our dataset.## Considerations for Using the Data" ]
5f17b065b8739c725a84d3a6965ed7f040cdae04
The Wikipedia fine-tuning data used to train visual features for the adaptation of vision-and-language models to text-only tasks in the paper "How to Adapt Pre-trained Vision-and-Language Models to a Text-only Input?". The data was created from the "20200501.en" revision of the [wikipedia dataset](https://huggingface.co/datasets/wikipedia) on Hugging Face.
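For reference, the source corpus named above can be inspected with the datasets library; this is a minimal sketch assuming the legacy "wikipedia" loader still serves the "20200501.en" configuration.

```python
import datasets

# Load the "20200501.en" Wikipedia revision named above; this is the source
# corpus, not the finetune data itself. A sketch assuming the legacy
# "wikipedia" loader still serves this configuration.
wiki = datasets.load_dataset("wikipedia", "20200501.en", split="train")
print(wiki[0]["title"])
```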
Lo/adapt-pre-trained-VL-models-to-text-data-Wikipedia-finetune
[ "multilinguality:monolingual", "language:en", "license:cc-by-sa-3.0", "region:us" ]
2022-08-29T07:17:43+00:00
{"language": ["en"], "license": ["cc-by-sa-3.0"], "multilinguality": ["monolingual"]}
2022-08-29T07:27:33+00:00
[]
[ "en" ]
TAGS #multilinguality-monolingual #language-English #license-cc-by-sa-3.0 #region-us
The Wikipedia fine-tuning data used to train visual features for the adaptation of vision-and-language models to text-only tasks in the paper "How to Adapt Pre-trained Vision-and-Language Models to a Text-only Input?". The data was created from the "URL" revision of the wikipedia dataset on Hugging Face.
[]
[ "TAGS\n#multilinguality-monolingual #language-English #license-cc-by-sa-3.0 #region-us \n" ]
[ 29 ]
[ "passage: TAGS\n#multilinguality-monolingual #language-English #license-cc-by-sa-3.0 #region-us \n" ]
d6fe56688ae0435f11bcc1860fe7de01e0d3ffe4
The LXMERT text training data used to train BERT-base baselines and adapt vision-and-language models to text-only tasks in the paper "How to Adapt Pre-trained Vision-and-Language Models to a Text-only Input?". The data was created from the data made available by the [LXMERT repo](https://github.com/airsplay/lxmert).
Lo/adapt-pre-trained-VL-models-to-text-data-LXMERT
[ "multilinguality:monolingual", "language:en", "license:mit", "region:us" ]
2022-08-29T07:19:10+00:00
{"language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"]}
2022-08-29T07:30:05+00:00
[]
[ "en" ]
TAGS #multilinguality-monolingual #language-English #license-mit #region-us
The LXMERT text training data used to train BERT-base baselines and adapt vision-and-language models to text-only tasks in the paper "How to Adapt Pre-trained Vision-and-Language Models to a Text-only Input?". The data was created from the data made available by the LXMERT repo.
[]
[ "TAGS\n#multilinguality-monolingual #language-English #license-mit #region-us \n" ]
[ 23 ]
[ "passage: TAGS\n#multilinguality-monolingual #language-English #license-mit #region-us \n" ]
ea1623c9c1f7b042aff76cbcf1ca5c0a3ef8e114
The LXMERT text fine-tuning data used to train visual features for the adaptation of vision-and-language models to text-only tasks in the paper "How to Adapt Pre-trained Vision-and-Language Models to a Text-only Input?". The data was created from the data made available by the [LXMERT repo](https://github.com/airsplay/lxmert).
Lo/adapt-pre-trained-VL-models-to-text-data-LXMERT-finetune
[ "multilinguality:monolingual", "language:en", "license:mit", "region:us" ]
2022-08-29T07:20:45+00:00
{"language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"]}
2022-08-29T07:31:45+00:00
[]
[ "en" ]
TAGS #multilinguality-monolingual #language-English #license-mit #region-us
The LXMERT text fine-tuning data used to train visual features for the adaptation of vision-and-language models to text-only tasks in the paper "How to Adapt Pre-trained Vision-and-Language Models to a Text-only Input?". The data was created from the data made available by the LXMERT repo.
[]
[ "TAGS\n#multilinguality-monolingual #language-English #license-mit #region-us \n" ]
[ 23 ]
[ "passage: TAGS\n#multilinguality-monolingual #language-English #license-mit #region-us \n" ]
683b752aaead07750f544d18639ee871f912a697
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Binary Text Classification * Model: autoevaluate/binary-classification * Dataset: glue * Config: sst2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
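The stored predictions can, in principle, be pulled down like any other dataset repository; the following is a minimal sketch using the repository id shown below, assuming its files can be read by the standard datasets API (which may not hold for every predictions format).

```python
import datasets

# A sketch of loading the stored predictions from this repository;
# assumes the standard datasets API can read the stored format.
preds = datasets.load_dataset(
    "autoevaluate/autoeval-staging-eval-project-glue-f7900ebf-13965913"
)
print(preds)
```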
autoevaluate/autoeval-staging-eval-project-glue-f7900ebf-13965913
[ "autotrain", "evaluation", "region:us" ]
2022-08-29T08:37:05+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "autoevaluate/binary-classification", "metrics": [], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}}
2022-08-29T08:37:29+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Binary Text Classification * Model: autoevaluate/binary-classification * Dataset: glue * Config: sst2 * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: autoevaluate/binary-classification\n* Dataset: glue\n* Config: sst2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: autoevaluate/binary-classification\n* Dataset: glue\n* Config: sst2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 89, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: autoevaluate/binary-classification\n* Dataset: glue\n* Config: sst2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
513ed4cfbc29df4be9c167bef472b3a4aeae7dca
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Natural Language Inference * Model: autoevaluate/glue-mrpc * Dataset: glue * Config: mrpc * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-glue-e9a4b61a-13985914
[ "autotrain", "evaluation", "region:us" ]
2022-08-29T09:05:27+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "natural_language_inference", "model": "autoevaluate/glue-mrpc", "metrics": [], "dataset_name": "glue", "dataset_config": "mrpc", "dataset_split": "validation", "col_mapping": {"text1": "sentence1", "text2": "sentence2", "target": "label"}}}
2022-08-29T09:05:51+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Natural Language Inference * Model: autoevaluate/glue-mrpc * Dataset: glue * Config: mrpc * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: autoevaluate/glue-mrpc\n* Dataset: glue\n* Config: mrpc\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: autoevaluate/glue-mrpc\n* Dataset: glue\n* Config: mrpc\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 88, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: autoevaluate/glue-mrpc\n* Dataset: glue\n* Config: mrpc\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
6b3840bc7bb94a480e42c79200caf31a3b598fd1
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Natural Language Inference * Model: autoevaluate/glue-qqp * Dataset: glue * Config: qqp * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-glue-4805e982-13995915
[ "autotrain", "evaluation", "region:us" ]
2022-08-29T09:05:33+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "natural_language_inference", "model": "autoevaluate/glue-qqp", "metrics": [], "dataset_name": "glue", "dataset_config": "qqp", "dataset_split": "validation", "col_mapping": {"text1": "question1", "text2": "question2", "target": "label"}}}
2022-08-29T09:07:21+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Natural Language Inference * Model: autoevaluate/glue-qqp * Dataset: glue * Config: qqp * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: autoevaluate/glue-qqp\n* Dataset: glue\n* Config: qqp\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: autoevaluate/glue-qqp\n* Dataset: glue\n* Config: qqp\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 89, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: autoevaluate/glue-qqp\n* Dataset: glue\n* Config: qqp\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
9cee6f8497cb95ce974e7e7e511c347c5a572d8f
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: autoevaluate/extractive-question-answering * Dataset: autoevaluate/squad-sample * Config: autoevaluate--squad-sample * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-autoevaluate__squad-sample-11b52eb1-14005916
[ "autotrain", "evaluation", "region:us" ]
2022-08-29T09:24:43+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["autoevaluate/squad-sample"], "eval_info": {"task": "extractive_question_answering", "model": "autoevaluate/extractive-question-answering", "metrics": [], "dataset_name": "autoevaluate/squad-sample", "dataset_config": "autoevaluate--squad-sample", "dataset_split": "test", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-08-29T09:25:07+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Question Answering * Model: autoevaluate/extractive-question-answering * Dataset: autoevaluate/squad-sample * Config: autoevaluate--squad-sample * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/extractive-question-answering\n* Dataset: autoevaluate/squad-sample\n* Config: autoevaluate--squad-sample\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/extractive-question-answering\n* Dataset: autoevaluate/squad-sample\n* Config: autoevaluate--squad-sample\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 104, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: autoevaluate/extractive-question-answering\n* Dataset: autoevaluate/squad-sample\n* Config: autoevaluate--squad-sample\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
ec3c96f7624cc7b419297c51779b9800826a818c
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Natural Language Inference * Model: mrm8488/deberta-v3-small-finetuned-mrpc * Dataset: glue * Config: mrpc * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-glue-f16e6c43-14015917
[ "autotrain", "evaluation", "region:us" ]
2022-08-29T11:06:48+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "natural_language_inference", "model": "mrm8488/deberta-v3-small-finetuned-mrpc", "metrics": [], "dataset_name": "glue", "dataset_config": "mrpc", "dataset_split": "validation", "col_mapping": {"text1": "sentence1", "text2": "sentence2", "target": "label"}}}
2022-08-29T11:07:18+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Natural Language Inference * Model: mrm8488/deberta-v3-small-finetuned-mrpc * Dataset: glue * Config: mrpc * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: mrm8488/deberta-v3-small-finetuned-mrpc\n* Dataset: glue\n* Config: mrpc\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: mrm8488/deberta-v3-small-finetuned-mrpc\n* Dataset: glue\n* Config: mrpc\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 98, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: mrm8488/deberta-v3-small-finetuned-mrpc\n* Dataset: glue\n* Config: mrpc\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
64dea239da2de88405fb3120dc26f511eaff7891
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: anindabitm/sagemaker-distilbert-emotion * Dataset: emotion * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-emotion-af6a16fe-14025918
[ "autotrain", "evaluation", "region:us" ]
2022-08-29T11:06:53+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "anindabitm/sagemaker-distilbert-emotion", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}}
2022-08-29T11:07:19+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Text Classification * Model: anindabitm/sagemaker-distilbert-emotion * Dataset: emotion * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: anindabitm/sagemaker-distilbert-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: anindabitm/sagemaker-distilbert-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 92, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: anindabitm/sagemaker-distilbert-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
3160df47c1c1eef5087fa86fb551b61adfe2f552
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: ARTeLab/it5-summarization-fanpage * Dataset: samsum * Config: samsum * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-samsum-afdf25d0-14035919
[ "autotrain", "evaluation", "region:us" ]
2022-08-29T11:26:34+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "ARTeLab/it5-summarization-fanpage", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-08-29T11:27:41+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: ARTeLab/it5-summarization-fanpage * Dataset: samsum * Config: samsum * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: ARTeLab/it5-summarization-fanpage\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: ARTeLab/it5-summarization-fanpage\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 88, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: ARTeLab/it5-summarization-fanpage\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
a4895d7e5d6f96414fce19ef999a68f0adc509e9
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Summarization * Model: Ameer05/bart-large-cnn-samsum-rescom-finetuned-resume-summarizer-10-epoch * Dataset: samsum * Config: samsum * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-samsum-afdf25d0-14035921
[ "autotrain", "evaluation", "region:us" ]
2022-08-29T11:26:43+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "Ameer05/bart-large-cnn-samsum-rescom-finetuned-resume-summarizer-10-epoch", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}}
2022-08-29T11:29:49+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Summarization * Model: Ameer05/bart-large-cnn-samsum-rescom-finetuned-resume-summarizer-10-epoch * Dataset: samsum * Config: samsum * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Ameer05/bart-large-cnn-samsum-rescom-finetuned-resume-summarizer-10-epoch\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Ameer05/bart-large-cnn-samsum-rescom-finetuned-resume-summarizer-10-epoch\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 107, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: Ameer05/bart-large-cnn-samsum-rescom-finetuned-resume-summarizer-10-epoch\n* Dataset: samsum\n* Config: samsum\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
60630ce757b999088709d5d6816592c9b7fdbd89
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: Adrian/distilbert-base-uncased-finetuned-squad * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-squad_v2-82949658-14045922
[ "autotrain", "evaluation", "region:us" ]
2022-08-29T11:26:48+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "Adrian/distilbert-base-uncased-finetuned-squad", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-08-29T11:29:21+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Question Answering * Model: Adrian/distilbert-base-uncased-finetuned-squad * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: Adrian/distilbert-base-uncased-finetuned-squad\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: Adrian/distilbert-base-uncased-finetuned-squad\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 101, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: Adrian/distilbert-base-uncased-finetuned-squad\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
f94df08f28998f2e61b9017f89692664e0530679
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Question Answering * Model: Aiyshwariya/bert-finetuned-squad * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-squad_v2-82949658-14045923
[ "autotrain", "evaluation", "region:us" ]
2022-08-29T11:27:22+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "Aiyshwariya/bert-finetuned-squad", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}}
2022-08-29T11:30:22+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Question Answering * Model: Aiyshwariya/bert-finetuned-squad * Dataset: squad_v2 * Config: squad_v2 * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: Aiyshwariya/bert-finetuned-squad\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: Aiyshwariya/bert-finetuned-squad\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 96, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: Aiyshwariya/bert-finetuned-squad\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
c82c3e92c8ce1011435ff34246d830634d4f3ab3
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Translation * Model: Lvxue/finetuned-mt5-small-10epoch * Dataset: wmt16 * Config: de-en * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-wmt16-a5e2262a-14055924
[ "autotrain", "evaluation", "region:us" ]
2022-08-29T11:27:26+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["wmt16"], "eval_info": {"task": "translation", "model": "Lvxue/finetuned-mt5-small-10epoch", "metrics": [], "dataset_name": "wmt16", "dataset_config": "de-en", "dataset_split": "test", "col_mapping": {"source": "translation.en", "target": "translation.de"}}}
2022-08-29T11:28:47+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Translation * Model: Lvxue/finetuned-mt5-small-10epoch * Dataset: wmt16 * Config: de-en * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Translation\n* Model: Lvxue/finetuned-mt5-small-10epoch\n* Dataset: wmt16\n* Config: de-en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Translation\n* Model: Lvxue/finetuned-mt5-small-10epoch\n* Dataset: wmt16\n* Config: de-en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 94, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Translation\n* Model: Lvxue/finetuned-mt5-small-10epoch\n* Dataset: wmt16\n* Config: de-en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
d04e8305a7b1fe40ced830c06b1b435aa0252f6a
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Binary Text Classification * Model: mrm8488/deberta-v3-small-finetuned-cola * Dataset: glue * Config: cola * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-glue-c88eb4d4-14065928
[ "autotrain", "evaluation", "region:us" ]
2022-08-29T11:27:29+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "mrm8488/deberta-v3-small-finetuned-cola", "metrics": [], "dataset_name": "glue", "dataset_config": "cola", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}}
2022-08-29T11:27:58+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Binary Text Classification * Model: mrm8488/deberta-v3-small-finetuned-cola * Dataset: glue * Config: cola * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: mrm8488/deberta-v3-small-finetuned-cola\n* Dataset: glue\n* Config: cola\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: mrm8488/deberta-v3-small-finetuned-cola\n* Dataset: glue\n* Config: cola\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 96, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: mrm8488/deberta-v3-small-finetuned-cola\n* Dataset: glue\n* Config: cola\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
a45bcb2ef853109b882d5f6c7cb99c3bd54bb223
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Natural Language Inference * Model: mrm8488/deberta-v3-large-finetuned-mnli * Dataset: glue * Config: mnli * Split: validation_matched To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-glue-ca80bfc9-14105932
[ "autotrain", "evaluation", "region:us" ]
2022-08-29T11:27:57+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "natural_language_inference", "model": "mrm8488/deberta-v3-large-finetuned-mnli", "metrics": [], "dataset_name": "glue", "dataset_config": "mnli", "dataset_split": "validation_matched", "col_mapping": {"text1": "premise", "text2": "hypothesis", "target": "label"}}}
2022-08-29T11:31:56+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Natural Language Inference * Model: mrm8488/deberta-v3-large-finetuned-mnli * Dataset: glue * Config: mnli * Split: validation_matched To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: mrm8488/deberta-v3-large-finetuned-mnli\n* Dataset: glue\n* Config: mnli\n* Split: validation_matched\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: mrm8488/deberta-v3-large-finetuned-mnli\n* Dataset: glue\n* Config: mnli\n* Split: validation_matched\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 102, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: mrm8488/deberta-v3-large-finetuned-mnli\n* Dataset: glue\n* Config: mnli\n* Split: validation_matched\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
81d0f6caa3ab9c6300a0bab43cfb0fdc10d53b05
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Natural Language Inference * Model: mrm8488/deberta-v3-small-finetuned-qnli * Dataset: glue * Config: qnli * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-glue-91d4fe29-14115933
[ "autotrain", "evaluation", "region:us" ]
2022-08-29T11:28:08+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "natural_language_inference", "model": "mrm8488/deberta-v3-small-finetuned-qnli", "metrics": [], "dataset_name": "glue", "dataset_config": "qnli", "dataset_split": "validation", "col_mapping": {"text1": "question", "text2": "sentence", "target": "label"}}}
2022-08-29T11:28:56+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Natural Language Inference * Model: mrm8488/deberta-v3-small-finetuned-qnli * Dataset: glue * Config: qnli * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: mrm8488/deberta-v3-small-finetuned-qnli\n* Dataset: glue\n* Config: qnli\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: mrm8488/deberta-v3-small-finetuned-qnli\n* Dataset: glue\n* Config: qnli\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 100, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: mrm8488/deberta-v3-small-finetuned-qnli\n* Dataset: glue\n* Config: qnli\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
ca26aa07d44b0cf23ae600e6fcf1690a0c2992c5
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Natural Language Inference * Model: Intel/roberta-base-mrpc * Dataset: glue * Config: mrpc * Split: train To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-glue-f56b6c46-14085930
[ "autotrain", "evaluation", "region:us" ]
2022-08-29T11:28:15+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "natural_language_inference", "model": "Intel/roberta-base-mrpc", "metrics": [], "dataset_name": "glue", "dataset_config": "mrpc", "dataset_split": "train", "col_mapping": {"text1": "sentence1", "text2": "sentence2", "target": "label"}}}
2022-08-29T11:28:55+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Natural Language Inference * Model: Intel/roberta-base-mrpc * Dataset: glue * Config: mrpc * Split: train To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: Intel/roberta-base-mrpc\n* Dataset: glue\n* Config: mrpc\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: Intel/roberta-base-mrpc\n* Dataset: glue\n* Config: mrpc\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 87, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: Intel/roberta-base-mrpc\n* Dataset: glue\n* Config: mrpc\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
3034f92e343d8e9629ba792ece2bfbfb067a5181
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Natural Language Inference * Model: JeremiahZ/roberta-base-qqp * Dataset: glue * Config: qqp * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-glue-f1585abe-14095931
[ "autotrain", "evaluation", "region:us" ]
2022-08-29T11:28:15+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "natural_language_inference", "model": "JeremiahZ/roberta-base-qqp", "metrics": [], "dataset_name": "glue", "dataset_config": "qqp", "dataset_split": "validation", "col_mapping": {"text1": "question1", "text2": "question2", "target": "label"}}}
2022-08-29T11:31:25+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Natural Language Inference * Model: JeremiahZ/roberta-base-qqp * Dataset: glue * Config: qqp * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: JeremiahZ/roberta-base-qqp\n* Dataset: glue\n* Config: qqp\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: JeremiahZ/roberta-base-qqp\n* Dataset: glue\n* Config: qqp\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 91, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: JeremiahZ/roberta-base-qqp\n* Dataset: glue\n* Config: qqp\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
a4c35f2ecd42cb2bfca9ea1cda04793fae25b6b9
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Binary Text Classification * Model: mrm8488/deberta-v3-small-finetuned-sst2 * Dataset: glue * Config: sst2 * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-glue-f6cacc01-14075929
[ "autotrain", "evaluation", "region:us" ]
2022-08-29T11:28:20+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "mrm8488/deberta-v3-small-finetuned-sst2", "metrics": [], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}}
2022-08-29T11:28:52+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Binary Text Classification * Model: mrm8488/deberta-v3-small-finetuned-sst2 * Dataset: glue * Config: sst2 * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: mrm8488/deberta-v3-small-finetuned-sst2\n* Dataset: glue\n* Config: sst2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: mrm8488/deberta-v3-small-finetuned-sst2\n* Dataset: glue\n* Config: sst2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 100, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: mrm8488/deberta-v3-small-finetuned-sst2\n* Dataset: glue\n* Config: sst2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
2536141082d13670fa08230b1c7f2cd4c8ad43f1
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Natural Language Inference * Model: Alireza1044/mobilebert_rte * Dataset: glue * Config: rte * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-glue-67467c9c-14145936
[ "autotrain", "evaluation", "region:us" ]
2022-08-29T11:29:17+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "natural_language_inference", "model": "Alireza1044/mobilebert_rte", "metrics": [], "dataset_name": "glue", "dataset_config": "rte", "dataset_split": "validation", "col_mapping": {"text1": "sentence1", "text2": "sentence2", "target": "label"}}}
2022-08-29T11:29:38+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Natural Language Inference * Model: Alireza1044/mobilebert_rte * Dataset: glue * Config: rte * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: Alireza1044/mobilebert_rte\n* Dataset: glue\n* Config: rte\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: Alireza1044/mobilebert_rte\n* Dataset: glue\n* Config: rte\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 87, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: Alireza1044/mobilebert_rte\n* Dataset: glue\n* Config: rte\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
b090c700a076dcf043522e5ddce467f6add05a67
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Natural Language Inference * Model: JeremiahZ/roberta-base-rte * Dataset: glue * Config: rte * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-project-glue-67467c9c-14145935
[ "autotrain", "evaluation", "region:us" ]
2022-08-29T11:29:29+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "natural_language_inference", "model": "JeremiahZ/roberta-base-rte", "metrics": [], "dataset_name": "glue", "dataset_config": "rte", "dataset_split": "validation", "col_mapping": {"text1": "sentence1", "text2": "sentence2", "target": "label"}}}
2022-08-29T11:30:00+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Natural Language Inference * Model: JeremiahZ/roberta-base-rte * Dataset: glue * Config: rte * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: JeremiahZ/roberta-base-rte\n* Dataset: glue\n* Config: rte\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: JeremiahZ/roberta-base-rte\n* Dataset: glue\n* Config: rte\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 89, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: JeremiahZ/roberta-base-rte\n* Dataset: glue\n* Config: rte\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
269ed925eb51425013b692d0ac25ef66f51611d5
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: mrm8488/bert-mini-finetuned-age_news-classification * Dataset: ag_news * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-ag_news-default-684001-14155939
[ "autotrain", "evaluation", "region:us" ]
2022-08-29T11:47:10+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["ag_news"], "eval_info": {"task": "multi_class_classification", "model": "mrm8488/bert-mini-finetuned-age_news-classification", "metrics": [], "dataset_name": "ag_news", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}}
2022-08-29T11:47:36+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Text Classification * Model: mrm8488/bert-mini-finetuned-age_news-classification * Dataset: ag_news * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: mrm8488/bert-mini-finetuned-age_news-classification\n* Dataset: ag_news\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: mrm8488/bert-mini-finetuned-age_news-classification\n* Dataset: ag_news\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 97, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: mrm8488/bert-mini-finetuned-age_news-classification\n* Dataset: ag_news\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
01ca83ee3481af6129dca76258ee734f20013aa4
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Natural Language Inference * Model: andi611/distilbert-base-uncased-qa-boolq * Dataset: boolq * Config: default * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-boolq-default-049b58-14205948
[ "autotrain", "evaluation", "region:us" ]
2022-08-29T13:36:04+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["boolq"], "eval_info": {"task": "natural_language_inference", "model": "andi611/distilbert-base-uncased-qa-boolq", "metrics": [], "dataset_name": "boolq", "dataset_config": "default", "dataset_split": "validation", "col_mapping": {"text1": "question", "text2": "passage", "target": "answer"}}}
2022-08-29T13:36:34+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Natural Language Inference * Model: andi611/distilbert-base-uncased-qa-boolq * Dataset: boolq * Config: default * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: andi611/distilbert-base-uncased-qa-boolq\n* Dataset: boolq\n* Config: default\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: andi611/distilbert-base-uncased-qa-boolq\n* Dataset: boolq\n* Config: default\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 97, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: andi611/distilbert-base-uncased-qa-boolq\n* Dataset: boolq\n* Config: default\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
b509d87f11b98dee9d10d6f037479b98824e9fbe
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Multi-class Text Classification * Model: bergum/xtremedistil-emotion * Dataset: emotion * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model.
autoevaluate/autoeval-staging-eval-emotion-default-63bd40-14245951
[ "autotrain", "evaluation", "region:us" ]
2022-08-29T15:05:14+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "bergum/xtremedistil-emotion", "metrics": [], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}}
2022-08-29T15:05:35+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Multi-class Text Classification * Model: bergum/xtremedistil-emotion * Dataset: emotion * Config: default * Split: test To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @lewtun for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: bergum/xtremedistil-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: bergum/xtremedistil-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @lewtun for evaluating this model." ]
[ 13, 87, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: bergum/xtremedistil-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model." ]
062592d41bbc04c0715c50f75184907f2adc70ca
# Dataset Card for blogspot raw dataset ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset is a corpus of raw blogposts from [blogspot](https://blogger.com) mostly in the English language. It was obtained by scraping corpora of [webarchive](https://archive.org) and [commoncrawl](https://commoncrawl.org). ### Supported Tasks and Leaderboards The dataset may be used for training language models or serve other research interests. ### Languages Mostly English language, but some outliers may occur. ## Dataset Structure [Distribution](https://huggingface.co/datasets/mschi/blogspot_raw/blob/main/blospot_comm_dist.png) The distribution of the blog posts over time can be viewed at ./blogspot_dist_comm.png ### Data Instances [More Information Needed] ### Data Fields text: string URL: string date: string comment: int ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale The dataset was constructed by utilizing the [WARC-dl pipeline](https://github.com/webis-de/web-archive-keras). It was executed on cluster architecture. The corpora of archive.org and commoncrawl.org contain WARC files that contain HTML which gets parsed by the pipeline. The pipeline extracts HTML from the WARC files and applies distributed filtering to efficiently filter for the desired content. ### Source Data #### Initial Data Collection and Normalization The corpora "corpus-commoncrawl-main-2022-05" and "corpus-iwo-internet-archive-wide00001" have been searched for the content present in this dataset. Search terms have been inserted into the previously mentioned pipeline to filter URLs for "blogspot.com" and characteristic timestamp information contained in the URL (e.g. "/01/2007"). The HTML documents were parsed for specific tags to obtain the timestamps. Further, the data was labeled with the "comment" label if there were some comment markers in the URL, indicating that the retrieved text is from the main text of a blog post or from the comments section. The texts are stored raw and no further processing has been done. #### Who are the source language producers? Since [blogspot](https://blogger.com) provides a high-level framework to allow people everywhere in the world to set up and maintain a blog, the producers of the texts may not be further specified. ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators?
[More Information Needed] ### Personal and Sensitive Information Texts are raw and unfiltered, thus personal and sensitive information, as well as explicit language, may be present in the dataset. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases The retrieval of the timestamps from the HTML documents was not 100% accurate, so a small proportion of wrong or nonsense timestamps can be present in the data. Also, we cannot guarantee the correctness of the timestamps or the "comment" labels. ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators The dataset was constructed during the course "Big Data and Language Technologies" of the Text Mining and Retrieval Group, Department of Computer Science at the University of Leipzig. ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@jonaskonig](https://github.com/jonaskonig), [@maschirmer](https://github.com/maschirmer) and [@1BlattPapier](https://github.com/1BlattPapier) for contributing.
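The card above describes filtering URLs for "blogspot.com" plus date-like path segments, and deriving a "comment" label from comment markers in the URL. A minimal sketch of such a filter follows, assuming simple regex heuristics; the actual WARC-dl pipeline logic and the exact comment markers are not documented here, so the names and patterns below are illustrative assumptions.

```python
# A minimal sketch of the URL heuristics described in the card; the regexes
# and the 'comment' substring check are assumptions, not the pipeline's code.
import re

BLOGSPOT = re.compile(r"\bblogspot\.com\b")
# Date-like path segments such as "/01/2007" or "/2007/01/" hint at a dated post.
DATE_SEGMENT = re.compile(r"/\d{2}/\d{4}|/\d{4}/\d{2}/")

def looks_like_blog_post(url: str) -> bool:
    """True if the URL is a blogspot URL containing a date-like segment."""
    return bool(BLOGSPOT.search(url)) and bool(DATE_SEGMENT.search(url))

def comment_label(url: str) -> int:
    """Heuristic 'comment' label (int, as in the card's Data Fields): 1 if the
    URL carries a comment marker, e.g. a 'comment' substring as in
    'showComment' (an assumed marker), else 0."""
    return int("comment" in url.lower())

print(looks_like_blog_post("http://example.blogspot.com/2007/01/post.html"))        # True
print(comment_label("http://example.blogspot.com/2007/01/post.html?showComment=1"))  # 1
```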
mschi/blogspot_raw
[ "task_categories:text-classification", "task_categories:text-retrieval", "task_categories:text-generation", "task_categories:time-series-forecasting", "language_creators:other", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:en", "license:mit", "blogspot", "blogger", "texts", "region:us" ]
2022-08-29T17:19:04+00:00
{"annotations_creators": [], "language_creators": ["other"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-classification", "text-retrieval", "text-generation", "time-series-forecasting"], "task_ids": [], "pretty_name": "Blogspot_raw_texts", "tags": ["blogspot", "blogger", "texts"]}
2022-09-13T07:48:23+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_categories-text-retrieval #task_categories-text-generation #task_categories-time-series-forecasting #language_creators-other #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-mit #blogspot #blogger #texts #region-us
# Dataset Card for blogspot raw dataset ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary This dataset is a corpus of raw blogposts from blogspot mostly in the English language. It was obtained by scraping corpora of webarchive and commoncrawl. ### Supported Tasks and Leaderboards The dataset may be used for training language models or serve other research interests. ### Languages Mostly English language, but some outliers may occur. ## Dataset Structure Distribution The distribution of the blog posts over time can be viewed at ./blogspot_dist_comm.png ### Data Instances ### Data Fields text: string URL: string date: string comment: int ### Data Splits ## Dataset Creation ### Curation Rationale The dataset was constructed by utilizing the WARC-dl pipeline. It was executed on cluster architecture. The corpora of URL and URL contain WARC files that contain HTML which gets parsed by the pipeline. The pipeline extracts HTML from the WARC files and applies distributed filtering to efficiently filter for the desired content. ### Source Data #### Initial Data Collection and Normalization The corpora "corpus-commoncrawl-main-2022-05" and "corpus-iwo-internet-archive-wide00001" have been searched for the content present in this dataset. Search terms have been inserted into the previously mentioned pipeline to filter URLs for "URL" and characteristic timestamp information contained in the URL (e.g. "/01/2007"). The HTML documents were parsed for specific tags to obtain the timestamps. Further, the data was labeled with the "comment" label if there were some comment markers in the URL, indicating that the retrieved text is from the main text of a blog post or from the comments section. The texts are stored raw and no further processing has been done. #### Who are the source language producers? Since blogspot provides a high-level framework to allow people everywhere in the world to set up and maintain a blog, the producers of the texts may not be further specified. ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Texts are raw and unfiltered, thus personal and sensitive information, as well as explicit language, may be present in the dataset. ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases The retrieval of the timestamps from the HTML documents was not 100% accurate, so a small proportion of wrong or nonsense timestamps can be present in the data. Also, we cannot guarantee the correctness of the timestamps or the "comment" labels. ### Other Known Limitations ## Additional Information ### Dataset Curators The dataset was constructed during the course "Big Data and Language Technologies" of the Text Mining and Retrieval Group, Department of Computer Science at the University of Leipzig. ### Licensing Information ### Contributions Thanks to @jonaskonig, @maschirmer and @1BlattPapier for contributing.
[ "# Dataset Card for blogspot raw dataset", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nThis dataset is a corpus of raw blogposts from blogspot mostly in the English language. It was obtained by scraping corpora of webarchive and commoncrawl.", "### Supported Tasks and Leaderboards\n\nThe dataset may be used for training language models or serve other research interests.", "### Languages\n\nMostly English language, but some outliers may occur.", "## Dataset Structure\n\nDistribution\n\nThe distribution of the blog posts over time can be viewed at ./blogspot_dist_comm.png", "### Data Instances", "### Data Fields\n\n text: string\n URL: string\n date: string\n comment: int", "### Data Splits", "## Dataset Creation", "### Curation Rationale\n\nThe dataset was constructed by utilizing the WARC-dl pipeline. It was executed on cluster architecture. The corpora of URL and URL contain WARC files that contain HTML which gets parsed by the pipeline. The pipeline extracts HTML from the WARC files and applies distributed filtering to efficiently filter for the desired content.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe corpora \"corpus-commoncrawl-main-2022-05\" and \"corpus-iwo-internet-archive-wide00001\" have been searched for the content present in this dataset.\nSearch terms have been inserted into the preciously mentioned pipeline to filter URLs for \"URL\" and characteristic timestamp information contained in the URL (e.g. \"/01/2007\"). The HTML documents were parsed for specific tags to obtain the timestamps. Further, the data was labeled with the \"comment\" label if there were some comment markers in the URL, indicating that the retrieved text is from the main text of a blog post or from the comments section. The texts are stored raw and no further processing has been done.", "#### Who are the source language producers?\n\nSince blogspot provides a high-level framework to allow people everywhere in the world to set up and maintain a blog, the producers of the texts may not be further specified.", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\nTexts are raw and unfiltered, thus personal and sensitive information, as well as explicit language, may be present in the dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases\n\nThe retrieval of the timestamps from the HTML documents was not 100% accurate, so a small proportion of wrong or nonsense timestamps can be present in the data. 
Also we can not guarantee the correctness of the timestamps as well as the \"comment\" labels.", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThe dataset was constructed during the course \"Big Data and Language Technologies\" of the Text Mining and Retrieval Group, Department of Computer Science at the University of Leipzig.", "### Licensing Information", "### Contributions\n\nThanks to @jonaskonig, @maschirmer and @1BlattPapier for contributing." ]
[ "TAGS\n#task_categories-text-classification #task_categories-text-retrieval #task_categories-text-generation #task_categories-time-series-forecasting #language_creators-other #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-mit #blogspot #blogger #texts #region-us \n", "# Dataset Card for blogspot raw dataset", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nThis dataset is a corpus of raw blogposts from blogspot mostly in the English language. It was obtained by scraping corpora of webarchive and commoncrawl.", "### Supported Tasks and Leaderboards\n\nThe dataset may be used for training language models or serve other research interests.", "### Languages\n\nMostly English language, but some outliers may occur.", "## Dataset Structure\n\nDistribution\n\nThe distribution of the blog posts over time can be viewed at ./blogspot_dist_comm.png", "### Data Instances", "### Data Fields\n\n text: string\n URL: string\n date: string\n comment: int", "### Data Splits", "## Dataset Creation", "### Curation Rationale\n\nThe dataset was constructed by utilizing the WARC-dl pipeline. It was executed on cluster architecture. The corpora of URL and URL contain WARC files that contain HTML which gets parsed by the pipeline. The pipeline extracts HTML from the WARC files and applies distributed filtering to efficiently filter for the desired content.", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe corpora \"corpus-commoncrawl-main-2022-05\" and \"corpus-iwo-internet-archive-wide00001\" have been searched for the content present in this dataset.\nSearch terms have been inserted into the preciously mentioned pipeline to filter URLs for \"URL\" and characteristic timestamp information contained in the URL (e.g. \"/01/2007\"). The HTML documents were parsed for specific tags to obtain the timestamps. Further, the data was labeled with the \"comment\" label if there were some comment markers in the URL, indicating that the retrieved text is from the main text of a blog post or from the comments section. The texts are stored raw and no further processing has been done.", "#### Who are the source language producers?\n\nSince blogspot provides a high-level framework to allow people everywhere in the world to set up and maintain a blog, the producers of the texts may not be further specified.", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\nTexts are raw and unfiltered, thus personal and sensitive information, as well as explicit language, may be present in the dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases\n\nThe retrieval of the timestamps from the HTML documents was not 100% accurate, so a small proportion of wrong or nonsense timestamps can be present in the data. 
Also we can not guarantee the correctness of the timestamps as well as the \"comment\" labels.", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nThe dataset was constructed during the course \"Big Data and Language Technologies\" of the Text Mining and Retrieval Group, Department of Computer Science at the University of Leipzig.", "### Licensing Information", "### Contributions\n\nThanks to @jonaskonig, @maschirmer and @1BlattPapier for contributing." ]
[ 107, 10, 125, 24, 42, 27, 17, 32, 6, 18, 5, 5, 83, 4, 179, 48, 5, 5, 9, 38, 8, 7, 69, 7, 5, 44, 6, 28 ]
[ "passage: TAGS\n#task_categories-text-classification #task_categories-text-retrieval #task_categories-text-generation #task_categories-time-series-forecasting #language_creators-other #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-mit #blogspot #blogger #texts #region-us \n# Dataset Card for blogspot raw dataset## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:### Dataset Summary\n\nThis dataset is a corpus of raw blogposts from blogspot mostly in the English language. It was obtained by scraping corpora of webarchive and commoncrawl.### Supported Tasks and Leaderboards\n\nThe dataset may be used for training language models or serve other research interests.### Languages\n\nMostly English language, but some outliers may occur.## Dataset Structure\n\nDistribution\n\nThe distribution of the blog posts over time can be viewed at ./blogspot_dist_comm.png### Data Instances### Data Fields\n\n text: string\n URL: string\n date: string\n comment: int### Data Splits## Dataset Creation### Curation Rationale\n\nThe dataset was constructed by utilizing the WARC-dl pipeline. It was executed on cluster architecture. The corpora of URL and URL contain WARC files that contain HTML which gets parsed by the pipeline. The pipeline extracts HTML from the WARC files and applies distributed filtering to efficiently filter for the desired content.### Source Data" ]
c8b72f8c242a0d8e052de3041c50c5a5e8f2a38e
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Natural Language Inference * Model: MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli * Dataset: anli * Config: plain_text * Split: test_r3 To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@MoritzLaurer](https://huggingface.co/MoritzLaurer) for evaluating this model.
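A minimal sketch of reproducing a single prediction from this evaluation with standard `transformers`/`datasets` usage; only the model name and split come from the card, while the example index and the plain argmax inference loop are illustrative assumptions, not the AutoTrain job itself.

```python
# Sketch: score one (premise, hypothesis) pair from anli's test_r3 split with
# the NLI model named in the card; not the AutoTrain evaluation pipeline.
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

example = load_dataset("anli", split="test_r3")[0]

# NLI models classify a (premise, hypothesis) pair, matching anli's columns.
inputs = tokenizer(example["premise"], example["hypothesis"],
                   truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred], "| gold label id:", example["label"])
```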
autoevaluate/autoeval-staging-eval-anli-plain_text-c507f2-14355972
[ "autotrain", "evaluation", "region:us" ]
2022-08-29T19:24:33+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["anli"], "eval_info": {"task": "natural_language_inference", "model": "MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli", "metrics": [], "dataset_name": "anli", "dataset_config": "plain_text", "dataset_split": "test_r3", "col_mapping": {"text1": "premise", "text2": "hypothesis", "target": "label"}}}
2022-08-29T19:25:09+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Natural Language Inference * Model: MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli * Dataset: anli * Config: plain_text * Split: test_r3 To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @MoritzLaurer for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli\n* Dataset: anli\n* Config: plain_text\n* Split: test_r3\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @MoritzLaurer for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli\n* Dataset: anli\n* Config: plain_text\n* Split: test_r3\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @MoritzLaurer for evaluating this model." ]
[ 13, 104, 18 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli\n* Dataset: anli\n* Config: plain_text\n* Split: test_r3\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @MoritzLaurer for evaluating this model." ]
0aa1d1e2793c68feafc3ea0267ffbdbb6e145bd2
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Natural Language Inference * Model: MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli * Dataset: anli * Config: plain_text * Split: test_r2 To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@MoritzLaurer](https://huggingface.co/MoritzLaurer) for evaluating this model.
autoevaluate/autoeval-staging-eval-anli-plain_text-1f482c-14395973
[ "autotrain", "evaluation", "region:us" ]
2022-08-29T19:37:08+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["anli"], "eval_info": {"task": "natural_language_inference", "model": "MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli", "metrics": [], "dataset_name": "anli", "dataset_config": "plain_text", "dataset_split": "test_r2", "col_mapping": {"text1": "premise", "text2": "hypothesis", "target": "label"}}}
2022-08-29T19:37:44+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Natural Language Inference * Model: MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli * Dataset: anli * Config: plain_text * Split: test_r2 To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @MoritzLaurer for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli\n* Dataset: anli\n* Config: plain_text\n* Split: test_r2\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @MoritzLaurer for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli\n* Dataset: anli\n* Config: plain_text\n* Split: test_r2\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @MoritzLaurer for evaluating this model." ]
[ 13, 104, 18 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli\n* Dataset: anli\n* Config: plain_text\n* Split: test_r2\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @MoritzLaurer for evaluating this model." ]
53ffa6b0c5abc115794bc3ac6d4524487cf12499
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Natural Language Inference * Model: MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli * Dataset: anli * Config: plain_text * Split: test_r1 To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@MoritzLaurer](https://huggingface.co/MoritzLaurer) for evaluating this model.
autoevaluate/autoeval-staging-eval-anli-plain_text-dfb10f-14405974
[ "autotrain", "evaluation", "region:us" ]
2022-08-29T19:37:13+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["anli"], "eval_info": {"task": "natural_language_inference", "model": "MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli", "metrics": [], "dataset_name": "anli", "dataset_config": "plain_text", "dataset_split": "test_r1", "col_mapping": {"text1": "premise", "text2": "hypothesis", "target": "label"}}}
2022-08-29T19:37:45+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Natural Language Inference * Model: MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli * Dataset: anli * Config: plain_text * Split: test_r1 To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @MoritzLaurer for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli\n* Dataset: anli\n* Config: plain_text\n* Split: test_r1\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @MoritzLaurer for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli\n* Dataset: anli\n* Config: plain_text\n* Split: test_r1\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @MoritzLaurer for evaluating this model." ]
[ 13, 104, 18 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli\n* Dataset: anli\n* Config: plain_text\n* Split: test_r1\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @MoritzLaurer for evaluating this model." ]
2b52953aaf495435ed9e0a4beeaf3190b7149f09
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Natural Language Inference * Model: MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli * Dataset: multi_nli * Config: default * Split: validation_matched To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@MoritzLaurer](https://huggingface.co/MoritzLaurer) for evaluating this model.
autoevaluate/autoeval-staging-eval-multi_nli-default-68c6a6-14415975
[ "autotrain", "evaluation", "region:us" ]
2022-08-29T19:49:38+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["multi_nli"], "eval_info": {"task": "natural_language_inference", "model": "MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli", "metrics": [], "dataset_name": "multi_nli", "dataset_config": "default", "dataset_split": "validation_matched", "col_mapping": {"text1": "premise", "text2": "hypothesis", "target": "label"}}}
2022-08-29T19:51:17+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Natural Language Inference * Model: MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli * Dataset: multi_nli * Config: default * Split: validation_matched To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @MoritzLaurer for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli\n* Dataset: multi_nli\n* Config: default\n* Split: validation_matched\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @MoritzLaurer for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli\n* Dataset: multi_nli\n* Config: default\n* Split: validation_matched\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @MoritzLaurer for evaluating this model." ]
[ 13, 105, 18 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli\n* Dataset: multi_nli\n* Config: default\n* Split: validation_matched\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @MoritzLaurer for evaluating this model." ]
b6aeba317590bd7a8fb11ba1d41bbcb1788dd388
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Natural Language Inference * Model: MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli * Dataset: multi_nli * Config: default * Split: validation_mismatched To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@MoritzLaurer](https://huggingface.co/MoritzLaurer) for evaluating this model.
autoevaluate/autoeval-staging-eval-multi_nli-default-4a02ee-14425976
[ "autotrain", "evaluation", "region:us" ]
2022-08-29T19:49:40+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["multi_nli"], "eval_info": {"task": "natural_language_inference", "model": "MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli", "metrics": [], "dataset_name": "multi_nli", "dataset_config": "default", "dataset_split": "validation_mismatched", "col_mapping": {"text1": "premise", "text2": "hypothesis", "target": "label"}}}
2022-08-29T19:51:17+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Natural Language Inference * Model: MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli * Dataset: multi_nli * Config: default * Split: validation_mismatched To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @MoritzLaurer for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli\n* Dataset: multi_nli\n* Config: default\n* Split: validation_mismatched\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @MoritzLaurer for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli\n* Dataset: multi_nli\n* Config: default\n* Split: validation_mismatched\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @MoritzLaurer for evaluating this model." ]
[ 13, 106, 18 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli\n* Dataset: multi_nli\n* Config: default\n* Split: validation_mismatched\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @MoritzLaurer for evaluating this model." ]
4d80aed9505bdcfd4f7bfa577c66467fb71db4c2
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Natural Language Inference * Model: Intel/roberta-base-mrpc * Dataset: glue * Config: mrpc * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xinhe](https://huggingface.co/xinhe) for evaluating this model.
autoevaluate/autoeval-staging-eval-glue-mrpc-4a87ed-14445977
[ "autotrain", "evaluation", "region:us" ]
2022-08-30T01:39:34+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "natural_language_inference", "model": "Intel/roberta-base-mrpc", "metrics": [], "dataset_name": "glue", "dataset_config": "mrpc", "dataset_split": "validation", "col_mapping": {"text1": "sentence1", "text2": "sentence2", "target": "label"}}}
2022-08-30T01:40:01+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Natural Language Inference * Model: Intel/roberta-base-mrpc * Dataset: glue * Config: mrpc * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @xinhe for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: Intel/roberta-base-mrpc\n* Dataset: glue\n* Config: mrpc\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @xinhe for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: Intel/roberta-base-mrpc\n* Dataset: glue\n* Config: mrpc\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @xinhe for evaluating this model." ]
[ 13, 88, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: Intel/roberta-base-mrpc\n* Dataset: glue\n* Config: mrpc\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @xinhe for evaluating this model." ]
19ab9a4e0a4ad3dce1adbc4f0e6595d7c9ebc0d9
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset: * Task: Natural Language Inference * Model: Intel/bert-base-uncased-mrpc * Dataset: glue * Config: mrpc * Split: validation To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator). ## Contributions Thanks to [@xinhe](https://huggingface.co/xinhe) for evaluating this model.
autoevaluate/autoeval-staging-eval-glue-mrpc-71a11b-14455978
[ "autotrain", "evaluation", "region:us" ]
2022-08-30T01:39:36+00:00
{"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "natural_language_inference", "model": "Intel/bert-base-uncased-mrpc", "metrics": [], "dataset_name": "glue", "dataset_config": "mrpc", "dataset_split": "validation", "col_mapping": {"text1": "sentence1", "text2": "sentence2", "target": "label"}}}
2022-08-30T01:40:01+00:00
[]
[]
TAGS #autotrain #evaluation #region-us
# Dataset Card for AutoTrain Evaluator This repository contains model predictions generated by AutoTrain for the following task and dataset: * Task: Natural Language Inference * Model: Intel/bert-base-uncased-mrpc * Dataset: glue * Config: mrpc * Split: validation To run new evaluation jobs, visit Hugging Face's automatic model evaluator. ## Contributions Thanks to @xinhe for evaluating this model.
[ "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: Intel/bert-base-uncased-mrpc\n* Dataset: glue\n* Config: mrpc\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @xinhe for evaluating this model." ]
[ "TAGS\n#autotrain #evaluation #region-us \n", "# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: Intel/bert-base-uncased-mrpc\n* Dataset: glue\n* Config: mrpc\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.", "## Contributions\n\nThanks to @xinhe for evaluating this model." ]
[ 13, 91, 15 ]
[ "passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: Intel/bert-base-uncased-mrpc\n* Dataset: glue\n* Config: mrpc\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @xinhe for evaluating this model." ]
64a374c848cde26885e77f50fa7de87d58697d5d
# national_library_of_korea_book_info

## Dataset Description
- **Homepage** [Culture Big Data Platform (๋ฌธํ™” ๋น…๋ฐ์ดํ„ฐ ํ”Œ๋žซํผ)](https://www.culture.go.kr/bigdata/user/data_market/detail.do?id=63513d7b-9b87-4ec1-a398-0a18ecc45411)
- **Download Size** 759 MB
- **Generated Size** 2.33 GB
- **Total Size** 3.09 GB

Data on the books held by the National Library of Korea, distributed by the library.

### License

other ([KOGL](https://www.kogl.or.kr/info/license.do#05-tab) (Korea Open Government License) Type-1)

![KOGL_image](https://www.kogl.or.kr/images/front/sub/img_opencode1_m_en.jpg)

- According to the above KOGL, users can use public works freely and without fee regardless of commercial use, and can change or modify them to create secondary works, provided they comply with the terms below:

<details>
<summary>KOGL Type 1</summary>

1. Source Indication Liability

- Users who use public works shall indicate source or copyright as follows:
- EX : “000(public institution's name)'s public work is used according to KOGL”
- The link shall be provided when online hyperlink for the source website is available.
- Marking shall not be used to misguide the third party that the user is sponsored by public institution or user has a special relationship with public institutions.

2. Use Prohibited Information

- Personal information that is protected by Personal Information Protection Act, Promotion for Information Network Use and Information Protection Act, etc.
- Credit information protected by the Use and Protection of Credit Information Act, etc.
- Military secrets protected by Military Secret Protection Act, etc.
- Information that is the object of other rights such as trademark right, design right, design right or patent right, etc., or that is owned by third party's copyright.
- Other information that is use prohibited information according to other laws.

3. Public Institution's Liability Exemption

- Public institution does not guarantee the accuracy or continued service of public works.
- Public institution and its employees do not have any liability for any kind of damage or disadvantage that may arise by using public works.

4. Effect of Use Term Violation

- The use permission is automatically terminated when user violates any of the KOGL's Use Terms, and the user shall immediately stop using public works.
</details> ## Data Structure ### Data Instance ```python >>> from datasets import load_dataset >>> >>> ds = load_dataset("Bingsu/national_library_of_korea_book_info", split="train") >>> ds Dataset({ features: ['isbn13', 'vol', 'title', 'author', 'publisher', 'price', 'img_url', 'description'], num_rows: 7919278 }) ``` ```python >>> ds.features {'isbn13': Value(dtype='string', id=None), 'vol': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'author': Value(dtype='string', id=None), 'publisher': Value(dtype='string', id=None), 'price': Value(dtype='string', id=None), 'img_url': Value(dtype='string', id=None), 'description': Value(dtype='string', id=None)} ``` or ```python >>> import pandas as pd >>> >>> url = "https://huggingface.co/datasets/Bingsu/national_library_of_korea_book_info/resolve/main/train.csv.gz" >>> df = pd.read_csv(url, low_memory=False) ``` ```python >>> df.info() <class 'pandas.core.frame.DataFrame'> RangeIndex: 7919278 entries, 0 to 7919277 Data columns (total 8 columns): # Column Dtype --- ------ ----- 0 isbn13 object 1 vol object 2 title object 3 author object 4 publisher object 5 price object 6 img_url object 7 description object dtypes: object(8) memory usage: 483.4+ MB ``` ### Null data ```python >>> df.isnull().sum() isbn13 3277 vol 5933882 title 19662 author 122998 publisher 1007553 price 3096535 img_url 3182882 description 4496194 dtype: int64 ``` ### Note ```python >>> df[df["description"].str.contains("[ํ•ด์™ธ์ฃผ๋ฌธ์›์„œ]", regex=False) == True].head()["description"] 10773 [ํ•ด์™ธ์ฃผ๋ฌธ์›์„œ] ๊ณ ๊ฐ๋‹˜์˜ ์š”์ฒญ์œผ๋กœ ์ˆ˜์ž… ์ฃผ๋ฌธํ•˜๋Š” ๋„์„œ์ด๋ฏ€๋กœ, ์ฃผ๋ฌธ์ทจ์†Œ ๋ฐ ๋ฐ˜ํ’ˆ์ด ๋ถˆ... 95542 [ํ•ด์™ธ์ฃผ๋ฌธ์›์„œ] ๊ณ ๊ฐ๋‹˜์˜ ์š”์ฒญ์œผ๋กœ ์ˆ˜์ž… ์ฃผ๋ฌธํ•˜๋Š” ๋„์„œ์ด๋ฏ€๋กœ, ์ฃผ๋ฌธ์ทจ์†Œ ๋ฐ ๋ฐ˜ํ’ˆ์ด ๋ถˆ... 95543 [ํ•ด์™ธ์ฃผ๋ฌธ์›์„œ] ๊ณ ๊ฐ๋‹˜์˜ ์š”์ฒญ์œผ๋กœ ์ˆ˜์ž… ์ฃผ๋ฌธํ•˜๋Š” ๋„์„œ์ด๋ฏ€๋กœ, ์ฃผ๋ฌธ์ทจ์†Œ ๋ฐ ๋ฐ˜ํ’ˆ์ด ๋ถˆ... 96606 [ํ•ด์™ธ์ฃผ๋ฌธ์›์„œ] ๊ณ ๊ฐ๋‹˜์˜ ์š”์ฒญ์œผ๋กœ ์ˆ˜์ž… ์ฃผ๋ฌธํ•˜๋Š” ๋„์„œ์ด๋ฏ€๋กœ, ์ฃผ๋ฌธ์ทจ์†Œ ๋ฐ ๋ฐ˜ํ’ˆ์ด ๋ถˆ... 96678 [ํ•ด์™ธ์ฃผ๋ฌธ์›์„œ] ๊ณ ๊ฐ๋‹˜์˜ ์š”์ฒญ์œผ๋กœ ์ˆ˜์ž… ์ฃผ๋ฌธํ•˜๋Š” ๋„์„œ์ด๋ฏ€๋กœ, ์ฃผ๋ฌธ์ทจ์†Œ ๋ฐ ๋ฐ˜ํ’ˆ์ด ๋ถˆ... Name: description, dtype: object ```
Bingsu/national_library_of_korea_book_info
[ "multilinguality:monolingual", "size_categories:1M<n<10M", "language:ko", "license:other", "region:us" ]
2022-08-30T04:48:26+00:00
{"language": ["ko"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "pretty_name": "national_library_of_korea_book_info"}
2022-08-30T07:32:14+00:00
[]
[ "ko" ]
TAGS #multilinguality-monolingual #size_categories-1M<n<10M #language-Korean #license-other #region-us
# national_library_of_korea_book_info

## Dataset Description
- Homepage Culture Big Data Platform (๋ฌธํ™” ๋น…๋ฐ์ดํ„ฐ ํ”Œ๋žซํผ)
- Download Size 759 MB
- Generated Size 2.33 GB
- Total Size 3.09 GB

Data on the books held by the National Library of Korea, distributed by the library.

### License

other (KOGL (Korea Open Government License) Type-1)

!KOGL_image

- According to the above KOGL, users can use public works freely and without fee regardless of commercial use, and can change or modify them to create secondary works, provided they comply with the terms below:

<details>
<summary>KOGL Type 1</summary>

1. Source Indication Liability

- Users who use public works shall indicate source or copyright as follows:
- EX : “000(public institution's name)'s public work is used according to KOGL”
- The link shall be provided when online hyperlink for the source website is available.
- Marking shall not be used to misguide the third party that the user is sponsored by public institution or user has a special relationship with public institutions.

2. Use Prohibited Information

- Personal information that is protected by Personal Information Protection Act, Promotion for Information Network Use and Information Protection Act, etc.
- Credit information protected by the Use and Protection of Credit Information Act, etc.
- Military secrets protected by Military Secret Protection Act, etc.
- Information that is the object of other rights such as trademark right, design right, design right or patent right, etc., or that is owned by third party's copyright.
- Other information that is use prohibited information according to other laws.

3. Public Institution's Liability Exemption

- Public institution does not guarantee the accuracy or continued service of public works.
- Public institution and its employees do not have any liability for any kind of damage or disadvantage that may arise by using public works.

4. Effect of Use Term Violation

- The use permission is automatically terminated when user violates any of the KOGL's Use Terms, and the user shall immediately stop using public works.

</details>

## Data Structure

### Data Instance

or

### Null data

### Note
[ "# national_library_of_korea_book_info", "## Dataset Description\n- Homepage ๋ฌธํ™” ๋น…๋ฐ์ดํ„ฐ ํ”Œ๋žซํผ\n- Download Size 759 MB\n- Generated Size 2.33 GB\n- Total Size 3.09 GB\n\n๊ตญ๋ฆฝ์ค‘์•™๋„์„œ๊ด€์—์„œ ๋ฐฐํฌํ•œ, ๊ตญ๋ฆฝ์ค‘์•™๋„์„œ๊ด€์—์„œ ๋ณด๊ด€์ค‘์ธ ๋„์„œ ์ •๋ณด์— ๊ด€ํ•œ ๋ฐ์ดํ„ฐ.", "### License\n\nother (KOGL (Korea Open Government License) Type-1)\n\n!KOGL_image\n\n- According to above KOGL, user can use public works freely and without fee regardless of its commercial use, and can change or modify to create secondary works when user complies with the terms provided as follows:\n\n<details>\n<summary>KOGL Type 1</summary>\n\n1. Source Indication Liability\n\n- Users who use public works shall indicate source or copyright as follows:\n- EX : โ€œ000(public institution's name)'s public work is used according to KOGLโ€\n- The link shall be provided when online hyperlink for the source website is available.\n- Marking shall not be used to misguide the third party that the user is sponsored by public institution or user has a special relationship with public institutions.\n\n2. Use Prohibited Information\n\n- Personal information that is protected by Personal Information Protection Act, Promotion for Information Network Use and Information Protection Act, etc.\n- Credit information protected by the Use and Protection of Credit Information Act, etc.\n- Military secrets protected by Military Secret Protection Act, etc.\n- Information that is the object of other rights such as trademark right, design right, design right or patent right, etc., or that is owned by third party's copyright.\n- Other information that is use prohibited information according to other laws.\n\n3. Public Institution's Liability Exemption\n\n- Public institution does not guarantee the accuracy or continued service of public works.\n- Public institution and its employees do not have any liability for any kind of damage or disadvantage that may arise by using public works.\n\n4. Effect of Use Term Violation\n\n- The use permission is automatically terminated when user violates any of the KOGL's Use Terms, and the user shall immediately stop using public works.\n\n</details>", "## Data Structure", "### Data Instance\n\n\n\n\n\nor", "### Null data", "### Note" ]
[ "TAGS\n#multilinguality-monolingual #size_categories-1M<n<10M #language-Korean #license-other #region-us \n", "# national_library_of_korea_book_info", "## Dataset Description\n- Homepage ๋ฌธํ™” ๋น…๋ฐ์ดํ„ฐ ํ”Œ๋žซํผ\n- Download Size 759 MB\n- Generated Size 2.33 GB\n- Total Size 3.09 GB\n\n๊ตญ๋ฆฝ์ค‘์•™๋„์„œ๊ด€์—์„œ ๋ฐฐํฌํ•œ, ๊ตญ๋ฆฝ์ค‘์•™๋„์„œ๊ด€์—์„œ ๋ณด๊ด€์ค‘์ธ ๋„์„œ ์ •๋ณด์— ๊ด€ํ•œ ๋ฐ์ดํ„ฐ.", "### License\n\nother (KOGL (Korea Open Government License) Type-1)\n\n!KOGL_image\n\n- According to above KOGL, user can use public works freely and without fee regardless of its commercial use, and can change or modify to create secondary works when user complies with the terms provided as follows:\n\n<details>\n<summary>KOGL Type 1</summary>\n\n1. Source Indication Liability\n\n- Users who use public works shall indicate source or copyright as follows:\n- EX : โ€œ000(public institution's name)'s public work is used according to KOGLโ€\n- The link shall be provided when online hyperlink for the source website is available.\n- Marking shall not be used to misguide the third party that the user is sponsored by public institution or user has a special relationship with public institutions.\n\n2. Use Prohibited Information\n\n- Personal information that is protected by Personal Information Protection Act, Promotion for Information Network Use and Information Protection Act, etc.\n- Credit information protected by the Use and Protection of Credit Information Act, etc.\n- Military secrets protected by Military Secret Protection Act, etc.\n- Information that is the object of other rights such as trademark right, design right, design right or patent right, etc., or that is owned by third party's copyright.\n- Other information that is use prohibited information according to other laws.\n\n3. Public Institution's Liability Exemption\n\n- Public institution does not guarantee the accuracy or continued service of public works.\n- Public institution and its employees do not have any liability for any kind of damage or disadvantage that may arise by using public works.\n\n4. Effect of Use Term Violation\n\n- The use permission is automatically terminated when user violates any of the KOGL's Use Terms, and the user shall immediately stop using public works.\n\n</details>", "## Data Structure", "### Data Instance\n\n\n\n\n\nor", "### Null data", "### Note" ]
[ 36, 14, 53, 395, 5, 6, 5, 3 ]
[ "passage: TAGS\n#multilinguality-monolingual #size_categories-1M<n<10M #language-Korean #license-other #region-us \n# national_library_of_korea_book_info## Dataset Description\n- Homepage ๋ฌธํ™” ๋น…๋ฐ์ดํ„ฐ ํ”Œ๋žซํผ\n- Download Size 759 MB\n- Generated Size 2.33 GB\n- Total Size 3.09 GB\n\n๊ตญ๋ฆฝ์ค‘์•™๋„์„œ๊ด€์—์„œ ๋ฐฐํฌํ•œ, ๊ตญ๋ฆฝ์ค‘์•™๋„์„œ๊ด€์—์„œ ๋ณด๊ด€์ค‘์ธ ๋„์„œ ์ •๋ณด์— ๊ด€ํ•œ ๋ฐ์ดํ„ฐ.### License\n\nother (KOGL (Korea Open Government License) Type-1)\n\n!KOGL_image\n\n- According to above KOGL, user can use public works freely and without fee regardless of its commercial use, and can change or modify to create secondary works when user complies with the terms provided as follows:\n\n<details>\n<summary>KOGL Type 1</summary>\n\n1. Source Indication Liability\n\n- Users who use public works shall indicate source or copyright as follows:\n- EX : โ€œ000(public institution's name)'s public work is used according to KOGLโ€\n- The link shall be provided when online hyperlink for the source website is available.\n- Marking shall not be used to misguide the third party that the user is sponsored by public institution or user has a special relationship with public institutions.\n\n2. Use Prohibited Information\n\n- Personal information that is protected by Personal Information Protection Act, Promotion for Information Network Use and Information Protection Act, etc.\n- Credit information protected by the Use and Protection of Credit Information Act, etc.\n- Military secrets protected by Military Secret Protection Act, etc.\n- Information that is the object of other rights such as trademark right, design right, design right or patent right, etc., or that is owned by third party's copyright.\n- Other information that is use prohibited information according to other laws.\n\n3. Public Institution's Liability Exemption\n\n- Public institution does not guarantee the accuracy or continued service of public works.\n- Public institution and its employees do not have any liability for any kind of damage or disadvantage that may arise by using public works.\n\n4. Effect of Use Term Violation\n\n- The use permission is automatically terminated when user violates any of the KOGL's Use Terms, and the user shall immediately stop using public works.\n\n</details>## Data Structure" ]