sha | text | id | tags | created_at | metadata | last_modified | arxiv | languages | tags_str | text_str | text_lists | processed_texts
---|---|---|---|---|---|---|---|---|---|---|---|---|
b1743a3eb280777e999ff98f0c9f00361b4042b2 | # Dataset Card for "gal_yair_large"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | galman33/gal_yair_83000_1664x832 | [
"region:us"
] | 2022-11-05T21:36:49+00:00 | {"dataset_info": {"features": [{"name": "lat", "dtype": "float64"}, {"name": "lon", "dtype": "float64"}, {"name": "country_code", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 12963511218.0, "num_examples": 83000}], "download_size": 14150729267, "dataset_size": 12963511218.0}} | 2022-11-07T16:16:17+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "gal_yair_large"
More Information needed | [
"# Dataset Card for \"gal_yair_large\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"gal_yair_large\"\n\nMore Information needed"
] |
c0179e1d7304760d33b8fe4985288ea6d025eea2 | # Dataset Card for "adj-n0ed8tdx-800-150-3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ebeaulac/adj-n0ed8tdx-800-150-3 | [
"region:us"
] | 2022-11-05T23:38:03+00:00 | {"dataset_info": {"features": [{"name": "matrix", "sequence": {"sequence": "float64"}}, {"name": "is_adjacent", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 55909792, "num_examples": 1600}, {"name": "valid", "num_bytes": 10444854, "num_examples": 300}], "download_size": 48159452, "dataset_size": 66354646}} | 2022-11-05T23:38:13+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "adj-n0ed8tdx-800-150-3"
More Information needed | [
"# Dataset Card for \"adj-n0ed8tdx-800-150-3\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"adj-n0ed8tdx-800-150-3\"\n\nMore Information needed"
] |
be5ccd50c1a5b6a629bfeead07d335977b77096a | # Dataset Card for "adj-n0ed8tdx-800-150-10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ebeaulac/adj-n0ed8tdx-800-150-10 | [
"region:us"
] | 2022-11-06T00:08:53+00:00 | {"dataset_info": {"features": [{"name": "matrix", "sequence": {"sequence": "float64"}}, {"name": "is_adjacent", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 5311464, "num_examples": 1600}, {"name": "valid", "num_bytes": 993502, "num_examples": 300}], "download_size": 4985370, "dataset_size": 6304966}} | 2022-11-06T00:09:01+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "adj-n0ed8tdx-800-150-10"
More Information needed | [
"# Dataset Card for \"adj-n0ed8tdx-800-150-10\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"adj-n0ed8tdx-800-150-10\"\n\nMore Information needed"
] |
decfdcc57efa83466449ccaa658ad431a8a416d4 |
# Dataset Summary
20M Vietnamese PubMed biomedical abstracts translated by the [state-of-the-art English-Vietnamese Translation project](https://arxiv.org/abs/2210.05610). The data has been used as an unlabeled dataset for [pretraining a Vietnamese Biomedical-domain Transformer model](https://arxiv.org/abs/2210.05598).

image source: [Enriching Biomedical Knowledge for Vietnamese Low-resource Language Through Large-Scale Translation](https://arxiv.org/abs/2210.05598)
# Language
- English: Original biomedical abstracts from [PubMed](https://www.nlm.nih.gov/databases/download/pubmed_medline_faq.html)
- Vietnamese: Synthetic abstracts translated by the [state-of-the-art English-Vietnamese Translation project](https://arxiv.org/abs/2210.05610)
# Dataset Structure
- The English sequences are
- The Vietnamese sequences are
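A minimal loading sketch, assuming the repository id (`VietAI/vi_pubmed`), the `pubmed22` split name, and the parallel `en`/`vi` string features recorded in this card's metadata; streaming is used here only because the full split is large.
```python
from datasets import load_dataset

# Stream the translated abstracts; "pubmed22" is the only split listed
# in the dataset metadata, with parallel "en" and "vi" string fields.
ds = load_dataset("VietAI/vi_pubmed", split="pubmed22", streaming=True)

# Peek at one English/Vietnamese abstract pair.
example = next(iter(ds))
print(example["en"][:200])
print(example["vi"][:200])
```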
# Source Data - Initial Data Collection and Normalization
https://www.nlm.nih.gov/databases/download/pubmed_medline_faq.html
# Licensing Information
[Courtesy of the U.S. National Library of Medicine.](https://www.nlm.nih.gov/databases/download/terms_and_conditions.html)
# Citation
```
@misc{mtet,
doi = {10.48550/ARXIV.2210.05610},
url = {https://arxiv.org/abs/2210.05610},
author = {Ngo, Chinh and Trinh, Trieu H. and Phan, Long and Tran, Hieu and Dang, Tai and Nguyen, Hieu and Nguyen, Minh and Luong, Minh-Thang},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {MTet: Multi-domain Translation for English and Vietnamese},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
```
@misc{vipubmed,
doi = {10.48550/ARXIV.2210.05598},
url = {https://arxiv.org/abs/2210.05598},
author = {Phan, Long and Dang, Tai and Tran, Hieu and Phan, Vy and Chau, Lam D. and Trinh, Trieu H.},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Enriching Biomedical Knowledge for Vietnamese Low-resource Language Through Large-Scale Translation},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
``` | VietAI/vi_pubmed | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"language:vi",
"language:en",
"license:cc",
"arxiv:2210.05610",
"arxiv:2210.05598",
"region:us"
] | 2022-11-06T01:36:50+00:00 | {"language": ["vi", "en"], "license": "cc", "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "paperswithcode_id": "pubmed", "dataset_info": {"features": [{"name": "en", "dtype": "string"}, {"name": "vi", "dtype": "string"}], "splits": [{"name": "pubmed22", "num_bytes": 44360028980, "num_examples": 20087006}], "download_size": 23041004247, "dataset_size": 44360028980}} | 2024-01-09T10:03:00+00:00 | [
"2210.05610",
"2210.05598"
] | [
"vi",
"en"
] | TAGS
#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #language-Vietnamese #language-English #license-cc #arxiv-2210.05610 #arxiv-2210.05598 #region-us
|
# Dataset Summary
20M Vietnamese PubMed biomedical abstracts translated by the state-of-the-art English-Vietnamese Translation project. The data has been used as an unlabeled dataset for pretraining a Vietnamese Biomedical-domain Transformer model.
!image
image source: Enriching Biomedical Knowledge for Vietnamese Low-resource Language Through Large-Scale Translation
# Language
- English: Original biomedical abstracts from PubMed
- Vietnamese: Synthetic abstracts translated by the state-of-the-art English-Vietnamese Translation project
# Dataset Structure
- The English sequences are
- The Vietnamese sequences are
# Source Data - Initial Data Collection and Normalization
URL
# Licensing Information
Courtesy of the U.S. National Library of Medicine.
| [
"# Dataset Summary\n20M Vietnamese PubMed biomedical abstracts translated by the state-of-the-art English-Vietnamese Translation project. The data has been used as unlabeled dataset for pretraining a Vietnamese Biomedical-domain Transformer model.\n\n!image\n\nimage source: Enriching Biomedical Knowledge for Vietnamese Low-resource Language Through Large-Scale Translation",
"# Language\n- English: Original biomedical abstracts from Pubmed\n- Vietnamese: Synthetic abstract translated by a state-of-the-art English-Vietnamese Translation project",
"# Dataset Structure\n- The English sequences are \n- The Vietnamese sequences are",
"# Source Data - Initial Data Collection and Normalization\nURL",
"# Licensing Information\nCourtesy of the U.S. National Library of Medicine."
] | [
"TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #language-Vietnamese #language-English #license-cc #arxiv-2210.05610 #arxiv-2210.05598 #region-us \n",
"# Dataset Summary\n20M Vietnamese PubMed biomedical abstracts translated by the state-of-the-art English-Vietnamese Translation project. The data has been used as unlabeled dataset for pretraining a Vietnamese Biomedical-domain Transformer model.\n\n!image\n\nimage source: Enriching Biomedical Knowledge for Vietnamese Low-resource Language Through Large-Scale Translation",
"# Language\n- English: Original biomedical abstracts from Pubmed\n- Vietnamese: Synthetic abstract translated by a state-of-the-art English-Vietnamese Translation project",
"# Dataset Structure\n- The English sequences are \n- The Vietnamese sequences are",
"# Source Data - Initial Data Collection and Normalization\nURL",
"# Licensing Information\nCourtesy of the U.S. National Library of Medicine."
] |
2dc0655925b2c848b6c86b68ba6ebad82bfec491 | # Dataset Card for PubMed
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://www.nlm.nih.gov/databases/download/pubmed_medline.html
- **Documentation:** https://www.nlm.nih.gov/databases/download/pubmed_medline_documentation.html
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
NLM produces a baseline set of MEDLINE/PubMed citation records in XML format for download on an annual basis. The annual baseline is released in December of each year. Each day, NLM produces update files that include new, revised and deleted citations. See our documentation page for more information.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
- English
## Dataset Structure
Bear in mind that the data comes from XML files with various tags that are hard to reflect
in a concise JSON format. Mapping those tags and lists onto JSON is somewhat unnatural,
so this library makes some choices about how to represent the data. "Journal" info was dropped
altogether, as it would have led to many fields being empty all the time.
The hierarchy is also a bit unnatural, but the choice was made to stay as close as
possible to the original data, in case future releases change the schema on NLM's side.
"Author" has been kept and contains "ForeName", "LastName", "Initials", and "CollectiveName".
(All of these fields are always present, but only some will be filled.)
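For illustration only (this helper is not part of the dataset tooling), one way to turn an `Author` record with that layout into a display name:
```python
def author_display_name(author: dict) -> str:
    """Return a printable name for an Author record in this schema.

    Collective authors fill only "CollectiveName"; individual authors fill
    "ForeName"/"LastName"/"Initials" and leave "CollectiveName" empty.
    """
    if author.get("CollectiveName"):
        return author["CollectiveName"]
    parts = [author.get("ForeName", ""), author.get("LastName", "")]
    return " ".join(p for p in parts if p) or author.get("Initials", "")


print(author_display_name({"ForeName": "Doe", "LastName": "", "Initials": "JD", "CollectiveName": ""}))
print(author_display_name({"CollectiveName": "The Manhattan Project", "ForeName": "", "LastName": "", "Initials": ""}))
```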
### Data Instances
```json
{
    "MedlineCitation": {
        "PMID": 0,
        "DateCompleted": {"Year": 0, "Month": 0, "Day": 0},
        "NumberOfReferences": 0,
        "DateRevised": {"Year": 0, "Month": 0, "Day": 0},
        "Article": {
            "Abstract": {"AbstractText": "Some abstract (can be missing)"},
            "ArticleTitle": "Article title",
            "AuthorList": {"Author": [
                {"FirstName": "John", "ForeName": "Doe", "Initials": "JD", "CollectiveName": ""},
                {"CollectiveName": "The Manhattan Project", "FirstName": "", "ForeName": "", "Initials": ""}
            ]},
            "Language": "en",
            "GrantList": {
                "Grant": []
            },
            "PublicationTypeList": {"PublicationType": []}
        },
        "MedlineJournalInfo": {"Country": "France"},
        "ChemicalList": {"Chemical": [{
            "RegistryNumber": "XX",
            "NameOfSubstance": "Methanol"
        }]},
        "CitationSubset": "AIM",
        "MeshHeadingList": {
            "MeshHeading": []
        }
    },
    "PubmedData": {
        "ArticleIdList": {"ArticleId": "10.1002/bjs.1800650203"},
        "PublicationStatus": "ppublish",
        "History": {"PubMedPubDate": [{"Year": 0, "Month": 0, "Day": 0}]},
        "ReferenceList": [{"Citation": "Somejournal", "CitationId": 1}]
    }
}
```
### Data Fields
The main fields that will probably interest people are (a short access sketch follows this list):
- "MedlineCitation" > "Article" > "AuthorList" > "Author"
- "MedlineCitation" > "Article" > "Abstract" > "AbstractText"
- "MedlineCitation" > "Article" > "Article Title"
- "MedlineCitation" > "ChemicalList" > "Chemical"
- "MedlineCitation" > "NumberOfReferences"
### Data Splits
There are no splits in this dataset. It is given as is.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
https://www.nlm.nih.gov/databases/download/pubmed_medline_faq.html
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
https://www.nlm.nih.gov/databases/download/terms_and_conditions.html
### Citation Information
[Courtesy of the U.S. National Library of Medicine](https://www.nlm.nih.gov/databases/download/terms_and_conditions.html).
### Contributions
Thanks to [@Narsil](https://github.com/Narsil) for adding this dataset. | justinphan3110/vi_pubmed | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:text-classification",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"task_ids:text-scoring",
"task_ids:topic-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:en",
"license:other",
"region:us"
] | 2022-11-06T01:39:06+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["10M<n<100M"], "source_datasets": ["original"], "task_categories": ["text-generation", "fill-mask", "text-classification"], "task_ids": ["language-modeling", "masked-language-modeling", "text-scoring", "topic-classification"], "paperswithcode_id": "pubmed", "pretty_name": "ViPubMed", "split": ["en", "vi"]} | 2022-11-06T21:02:17+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-generation #task_categories-fill-mask #task_categories-text-classification #task_ids-language-modeling #task_ids-masked-language-modeling #task_ids-text-scoring #task_ids-topic-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10M<n<100M #source_datasets-original #language-English #license-other #region-us
| # Dataset Card for PubMed
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: : [URL
- Documentation: : [URL
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
NLM produces a baseline set of MEDLINE/PubMed citation records in XML format for download on an annual basis. The annual baseline is released in December of each year. Each day, NLM produces update files that include new, revised and deleted citations. See our documentation page for more information.
### Supported Tasks and Leaderboards
### Languages
- English
## Dataset Structure
Bear in mind the data comes from XML that have various tags that are hard to reflect
in a concise JSON format. Tags and list are kind of non "natural" to XML documents
leading this library to make some choices regarding data. "Journal" info was dropped
altogether as it would have led to many fields being empty all the time.
The hierarchy is also a bit unnatural but the choice was made to keep as close as
possible to the original data for future releases that may change schema from NLM's side.
Author has been kept and contains either "ForeName", "LastName", "Initials", or "CollectiveName".
(All the fields will be present all the time, but only some will be filled)
### Data Instances
### Data Fields
Main Fields will probably interest people are:
- "MedlineCitation" > "Article" > "AuthorList" > "Author"
- "MedlineCitation" > "Article" > "Abstract" > "AbstractText"
- "MedlineCitation" > "Article" > "Article Title"
- "MedlineCitation" > "ChemicalList" > "Chemical"
- "MedlineCitation" > "NumberOfReferences"
### Data Splits
There are no splits in this dataset. It is given as is.
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
[URL
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
[URL
Courtesy of the U.S. National Library of Medicine.
### Contributions
Thanks to @Narsil for adding this dataset. | [
"# Dataset Card for PubMed",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n- Homepage: : [URL\n- Documentation: : [URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\nNLM produces a baseline set of MEDLINE/PubMed citation records in XML format for download on an annual basis. The annual baseline is released in December of each year. Each day, NLM produces update files that include new, revised and deleted citations. See our documentation page for more information.",
"### Supported Tasks and Leaderboards",
"### Languages\n- English",
"## Dataset Structure\nBear in mind the data comes from XML that have various tags that are hard to reflect\nin a concise JSON format. Tags and list are kind of non \"natural\" to XML documents\nleading this library to make some choices regarding data. \"Journal\" info was dropped\naltogether as it would have led to many fields being empty all the time.\nThe hierarchy is also a bit unnatural but the choice was made to keep as close as\npossible to the original data for future releases that may change schema from NLM's side.\nAuthor has been kept and contains either \"ForeName\", \"LastName\", \"Initials\", or \"CollectiveName\".\n(All the fields will be present all the time, but only some will be filled)",
"### Data Instances",
"### Data Fields\nMain Fields will probably interest people are:\n- \"MedlineCitation\" > \"Article\" > \"AuthorList\" > \"Author\"\n- \"MedlineCitation\" > \"Article\" > \"Abstract\" > \"AbstractText\"\n- \"MedlineCitation\" > \"Article\" > \"Article Title\"\n- \"MedlineCitation\" > \"ChemicalList\" > \"Chemical\"\n- \"MedlineCitation\" > \"NumberOfReferences\"",
"### Data Splits\nThere are no splits in this dataset. It is given as is.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization\n[URL",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n[URL\n\nCourtesy of the U.S. National Library of Medicine.",
"### Contributions\nThanks to @Narsil for adding this dataset."
] | [
"TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_categories-text-classification #task_ids-language-modeling #task_ids-masked-language-modeling #task_ids-text-scoring #task_ids-topic-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10M<n<100M #source_datasets-original #language-English #license-other #region-us \n",
"# Dataset Card for PubMed",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n- Homepage: : [URL\n- Documentation: : [URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\nNLM produces a baseline set of MEDLINE/PubMed citation records in XML format for download on an annual basis. The annual baseline is released in December of each year. Each day, NLM produces update files that include new, revised and deleted citations. See our documentation page for more information.",
"### Supported Tasks and Leaderboards",
"### Languages\n- English",
"## Dataset Structure\nBear in mind the data comes from XML that have various tags that are hard to reflect\nin a concise JSON format. Tags and list are kind of non \"natural\" to XML documents\nleading this library to make some choices regarding data. \"Journal\" info was dropped\naltogether as it would have led to many fields being empty all the time.\nThe hierarchy is also a bit unnatural but the choice was made to keep as close as\npossible to the original data for future releases that may change schema from NLM's side.\nAuthor has been kept and contains either \"ForeName\", \"LastName\", \"Initials\", or \"CollectiveName\".\n(All the fields will be present all the time, but only some will be filled)",
"### Data Instances",
"### Data Fields\nMain Fields will probably interest people are:\n- \"MedlineCitation\" > \"Article\" > \"AuthorList\" > \"Author\"\n- \"MedlineCitation\" > \"Article\" > \"Abstract\" > \"AbstractText\"\n- \"MedlineCitation\" > \"Article\" > \"Article Title\"\n- \"MedlineCitation\" > \"ChemicalList\" > \"Chemical\"\n- \"MedlineCitation\" > \"NumberOfReferences\"",
"### Data Splits\nThere are no splits in this dataset. It is given as is.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization\n[URL",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n[URL\n\nCourtesy of the U.S. National Library of Medicine.",
"### Contributions\nThanks to @Narsil for adding this dataset."
] |
ed7cc1bbeea46791a75ece509a414c12fd264167 |
# Hashtag Prediction Dataset from paper TwHIN-BERT: A Socially-Enriched Pre-trained Language Model for Multilingual Tweet Representations
[](https://huggingface.co/datasets/Twitter/HashtagPrediction/discussions) [](https://arxiv.org/abs/2209.07562) [](https://github.com/xinyangz/TwHIN-BERT)
This repo contains the Hashtag prediction dataset from our paper [TwHIN-BERT: A Socially-Enriched Pre-trained Language Model for Multilingual Tweet Representations](https://arxiv.org/abs/2209.07562). <br />
[[arXiv]](https://arxiv.org/abs/2209.07562)
[[HuggingFace Models]](https://huggingface.co/Twitter/twhin-bert-base)
[[Github repo]](https://github.com/xinyangz/TwHIN-BERT)
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.
## Download
Use the `hashtag-classification-id.zip` in this repo. [Link](https://huggingface.co/datasets/Twitter/HashtagPrediction/blob/main/hashtag-classification-id.zip).
Check the first-author's GitHub repo for any supplemental dataset material or code. [Link](https://github.com/xinyangz/TwHIN-BERT)
## Dataset Description
The hashtag prediction dataset is a multilingual classification dataset. Separate datasets are given for different languages. We first select 500 (or all available) popular hashtags of each language and then sample 10k (or all available) popular Tweets that contain these hashtags. We make sure each Tweet will have exactly one of the selected hashtags.
The evaluation task is a multiclass classification task, with hashtags as labels. We remove the hashtag from the Tweet, and let the model predict the removed hashtag.
We provide Tweet ID and raw text hashtag labels in `tsv` files. For each language, we provide train, development, and test splits.
To use the dataset, you must hydrate the Tweet text with the [Twitter API](https://developer.twitter.com/en/docs/twitter-api) and **remove the hashtag used as the label from each Tweet**.
The data format is displayed below; a short preprocessing sketch follows the table.
| ID | label |
| ------------- | ------------- |
| 1 | hashtag |
| 2 | another hashtag |
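Below is an illustrative sketch of that preprocessing step; the example Tweet, the assumption that labels are stored without a leading `#`, and the file and column names in the comment are hypothetical, and the Twitter API hydration itself is omitted.
```python
import re

def remove_label_hashtag(text: str, label: str) -> str:
    """Strip the label hashtag (e.g. '#nlp') from a hydrated Tweet."""
    pattern = re.compile(r"#" + re.escape(label) + r"\b", flags=re.IGNORECASE)
    return re.sub(r"\s+", " ", pattern.sub(" ", text)).strip()

# Each split ships as a tsv of Tweet IDs and hashtag labels, e.g. with pandas:
#   split = pandas.read_csv("en_train.tsv", sep="\t", names=["ID", "label"])
# The Tweet text itself must then be hydrated from the IDs via the Twitter API.

example_text = "Loving this new benchmark! #nlp #research"
print(remove_label_hashtag(example_text, "nlp"))  # -> "Loving this new benchmark! #research"
```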
## Citation
If you use our dataset in your work, please cite the following:
```bib
@article{zhang2022twhin,
title={TwHIN-BERT: A Socially-Enriched Pre-trained Language Model for Multilingual Tweet Representations},
author={Zhang, Xinyang and Malkov, Yury and Florez, Omar and Park, Serim and McWilliams, Brian and Han, Jiawei and El-Kishky, Ahmed},
journal={arXiv preprint arXiv:2209.07562},
year={2022}
}
``` | Twitter/HashtagPrediction | [
"language:sl",
"language:ur",
"language:sd",
"language:pl",
"language:vi",
"language:sv",
"language:am",
"language:da",
"language:mr",
"language:no",
"language:gu",
"language:in",
"language:ja",
"language:el",
"language:lv",
"language:it",
"language:ca",
"language:is",
"language:cs",
"language:te",
"language:tl",
"language:ro",
"language:ckb",
"language:pt",
"language:ps",
"language:zh",
"language:sr",
"language:pa",
"language:si",
"language:ml",
"language:ht",
"language:kn",
"language:ar",
"language:hu",
"language:nl",
"language:bg",
"language:bn",
"language:ne",
"language:hi",
"language:de",
"language:ko",
"language:fi",
"language:fr",
"language:es",
"language:et",
"language:en",
"language:fa",
"language:lt",
"language:or",
"language:cy",
"language:eu",
"language:iw",
"language:ta",
"language:th",
"language:tr",
"license:cc-by-4.0",
"Twitter",
"Multilingual",
"Classification",
"Benchmark",
"arxiv:2209.07562",
"region:us"
] | 2022-11-06T02:52:17+00:00 | {"language": ["sl", "ur", "sd", "pl", "vi", "sv", "am", "da", "mr", false, "gu", "in", "ja", "el", "lv", "it", "ca", "is", "cs", "te", "tl", "ro", "ckb", "pt", "ps", "zh", "sr", "pa", "si", "ml", "ht", "kn", "ar", "hu", "nl", "bg", "bn", "ne", "hi", "de", "ko", "fi", "fr", "es", "et", "en", "fa", "lt", "or", "cy", "eu", "iw", "ta", "th", "tr"], "license": "cc-by-4.0", "tags": ["Twitter", "Multilingual", "Classification", "Benchmark"]} | 2022-11-21T21:22:07+00:00 | [
"2209.07562"
] | [
"sl",
"ur",
"sd",
"pl",
"vi",
"sv",
"am",
"da",
"mr",
"no",
"gu",
"in",
"ja",
"el",
"lv",
"it",
"ca",
"is",
"cs",
"te",
"tl",
"ro",
"ckb",
"pt",
"ps",
"zh",
"sr",
"pa",
"si",
"ml",
"ht",
"kn",
"ar",
"hu",
"nl",
"bg",
"bn",
"ne",
"hi",
"de",
"ko",
"fi",
"fr",
"es",
"et",
"en",
"fa",
"lt",
"or",
"cy",
"eu",
"iw",
"ta",
"th",
"tr"
] | TAGS
#language-Slovenian #language-Urdu #language-Sindhi #language-Polish #language-Vietnamese #language-Swedish #language-Amharic #language-Danish #language-Marathi #language-Norwegian #language-Gujarati #language-in #language-Japanese #language-Modern Greek (1453-) #language-Latvian #language-Italian #language-Catalan #language-Icelandic #language-Czech #language-Telugu #language-Tagalog #language-Romanian #language-Central Kurdish #language-Portuguese #language-Pushto #language-Chinese #language-Serbian #language-Panjabi #language-Sinhala #language-Malayalam #language-Haitian #language-Kannada #language-Arabic #language-Hungarian #language-Dutch #language-Bulgarian #language-Bengali #language-Nepali (macrolanguage) #language-Hindi #language-German #language-Korean #language-Finnish #language-French #language-Spanish #language-Estonian #language-English #language-Persian #language-Lithuanian #language-Oriya (macrolanguage) #language-Welsh #language-Basque #language-iw #language-Tamil #language-Thai #language-Turkish #license-cc-by-4.0 #Twitter #Multilingual #Classification #Benchmark #arxiv-2209.07562 #region-us
| Hashtag Prediction Dataset from paper TwHIN-BERT: A Socially-Enriched Pre-trained Language Model for Multilingual Tweet Representations
=======================================================================================================================================
 popular hashtags of each language and then sample 10k (or all available) popular Tweets that contain these hashtags. We make sure each Tweet will have exactly one of the selected hashtags.
The evaluation task is a multiclass classification task, with hashtags as labels. We remove the hashtag from the Tweet, and let the model predict the removed hashtag.
We provide Tweet ID and raw text hashtag labels in 'tsv' files. For each language, we provide train, development, and test splits.
To use the dataset, you must hydrate the Tweet text with the Twitter API and remove the hashtag used as the label from each Tweet.
The data format is displayed below.
If you use our dataset in your work, please cite the following:
| [] | [
"TAGS\n#language-Slovenian #language-Urdu #language-Sindhi #language-Polish #language-Vietnamese #language-Swedish #language-Amharic #language-Danish #language-Marathi #language-Norwegian #language-Gujarati #language-in #language-Japanese #language-Modern Greek (1453-) #language-Latvian #language-Italian #language-Catalan #language-Icelandic #language-Czech #language-Telugu #language-Tagalog #language-Romanian #language-Central Kurdish #language-Portuguese #language-Pushto #language-Chinese #language-Serbian #language-Panjabi #language-Sinhala #language-Malayalam #language-Haitian #language-Kannada #language-Arabic #language-Hungarian #language-Dutch #language-Bulgarian #language-Bengali #language-Nepali (macrolanguage) #language-Hindi #language-German #language-Korean #language-Finnish #language-French #language-Spanish #language-Estonian #language-English #language-Persian #language-Lithuanian #language-Oriya (macrolanguage) #language-Welsh #language-Basque #language-iw #language-Tamil #language-Thai #language-Turkish #license-cc-by-4.0 #Twitter #Multilingual #Classification #Benchmark #arxiv-2209.07562 #region-us \n"
] |
5b97d8f7a59d0414050b58e9cdb2c48fc78ec1a1 |
# Dataset Card for Machine Paraphrase Dataset (MPC)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/jpwahle/iconf22-paraphrase
- **Paper:** https://link.springer.com/chapter/10.1007/978-3-030-96957-8_34
- **Total size:** 533 MB
- **Train size:** 340 MB
- **Test size:** 193 MB
### Dataset Summary
The Machine Paraphrase Corpus (MPC) consists of ~200k examples of original and machine-paraphrased text, produced with two online paraphrasing tools.
It uses two paraphrasing tools (SpinnerChief, SpinBot) on three source texts (Wikipedia, arXiv, student theses).
The examples are **not** aligned, i.e., we sample different paragraphs for originals and paraphrased versions.
### How to use it
You can load the dataset using the `load_dataset` function:
```python
from datasets import load_dataset
ds = load_dataset("jpwahle/machine-paraphrase-dataset")
print(ds["train"][0])
#OUTPUT:
{
'text': 'The commemoration was revealed on Whit Monday 16 May 1921 by the Prince of Wales later King Edward VIII with Lutyens in participation At the divulging function Lord Fortescue gave a discourse in which he evaluated that 11600 people from Devon had been slaughtered while serving in the war He later expressed that somewhere in the range of 63700 8000 regulars 36700 volunteers and 19000 recruits had served in the military The names of the fallen were recorded on a move of respect of which three duplicates were made one for Exeter Cathedral one to be held by the district chamber and one which the Prince of Wales put in an empty in the base of the war dedication The rulers visit created impressive energy in the zone A large number of individuals lined the road to welcome his motorcade and shops on the High Street hung out pennants with inviting messages After the uncovering Edward went through ten days visiting the neighborhood ',
'label': 1,
'dataset': 'wikipedia',
'method': 'spinbot'
}
```
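As a further usage sketch building on the example above, one might slice the test split by the fields documented below; split names come from this card's metadata, and the `dataset` value used in the filter is the one shown in the sample record.
```python
from datasets import load_dataset

ds = load_dataset("jpwahle/machine-paraphrase-dataset")

# Keep only paraphrased (label == 1) Wikipedia paragraphs from the test split.
# "wikipedia" is the `dataset` value shown in the example above; the arXiv and
# theses portions use their own values of the same field.
wiki_paraphrases = ds["test"].filter(
    lambda ex: ex["dataset"] == "wikipedia" and ex["label"] == 1
)
print(len(wiki_paraphrases))
print(wiki_paraphrases[0]["method"])
```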
### Supported Tasks and Leaderboards
Paraphrase Identification
### Languages
English
## Dataset Structure
### Data Instances
```json
{
'text': 'The commemoration was revealed on Whit Monday 16 May 1921 by the Prince of Wales later King Edward VIII with Lutyens in participation At the divulging function Lord Fortescue gave a discourse in which he evaluated that 11600 people from Devon had been slaughtered while serving in the war He later expressed that somewhere in the range of 63700 8000 regulars 36700 volunteers and 19000 recruits had served in the military The names of the fallen were recorded on a move of respect of which three duplicates were made one for Exeter Cathedral one to be held by the district chamber and one which the Prince of Wales put in an empty in the base of the war dedication The rulers visit created impressive energy in the zone A large number of individuals lined the road to welcome his motorcade and shops on the High Street hung out pennants with inviting messages After the uncovering Edward went through ten days visiting the neighborhood ',
'label': 1,
'dataset': 'wikipedia',
'method': 'spinbot'
}
```
### Data Fields
| Feature | Description |
| --- | --- |
| `text` | The paragraph text (original or machine-paraphrased). |
| `label` | Whether it is a paraphrase (1) or the original (0). |
| `dataset` | The source dataset (Wikipedia, arXiv, or theses). |
| `method` | The method used (SpinBot, SpinnerChief, original). |
### Data Splits
- train (Wikipedia x Spinbot)
- test ([Wikipedia, arXiv, theses] x [SpinBot, SpinnerChief])
## Dataset Creation
### Curation Rationale
Providing a resource for testing against machine-paraphrased plagiarism.
### Source Data
#### Initial Data Collection and Normalization
- Paragraphs from `featured articles` from the English Wikipedia dump
- Paragraphs from full-text pdfs of arXMLiv
- Paragraphs from full-text pdfs of Czech student theses (bachelor, master, PhD).
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[Jan Philip Wahle](https://jpwahle.com/)
### Licensing Information
The Machine Paraphrase Dataset is released under CC BY-NC 4.0. By using this corpus, you agree to its usage terms.
### Citation Information
```bib
@inproceedings{10.1007/978-3-030-96957-8_34,
title = {Identifying Machine-Paraphrased Plagiarism},
author = {Wahle, Jan Philip and Ruas, Terry and Folt{\'y}nek, Tom{\'a}{\v{s}} and Meuschke, Norman and Gipp, Bela},
year = 2022,
booktitle = {Information for a Better World: Shaping the Global Future},
publisher = {Springer International Publishing},
address = {Cham},
pages = {393--413},
isbn = {978-3-030-96957-8},
editor = {Smits, Malte},
abstract = {Employing paraphrasing tools to conceal plagiarized text is a severe threat to academic integrity. To enable the detection of machine-paraphrased text, we evaluate the effectiveness of five pre-trained word embedding models combined with machine learning classifiers and state-of-the-art neural language models. We analyze preprints of research papers, graduation theses, and Wikipedia articles, which we paraphrased using different configurations of the tools SpinBot and SpinnerChief. The best performing technique, Longformer, achieved an average F1 score of 80.99{\%} (F1 = 99.68{\%} for SpinBot and F1 = 71.64{\%} for SpinnerChief cases), while human evaluators achieved F1 = 78.4{\%} for SpinBot and F1 = 65.6{\%} for SpinnerChief cases. We show that the automated classification alleviates shortcomings of widely-used text-matching systems, such as Turnitin and PlagScan.}
}
```
### Contributions
Thanks to [@jpwahle](https://github.com/jpwahle) for adding this dataset. | jpwahle/machine-paraphrase-dataset | [
"task_categories:text-classification",
"task_categories:text-generation",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"spinbot",
"spinnerchief",
"plagiarism",
"paraphrase",
"academic integrity",
"arxiv",
"wikipedia",
"theses",
"region:us"
] | 2022-11-06T08:21:07+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-classification", "text-generation"], "task_ids": [], "paperswithcode_id": "identifying-machine-paraphrased-plagiarism", "pretty_name": "Machine Paraphrase Dataset (SpinnerChief/SpinBot)", "tags": ["spinbot", "spinnerchief", "plagiarism", "paraphrase", "academic integrity", "arxiv", "wikipedia", "theses"], "dataset_info": [{"split": "train", "download_size": 393224, "dataset_size": 393224}, {"split": "test", "download_size": 655376, "dataset_size": 655376}]} | 2022-11-18T16:54:17+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #task_categories-text-generation #annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-4.0 #spinbot #spinnerchief #plagiarism #paraphrase #academic integrity #arxiv #wikipedia #theses #region-us
| Dataset Card for Machine Paraphrase Dataset (MPC)
=================================================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Repository: URL
* Paper: URL
* Total size: 533 MB
* Train size: 340 MB
* Test size: 193 MB
### Dataset Summary
The Machine Paraphrase Corpus (MPC) consists of ~200k examples of original, and paraphrases using two online paraphrasing tools.
It uses two paraphrasing tools (SpinnerChief, SpinBot) on three source texts (Wikipedia, arXiv, student theses).
The examples are not aligned, i.e., we sample different paragraphs for originals and paraphrased versions.
### How to use it
You can load the dataset using the 'load\_dataset' function:
### Supported Tasks and Leaderboards
Paraphrase Identification
### Languages
English
Dataset Structure
-----------------
### Data Instances
### Data Fields
### Data Splits
* train (Wikipedia x Spinbot)
* test ([Wikipedia, arXiv, theses] x [SpinBot, SpinnerChief])
Dataset Creation
----------------
### Curation Rationale
Providing a resource for testing against machine-paraphrased plagiarism.
### Source Data
#### Initial Data Collection and Normalization
* Paragraphs from 'featured articles' from the English Wikipedia dump
* Paragraphs from full-text pdfs of arXMLiv
* Paragraphs from full-text pdfs of Czech student thesis (bachelor, master, PhD).
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
Jan Philip Wahle
### Licensing Information
The Machine Paraphrase Dataset is released under CC BY-NC 4.0. By using this corpus, you agree to its usage terms.
### Contributions
Thanks to @jpwahle for adding this dataset.
| [
"### Dataset Summary\n\n\nThe Machine Paraphrase Corpus (MPC) consists of ~200k examples of original, and paraphrases using two online paraphrasing tools.\nIt uses two paraphrasing tools (SpinnerChief, SpinBot) on three source texts (Wikipedia, arXiv, student theses).\nThe examples are not aligned, i.e., we sample different paragraphs for originals and paraphrased versions.",
"### How to use it\n\n\nYou can load the dataset using the 'load\\_dataset' function:",
"### Supported Tasks and Leaderboards\n\n\nParaphrase Identification",
"### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"### Data Fields",
"### Data Splits\n\n\n* train (Wikipedia x Spinbot)\n* test ([Wikipedia, arXiv, theses] x [SpinBot, SpinnerChief])\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nProviding a resource for testing against machine-paraprhased plagiarism.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\n* Paragraphs from 'featured articles' from the English Wikipedia dump\n* Paragraphs from full-text pdfs of arXMLiv\n* Paragraphs from full-text pdfs of Czech student thesis (bachelor, master, PhD).",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nJan Philip Wahle",
"### Licensing Information\n\n\nThe Machine Paraphrase Dataset is released under CC BY-NC 4.0. By using this corpus, you agree to its usage terms.",
"### Contributions\n\n\nThanks to @jpwahle for adding this dataset."
] | [
"TAGS\n#task_categories-text-classification #task_categories-text-generation #annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-4.0 #spinbot #spinnerchief #plagiarism #paraphrase #academic integrity #arxiv #wikipedia #theses #region-us \n",
"### Dataset Summary\n\n\nThe Machine Paraphrase Corpus (MPC) consists of ~200k examples of original, and paraphrases using two online paraphrasing tools.\nIt uses two paraphrasing tools (SpinnerChief, SpinBot) on three source texts (Wikipedia, arXiv, student theses).\nThe examples are not aligned, i.e., we sample different paragraphs for originals and paraphrased versions.",
"### How to use it\n\n\nYou can load the dataset using the 'load\\_dataset' function:",
"### Supported Tasks and Leaderboards\n\n\nParaphrase Identification",
"### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"### Data Fields",
"### Data Splits\n\n\n* train (Wikipedia x Spinbot)\n* test ([Wikipedia, arXiv, theses] x [SpinBot, SpinnerChief])\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nProviding a resource for testing against machine-paraprhased plagiarism.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\n* Paragraphs from 'featured articles' from the English Wikipedia dump\n* Paragraphs from full-text pdfs of arXMLiv\n* Paragraphs from full-text pdfs of Czech student thesis (bachelor, master, PhD).",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nJan Philip Wahle",
"### Licensing Information\n\n\nThe Machine Paraphrase Dataset is released under CC BY-NC 4.0. By using this corpus, you agree to its usage terms.",
"### Contributions\n\n\nThanks to @jpwahle for adding this dataset."
] |
d3df6ced7063b572ef46aafd62bcbe953d196491 |
# Dataset Card for Autoencoder Paraphrase Dataset (APC)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Paper:** https://ieeexplore.ieee.org/document/9651895
- **Total size:** 2.23 GB
- **Train size:** 1.52 GB
- **Test size:** 861 MB
### Dataset Summary
The Autoencoder Paraphrase Corpus (APC) consists of ~200k examples of original and machine-paraphrased text, produced with three neural language models.
It uses three models (BERT, RoBERTa, Longformer) on three source texts (Wikipedia, arXiv, student theses).
The examples are aligned, i.e., we sample the same paragraphs for originals and paraphrased versions.
### How to use it
You can load the dataset using the `load_dataset` function:
```python
from datasets import load_dataset
ds = load_dataset("jpwahle/autoencoder-paraphrase-dataset")
print(ds["train"][0])
#OUTPUT:
{
'text': 'War memorial formally unveiled on Whit Monday 16 May 1921 by the Prince of Wales later King Edward VIII with Lutyens in attendance At the unveiling ceremony Captain Fortescue gave a speech during wherein he announced that 11 600 men and women from Devon had been inval while serving in imperialist war He later stated that some 63 700 8 000 regulars 36 700 volunteers 19 000 conscripts had served in the armed forces The heroism of the dead are recorded on a roll of honour of which three copies were made one for Exeter Cathedral one To be held by Tasman county council and another honoring the Prince of Wales placed in a hollow in bedrock base of the war memorial The princes visit generated considerable excitement in the area Thousands of spectators lined the street to greet his motorcade and shops on Market High Street hung out banners with welcoming messages After the unveiling Edward spent ten days touring the local area',
'label': 1,
'dataset': 'wikipedia',
'method': 'longformer'
}
```
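As a further usage sketch building on the example above, a quick (illustrative) way to see how the test split breaks down by source corpus and paraphrasing model; the split names come from this card's metadata and the `dataset`/`method` field names are documented below.
```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("jpwahle/autoencoder-paraphrase-dataset")

# Tally the test split by source corpus and paraphrasing model.
counts = Counter((ex["dataset"], ex["method"]) for ex in ds["test"])
for (source, method), n in sorted(counts.items()):
    print(f"{source:10s} {method:12s} {n}")
```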
### Supported Tasks and Leaderboards
Paraphrase Identification
### Languages
English
## Dataset Structure
### Data Instances
```json
{
'text': 'War memorial formally unveiled on Whit Monday 16 May 1921 by the Prince of Wales later King Edward VIII with Lutyens in attendance At the unveiling ceremony Captain Fortescue gave a speech during wherein he announced that 11 600 men and women from Devon had been inval while serving in imperialist war He later stated that some 63 700 8 000 regulars 36 700 volunteers 19 000 conscripts had served in the armed forces The heroism of the dead are recorded on a roll of honour of which three copies were made one for Exeter Cathedral one To be held by Tasman county council and another honoring the Prince of Wales placed in a hollow in bedrock base of the war memorial The princes visit generated considerable excitement in the area Thousands of spectators lined the street to greet his motorcade and shops on Market High Street hung out banners with welcoming messages After the unveiling Edward spent ten days touring the local area',
'label': 1,
'dataset': 'wikipedia',
'method': 'longformer'
}
```
### Data Fields
| Feature | Description |
| --- | --- |
| `text` | The paragraph text (original or paraphrased). |
| `label` | Whether it is a paraphrase (1) or the original (0). |
| `dataset` | The source dataset (Wikipedia, arXiv, or theses). |
| `method` | The method used (bert, roberta, longformer). |
### Data Splits
- train (Wikipedia x [bert, roberta, longformer])
- test ([Wikipedia, arXiv, theses] x [bert, roberta, longformer])
## Dataset Creation
### Curation Rationale
Providing a resource for testing against autoencoder-paraphrased plagiarism.
### Source Data
#### Initial Data Collection and Normalization
- Paragraphs from `featured articles` from the English Wikipedia dump
- Paragraphs from full-text pdfs of arXMLiv
- Paragraphs from full-text pdfs of Czech student theses (bachelor, master, PhD).
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[Jan Philip Wahle](https://jpwahle.com/)
### Licensing Information
The Autoencoder Paraphrase Dataset is released under CC BY-NC 4.0. By using this corpus, you agree to its usage terms.
### Citation Information
```bib
@inproceedings{9651895,
title = {Are Neural Language Models Good Plagiarists? A Benchmark for Neural Paraphrase Detection},
author = {Wahle, Jan Philip and Ruas, Terry and Meuschke, Norman and Gipp, Bela},
year = 2021,
booktitle = {2021 ACM/IEEE Joint Conference on Digital Libraries (JCDL)},
volume = {},
number = {},
pages = {226--229},
doi = {10.1109/JCDL52503.2021.00065}
}
```
### Contributions
Thanks to [@jpwahle](https://github.com/jpwahle) for adding this dataset. | jpwahle/autoencoder-paraphrase-dataset | [
"task_categories:text-classification",
"task_categories:text-generation",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"bert",
"roberta",
"longformer",
"plagiarism",
"paraphrase",
"academic integrity",
"arxiv",
"wikipedia",
"theses",
"region:us"
] | 2022-11-06T08:28:10+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-classification", "text-generation"], "task_ids": [], "paperswithcode_id": "are-neural-language-models-good-plagiarists-a", "pretty_name": "Autoencoder Paraphrase Dataset (BERT, RoBERTa, Longformer)", "tags": ["bert", "roberta", "longformer", "plagiarism", "paraphrase", "academic integrity", "arxiv", "wikipedia", "theses"], "dataset_info": [{"split": "train", "download_size": 2980464, "dataset_size": 2980464}, {"split": "test", "download_size": 1690032, "dataset_size": 1690032}]} | 2022-11-18T17:26:00+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #task_categories-text-generation #annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-4.0 #bert #roberta #longformer #plagiarism #paraphrase #academic integrity #arxiv #wikipedia #theses #region-us
| Dataset Card for Autoencoder Paraphrase Dataset (APC)
=================================================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Paper: URL
* Total size: 2.23 GB
* Train size: 1.52 GB
* Test size: 861 MB
### Dataset Summary
The Autoencoder Paraphrase Corpus (APC) consists of ~200k examples of original, and paraphrases using three neural language models.
It uses three models (BERT, RoBERTa, Longformer) on three source texts (Wikipedia, arXiv, student theses).
The examples are aligned, i.e., we sample the same paragraphs for originals and paraphrased versions.
### How to use it
You can load the dataset using the 'load\_dataset' function:
### Supported Tasks and Leaderboards
Paraphrase Identification
### Languages
English
Dataset Structure
-----------------
### Data Instances
### Data Fields
### Data Splits
* train (Wikipedia x [bert, roberta, longformer])
* test ([Wikipedia, arXiv, theses] x [bert, roberta, longformer])
Dataset Creation
----------------
### Curation Rationale
Providing a resource for testing against autoencoder-paraphrased plagiarism.
### Source Data
#### Initial Data Collection and Normalization
* Paragraphs from 'featured articles' from the English Wikipedia dump
* Paragraphs from full-text pdfs of arXMLiv
* Paragraphs from full-text PDFs of Czech student theses (bachelor, master, PhD).
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
Jan Philip Wahle
### Licensing Information
The Autoencoder Paraphrase Dataset is released under CC BY-NC 4.0. By using this corpus, you agree to its usage terms.
### Contributions
Thanks to @jpwahle for adding this dataset.
| [
"### Dataset Summary\n\n\nThe Autoencoder Paraphrase Corpus (APC) consists of ~200k examples of original, and paraphrases using three neural language models.\nIt uses three models (BERT, RoBERTa, Longformer) on three source texts (Wikipedia, arXiv, student theses).\nThe examples are aligned, i.e., we sample the same paragraphs for originals and paraphrased versions.",
"### How to use it\n\n\nYou can load the dataset using the 'load\\_dataset' function:",
"### Supported Tasks and Leaderboards\n\n\nParaphrase Identification",
"### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"### Data Fields",
"### Data Splits\n\n\n* train (Wikipedia x [bert, roberta, longformer])\n* test ([Wikipedia, arXiv, theses] x [bert, roberta, longformer])\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nProviding a resource for testing against autoencoder paraprhased plagiarism.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\n* Paragraphs from 'featured articles' from the English Wikipedia dump\n* Paragraphs from full-text pdfs of arXMLiv\n* Paragraphs from full-text pdfs of Czech student thesis (bachelor, master, PhD).",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nJan Philip Wahle",
"### Licensing Information\n\n\nThe Autoencoder Paraphrase Dataset is released under CC BY-NC 4.0. By using this corpus, you agree to its usage terms.",
"### Contributions\n\n\nThanks to @jpwahle for adding this dataset."
] | [
"TAGS\n#task_categories-text-classification #task_categories-text-generation #annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-4.0 #bert #roberta #longformer #plagiarism #paraphrase #academic integrity #arxiv #wikipedia #theses #region-us \n",
"### Dataset Summary\n\n\nThe Autoencoder Paraphrase Corpus (APC) consists of ~200k examples of original, and paraphrases using three neural language models.\nIt uses three models (BERT, RoBERTa, Longformer) on three source texts (Wikipedia, arXiv, student theses).\nThe examples are aligned, i.e., we sample the same paragraphs for originals and paraphrased versions.",
"### How to use it\n\n\nYou can load the dataset using the 'load\\_dataset' function:",
"### Supported Tasks and Leaderboards\n\n\nParaphrase Identification",
"### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"### Data Fields",
"### Data Splits\n\n\n* train (Wikipedia x [bert, roberta, longformer])\n* test ([Wikipedia, arXiv, theses] x [bert, roberta, longformer])\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nProviding a resource for testing against autoencoder paraprhased plagiarism.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\n* Paragraphs from 'featured articles' from the English Wikipedia dump\n* Paragraphs from full-text pdfs of arXMLiv\n* Paragraphs from full-text pdfs of Czech student thesis (bachelor, master, PhD).",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nJan Philip Wahle",
"### Licensing Information\n\n\nThe Autoencoder Paraphrase Dataset is released under CC BY-NC 4.0. By using this corpus, you agree to its usage terms.",
"### Contributions\n\n\nThanks to @jpwahle for adding this dataset."
] |
2ba342d0d668e896b3a805691ab3bcba5f8cc9d3 |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Size:** 163MB
- **Repository:** https://github.com/jpwahle/emnlp22-transforming
- **Paper:** https://arxiv.org/abs/2210.03568
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | jpwahle/autoregressive-paraphrase-dataset | [
"task_categories:text-classification",
"task_categories:text-generation",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"plagiarism",
"paraphrase",
"academic integrity",
"arxiv",
"wikipedia",
"theses",
"bert",
"roberta",
"t5",
"gpt-3",
"arxiv:2210.03568",
"region:us"
] | 2022-11-06T08:28:27+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-classification", "text-generation"], "task_ids": [], "pretty_name": "Machine Paraphrase Dataset (T5, GPT-3)", "tags": ["plagiarism", "paraphrase", "academic integrity", "arxiv", "wikipedia", "theses", "bert", "roberta", "t5", "gpt-3"]} | 2022-11-19T12:14:43+00:00 | [
"2210.03568"
] | [
"en"
] | TAGS
#task_categories-text-classification #task_categories-text-generation #annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-4.0 #plagiarism #paraphrase #academic integrity #arxiv #wikipedia #theses #bert #roberta #t5 #gpt-3 #arxiv-2210.03568 #region-us
|
# Dataset Card for [Dataset Name]
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Size: 163MB
- Repository: URL
- Paper: URL
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @github-username for adding this dataset. | [
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Size: 163MB\n- Repository: URL\n- Paper: URL",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] | [
"TAGS\n#task_categories-text-classification #task_categories-text-generation #annotations_creators-machine-generated #language_creators-machine-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-4.0 #plagiarism #paraphrase #academic integrity #arxiv #wikipedia #theses #bert #roberta #t5 #gpt-3 #arxiv-2210.03568 #region-us \n",
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Size: 163MB\n- Repository: URL\n- Paper: URL",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] |
8712f2e0b993eefe0b12f604d726048951b2fe46 | # Dataset Card for DBLP Discovery Dataset (D3)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/jpwahle/lrec22-d3-dataset
- **Paper:** https://aclanthology.org/2022.lrec-1.283/
- **Total size:** 8.71 GB
### Dataset Summary
DBLP is the largest open-access repository of scientific articles on computer science and provides metadata associated with publications, authors, and venues. We retrieved more than 6 million publications from DBLP and extracted pertinent metadata (e.g., abstracts, author affiliations, citations) from the publication texts to create the DBLP Discovery Dataset (D3). D3 can be used to identify trends in research activity, productivity, focus, bias, accessibility, and impact of computer science research. We present an initial analysis focused on the volume of computer science research (e.g., number of papers, authors, research activity), trends in topics of interest, and citation patterns. Our findings show that computer science is a growing research field (15% annually), with an active and collaborative researcher community. While papers in recent years present more bibliographical entries in comparison to previous decades, the average number of citations has been declining. Investigating papers’ abstracts reveals that recent topic trends are clearly reflected in D3. Finally, we list further applications of D3 and pose supplemental research questions. The D3 dataset, our findings, and source code are publicly available for research purposes.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
### Data Instances
Total size: 8.71 GB
Papers size: 8.13 GB
Authors size: 0.58 GB
### Data Fields
#### Papers
| Feature | Description |
| --- | --- |
| `corpusid` | The unique identifier of the paper. |
| `externalids` | The same paper in other repositories (e.g., DOI, ACL). |
| `title` | The title of the paper. |
| `authors` | The authors of the paper with their `authorid` and `name`. |
| `venue` | The venue of the paper. |
| `year` | The year of the paper publication. |
| `publicationdate` | A more precise publication date of the paper. |
| `abstract` | The abstract of the paper. |
| `outgoingcitations` | The number of references of the paper. |
| `ingoingcitations` | The number of citations of the paper. |
| `isopenaccess` | Whether the paper is open access. |
| `influentialcitationcount` | The number of influential citations of the paper according to SemanticScholar. |
| `s2fieldsofstudy` | The fields of study of the paper according to SemanticScholar. |
| `publicationtypes` | The publication types of the paper. |
| `journal` | The journal of the paper. |
| `updated` | The last time the paper was updated. |
| `url` | A url to the paper in SemanticScholar. |
#### Authors
| Feature | Description |
| --- | --- |
| `authorid` | The unique identifier of the author. |
| `externalids` | The same author in other repositories (e.g., ACL, PubMed). This can include `ORCID` |
| `name` | The name of the author. |
| `affiliations` | The affiliations of the author. |
| `homepage` | The homepage of the author. |
| `papercount` | The number of papers the author has written. |
| `citationcount` | The number of citations the author has received. |
| `hindex` | The h-index of the author. |
| `updated` | The last time the author was updated. |
| `email` | The email of the author. |
| `s2url` | A url to the author in SemanticScholar. |
### Data Splits
- `papers`
- `authors`
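A minimal loading sketch (the configuration names follow the two parts listed above; `split="train"` is an assumption, and the field names come from the tables in the Data Fields section):
```python
from datasets import load_dataset

# "papers" and "authors" follow the parts listed above; split="train" is an
# assumption -- verify the actual split names on the Hub page.
papers = load_dataset("jpwahle/dblp-discovery-dataset", "papers", split="train")
authors = load_dataset("jpwahle/dblp-discovery-dataset", "authors", split="train")

# Field names come from the tables in the Data Fields section above.
print(papers[0]["title"], papers[0]["year"])
print(authors[0]["name"], authors[0]["hindex"])
```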
## Dataset Creation
### Curation Rationale
Providing a resource to analyze the state of computer science research statistically and semantically.
### Source Data
#### Initial Data Collection and Normalization
DBLP and from v2.0 SemanticScholar
## Additional Information
### Dataset Curators
[Jan Philip Wahle](https://jpwahle.com/)
### Licensing Information
The DBLP Discovery Dataset is released under the CC BY-NC 4.0. By using this corpus, you are agreeing to its usage terms.
### Citation Information
If you use the dataset in any way, please cite:
```bib
@inproceedings{Wahle2022c,
title = {D3: A Massive Dataset of Scholarly Metadata for Analyzing the State of Computer Science Research},
author = {Wahle, Jan Philip and Ruas, Terry and Mohammad, Saif M. and Gipp, Bela},
year = {2022},
month = {July},
booktitle = {Proceedings of The 13th Language Resources and Evaluation Conference},
publisher = {European Language Resources Association},
address = {Marseille, France},
doi = {},
}
```
Also make sure to cite the following papers if you use SemanticScholar data:
```bib
@inproceedings{ammar-etal-2018-construction,
title = "Construction of the Literature Graph in Semantic Scholar",
author = "Ammar, Waleed and
Groeneveld, Dirk and
Bhagavatula, Chandra and
Beltagy, Iz",
booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 3 (Industry Papers)",
month = jun,
year = "2018",
address = "New Orleans - Louisiana",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/N18-3011",
doi = "10.18653/v1/N18-3011",
pages = "84--91",
}
```
```bib
@inproceedings{lo-wang-2020-s2orc,
title = "{S}2{ORC}: The Semantic Scholar Open Research Corpus",
author = "Lo, Kyle and Wang, Lucy Lu and Neumann, Mark and Kinney, Rodney and Weld, Daniel",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.acl-main.447",
doi = "10.18653/v1/2020.acl-main.447",
pages = "4969--4983"
}
```
### Contributions
Thanks to [@jpwahle](https://github.com/jpwahle) for adding this dataset.
| jpwahle/dblp-discovery-dataset | [
"task_categories:other",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:extended|s2orc",
"language:en",
"license:cc-by-4.0",
"dblp",
"s2",
"scientometrics",
"computer science",
"papers",
"arxiv",
"region:us"
] | 2022-11-06T09:42:13+00:00 | {"annotations_creators": ["found"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["extended|s2orc"], "task_categories": ["other"], "task_ids": [], "paperswithcode_id": "d3", "pretty_name": "DBLP Discovery Dataset (D3)", "tags": ["dblp", "s2", "scientometrics", "computer science", "papers", "arxiv"], "dataset_info": [{"config_name": "papers", "download_size": 15876152, "dataset_size": 15876152}, {"config_name": "authors", "download_size": 1177888, "dataset_size": 1177888}]} | 2022-11-28T13:18:13+00:00 | [] | [
"en"
] | TAGS
#task_categories-other #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-extended|s2orc #language-English #license-cc-by-4.0 #dblp #s2 #scientometrics #computer science #papers #arxiv #region-us
| Dataset Card for DBLP Discovery Dataset (D3)
============================================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Repository: URL
* Paper: URL
* Total size: 8.71 GB
### Dataset Summary
DBLP is the largest open-access repository of scientific articles on computer science and provides metadata associated with publications, authors, and venues. We retrieved more than 6 million publications from DBLP and extracted pertinent metadata (e.g., abstracts, author affiliations, citations) from the publication texts to create the DBLP Discovery Dataset (D3). D3 can be used to identify trends in research activity, productivity, focus, bias, accessibility, and impact of computer science research. We present an initial analysis focused on the volume of computer science research (e.g., number of papers, authors, research activity), trends in topics of interest, and citation patterns. Our findings show that computer science is a growing research field (15% annually), with an active and collaborative researcher community. While papers in recent years present more bibliographical entries in comparison to previous decades, the average number of citations has been declining. Investigating papers’ abstracts reveals that recent topic trends are clearly reflected in D3. Finally, we list further applications of D3 and pose supplemental research questions. The D3 dataset, our findings, and source code are publicly available for research purposes.
### Supported Tasks and Leaderboards
### Languages
English
Dataset Structure
-----------------
### Data Instances
Total size: 8.71 GB
Papers size: 8.13 GB
Authors size: 0.58 GB
### Data Fields
#### Papers
#### Authors
### Data Splits
* 'papers'
* 'authors'
Dataset Creation
----------------
### Curation Rationale
Providing a resource to analyze the state of computer science research statistically and semantically.
### Source Data
#### Initial Data Collection and Normalization
DBLP and from v2.0 SemanticScholar
Additional Information
----------------------
### Dataset Curators
Jan Philip Wahle
### Licensing Information
The DBLP Discovery Dataset is released under the CC BY-NC 4.0. By using this corpus, you are agreeing to its usage terms.
If you use the dataset in any way, please cite:
Also make sure to cite the following papers if you use SemanticScholar data:
### Contributions
Thanks to @jpwahle for adding this dataset.
| [
"### Dataset Summary\n\n\nDBLP is the largest open-access repository of scientific articles on computer science and provides metadata associated with publications, authors, and venues. We retrieved more than 6 million publications from DBLP and extracted pertinent metadata (e.g., abstracts, author affiliations, citations) from the publication texts to create the DBLP Discovery Dataset (D3). D3 can be used to identify trends in research activity, productivity, focus, bias, accessibility, and impact of computer science research. We present an initial analysis focused on the volume of computer science research (e.g., number of papers, authors, research activity), trends in topics of interest, and citation patterns. Our findings show that computer science is a growing research field (15% annually), with an active and collaborative researcher community. While papers in recent years present more bibliographical entries in comparison to previous decades, the average number of citations has been declining. Investigating papers’ abstracts reveals that recent topic trends are clearly reflected in D3. Finally, we list further applications of D3 and pose supplemental research questions. The D3 dataset, our findings, and source code are publicly available for research purposes.",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nTotal size: 8.71 GB\nPapers size: 8.13 GB\nAuthors size: 0.58 GB",
"### Data Fields",
"#### Papers",
"#### Authors",
"### Data Splits\n\n\n* 'papers'\n* 'authors'\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nProviding a resource to analyze the state of computer science research statistically and semantically.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nDBLP and from v2.0 SemanticScholar\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nJan Philip Wahle",
"### Licensing Information\n\n\nThe DBLP Discovery Dataset is released under the CC BY-NC 4.0. By using this corpus, you are agreeing to its usage terms.\n\n\nIf you use the dataset in any way, please cite:\n\n\nAlso make sure to cite the following papers if you use SemanticScholar data:",
"### Contributions\n\n\nThanks to @jpwahle for adding this dataset."
] | [
"TAGS\n#task_categories-other #annotations_creators-found #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-extended|s2orc #language-English #license-cc-by-4.0 #dblp #s2 #scientometrics #computer science #papers #arxiv #region-us \n",
"### Dataset Summary\n\n\nDBLP is the largest open-access repository of scientific articles on computer science and provides metadata associated with publications, authors, and venues. We retrieved more than 6 million publications from DBLP and extracted pertinent metadata (e.g., abstracts, author affiliations, citations) from the publication texts to create the DBLP Discovery Dataset (D3). D3 can be used to identify trends in research activity, productivity, focus, bias, accessibility, and impact of computer science research. We present an initial analysis focused on the volume of computer science research (e.g., number of papers, authors, research activity), trends in topics of interest, and citation patterns. Our findings show that computer science is a growing research field (15% annually), with an active and collaborative researcher community. While papers in recent years present more bibliographical entries in comparison to previous decades, the average number of citations has been declining. Investigating papers’ abstracts reveals that recent topic trends are clearly reflected in D3. Finally, we list further applications of D3 and pose supplemental research questions. The D3 dataset, our findings, and source code are publicly available for research purposes.",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nTotal size: 8.71 GB\nPapers size: 8.13 GB\nAuthors size: 0.58 GB",
"### Data Fields",
"#### Papers",
"#### Authors",
"### Data Splits\n\n\n* 'papers'\n* 'authors'\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nProviding a resource to analyze the state of computer science research statistically and semantically.",
"### Source Data",
"#### Initial Data Collection and Normalization\n\n\nDBLP and from v2.0 SemanticScholar\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nJan Philip Wahle",
"### Licensing Information\n\n\nThe DBLP Discovery Dataset is released under the CC BY-NC 4.0. By using this corpus, you are agreeing to its usage terms.\n\n\nIf you use the dataset in any way, please cite:\n\n\nAlso make sure to cite the following papers if you use SemanticScholar data:",
"### Contributions\n\n\nThanks to @jpwahle for adding this dataset."
] |
809cbb33cc56feb36861453482737011984d2e72 | # Dataset Card for "amazon-reviews-input-output"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | AlekseyKorshuk/amazon-reviews-input-output | [
"region:us"
] | 2022-11-06T10:49:53+00:00 | {"dataset_info": {"features": [{"name": "input_text", "dtype": "string"}, {"name": "output_text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 3105, "num_examples": 10}, {"name": "train", "num_bytes": 223383, "num_examples": 1000}, {"name": "validation", "num_bytes": 24145, "num_examples": 100}], "download_size": 160709, "dataset_size": 250633}} | 2022-11-06T10:54:44+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "amazon-reviews-input-output"
More Information needed | [
"# Dataset Card for \"amazon-reviews-input-output\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"amazon-reviews-input-output\"\n\nMore Information needed"
] |
7257fc7041564826ef9e11c7eb25e520c553a23a | # Dataset Card for "minguostyle"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jojofan/minguostyle | [
"region:us"
] | 2022-11-06T11:00:25+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 444193006.0, "num_examples": 944}], "download_size": 444181518, "dataset_size": 444193006.0}} | 2022-11-20T10:23:03+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "minguostyle"
More Information needed | [
"# Dataset Card for \"minguostyle\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"minguostyle\"\n\nMore Information needed"
] |
3fe6546a4680db3b29a73ab9b6d8eeb955c7f3c3 | # Dataset Card for "simpsons-blip-captions"
| Norod78/simpsons-blip-captions | [
"task_categories:text-to-image",
"annotations_creators:machine-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:n<1K",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-11-06T11:11:36+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["other"], "language": ["en"], "license": "cc-by-nc-sa-4.0", "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "pretty_name": "Simpsons BLIP captions", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 51605730.0, "num_examples": 755}], "download_size": 50553165, "dataset_size": 51605730.0}, "tags": []} | 2022-11-09T16:27:19+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-to-image #annotations_creators-machine-generated #language_creators-other #multilinguality-monolingual #size_categories-n<1K #language-English #license-cc-by-nc-sa-4.0 #region-us
| # Dataset Card for "simpsons-blip-captions"
| [
"# Dataset Card for \"simpsons-blip-captions\""
] | [
"TAGS\n#task_categories-text-to-image #annotations_creators-machine-generated #language_creators-other #multilinguality-monolingual #size_categories-n<1K #language-English #license-cc-by-nc-sa-4.0 #region-us \n",
"# Dataset Card for \"simpsons-blip-captions\""
] |
df2ff8dcc6a6444f74d735d16d12b50d9c25fbab | # Dataset Card for "processed_bert_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | sanagnos/processed_bert_dataset | [
"region:us"
] | 2022-11-06T12:54:51+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "special_tokens_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 24027415200.0, "num_examples": 6674282}], "download_size": 5731603526, "dataset_size": 24027415200.0}} | 2022-11-06T22:27:01+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "processed_bert_dataset"
More Information needed | [
"# Dataset Card for \"processed_bert_dataset\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"processed_bert_dataset\"\n\nMore Information needed"
] |
9655fd7b4d3c9b841446e3687c720f766372ca4c | # Dataset Card for "vqgan1024_reconstruction"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | maloyan/vqgan1024_reconstruction | [
"region:us"
] | 2022-11-06T13:36:33+00:00 | {"dataset_info": {"features": [{"name": "image_512", "dtype": "image"}, {"name": "image_256", "dtype": "image"}, {"name": "reconstruction_256", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 3446042724.0, "num_examples": 100000}], "download_size": 4331449801, "dataset_size": 3446042724.0}} | 2022-11-06T13:40:50+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "vqgan1024_reconstruction"
More Information needed | [
"# Dataset Card for \"vqgan1024_reconstruction\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"vqgan1024_reconstruction\"\n\nMore Information needed"
] |
112a1953643ce80c81c9bdd37f751909cf10f4b6 | # AutoTrain Dataset for project: csi5386
## Dataset Description
This dataset has been automatically processed by AutoTrain for project csi5386.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"context": "Exhibit 10.1\n\nFORM OF SUB-RESELLER AGREEMENT\n\nSignature Page\n\nReseller Full Legal Name Salesforce.org, a nonprofit public benefit corporation having its principal place of business at 50 Fremont Street, Suite 300, San Francisco, California 94105\n\nThis Form of Sub-Reseller Agreement (this \"Sub-Reseller Agreement\") is made and entered in by and between salesforce.com, inc., a Delaware corporation having its principal place of business at The Landmark @ One Market, Suite 300, San Francisco, California 94105 (\"SFDC\" or \"Salesforce\") and the Reseller named above and amends that certain Reseller Agreement between Salesforce and Reseller dated as of August 1, 2015, as previously amended (the \"Agreement\"). This Sub-Reseller Agreement is effective as of the later of the dates beneath the Parties' signatures below (\"Sub-Reseller Effective Date\"), provided, however, that the dates of the Parties' signatures are not separated by a period of time greater than ten (10) business days. If such period is greater than ten (10) business days then this Sub-Reseller Agreement shall be deemed null and void and to be of no effect. Capitalized terms not defined herein shall have the meanings given to them in the Agreement.\n\nThe Parties, by their respective authorized signatories, have duly executed this Sub-Reseller Agreement as of the Sub-Reseller Effective Date.\n\nSalesforce.com, Inc. Reseller\n\nBy: By: Name: Name: Title: Title: Date: Date:\n\nSource: SALESFORCE.COM, INC., 10-Q, 11/22/2017\n\n\n\n\n\nExhibit 10.1\n\nSub-Reseller Agreement Terms & Conditions\n\n1. Resale Rights. SFDC hereby appoints SUB-RESELLER (\"Sub-Reseller\") as a sub-reseller to whom Reseller may resell Services in accordance with Section 2(ii) of the Agreement, provided that Sub-Reseller may only resell such Services to Customer. Reseller must ensure that Sub-Reseller complies with the terms of the Agreement applicable to Reseller as if Sub- Reseller were an original party to the Agreement and any breach by Sub-Reseller of the Agreement will be deemed a breach by Reseller. Sub-Reseller is not be a third-party beneficiary of the Agreement.\n\n2. Effect of Sub-Reseller Agreement. Subject to the above modifications, the Agreement remains in full force and effect.\n\n3. Entire Agreement. The terms and conditions herein contained constitute the entire agreement between the Parties with respect to the subject matter of this Sub-Reseller Agreement and supersede any previous and contemporaneous agreements and understandings, whether oral or written, between the Parties hereto with respect to the subject matter hereof.\n\n4. Counterparts. This Sub-Reseller Agreement may be executed in one or more counterparts, including facsimiles or scanned copies sent via email or otherwise, each of which will be deemed to be a duplicate original, but all of which, taken together, will be deemed to constitute a single instrument.\n\nSource: SALESFORCE.COM, INC., 10-Q, 11/22/2017",
"question": "Highlight the parts (if any) of this contract related to \"Non-Disparagement\" that should be reviewed by a lawyer. Details: Is there a requirement on a party not to disparage the counterparty?",
"answers.text": [
""
],
"answers.answer_start": [
-1
],
"feat_id": [
"SalesforcecomInc_20171122_10-Q_EX-10.1_10961535_EX-10.1_Reseller Agreement__Non-Disparagement_0"
],
"feat_title": [
"SalesforcecomInc_20171122_10-Q_EX-10.1_10961535_EX-10.1_Reseller Agreement"
]
},
{
"context": "EXHIBIT 10.2\n\n DISTRIBUTOR AGREEMENT\n\nEXHIBIT 10.2\n\n EXCLUSIVE DISTRIBUTOR AGREEMENT\n\n THIS EXCLUSIVE DISTRIBUTOR AGREEMENT (the \"Agreement\") shall be effective as of _Dec. 8, 2005 (hereinafter \"Effective Date\"), by and between LifeUSA/ Envision Health, Inc., a corporation (hereinafter collectively \"ENVISION\"), and Sierra Mountain Minerals, Inc., a Canadian company (hereinafter \"SIERRA\"), is made with reference to the following facts:\n\n Recitals\n\nA. SIERRA is the manufacture and producer of a joint health product called \"SierraSil\" (hereinafter \"the Product\") for human use.\n\nB. ENVISION is the manufacturer of certain nutritional supplements and is desirous of becoming an exclusive distributor for the Product in any blend with Krill Oil (hereinafter \"the Finished Product\") in all distribution channels in the Territory on the terms and conditions set forth herein.\n\nC. SIERRA is desirous of having ENVISION act as its exclusive distributor for the Product in any blend with Krill Oil in all distribution channels in the Territory on the terms and conditions set forth herein.\n\nNOW, THEREFORE, it is hereby agreed as follows:\n\n1. Incorporation of Recitals. The Recitals set forth in Paragraphs A through C, above, are incorporated herein as though set forth in full.\n\n2. Appointment. SIERRA hereby appoints ENVISION as its exclusive distributor for the Product in any blend with Krill Oil within the Territory subject to ENVISION fulfilling the terms and conditions of the best efforts marketing requirements set forth herein in Sections 4, 5, and 9. SIERRA shall cease making sales to any customer or distributor who, during the term of this Agreement, violates ENVISION's exclusivity.\n\n3. Territory. The Territory shall be the entire world.\n\n4. Prices and Terms. The price for the Product as set forth in Section 9 herein, sold by SIERRA to ENVISION, shall be subject to change due to changes in manufacturing costs and so as to maximize profits; any changes in price for the Product shall not be applicable to previously accepted orders and shall be made with at least ninety (90) days advance notice in writing and in good faith by conference of the parties. ENVISION shall not resell the Product alone. Terms of payment will be 1/3 upon placement of order and 2/3 balance net thirty (30) days or as mutually agreed upon in writing between the parties. Delivery will be F.O.B. ENVISION shall be responsible for all costs of shipping from SIERRA to ENVISION.\n\n5. Product Support. ENVISION will use its best efforts to market and sell the Finished Product throughout the Territory. The parties also agree that:\n\n o If SIERRA customers are interested in purchasing the Product in any blend with Krill Oil, SIERRA will refer them to ENVISION.\n\n o ENVISION will be responsible for all costs associated with developing and manufacturing the Finished Product.\n\n6. Sales Disclosures. ENVISION will provide SIERRA with demand projections for the Product and SIERRA will produce enough Product to meet such demand projections. ENVISION will inform SIERRA of committed sales and SIERRA will increase or scale up its production of the Product accordingly. SIERRA will not unreasonably withhold the Product, but shall not be liable for unfulfilled or partially fulfilled orders given just cause for such action.\n\n7. Term. 
The term of this Agreement shall be two (2) years from the Effective Date with automatic annual renewals thereafter provided either party does not provide sixty (60) days notice of termination prior to the renewal date or the Agreement is not otherwise terminated as set forth in Section 8.\n\n8. Termination. (a) Upon the occurrence of a material breach or default as to any obligation, term or provision contained herein by either party and the failure of the breaching party to promptly pursue (within thirty (30) days after receiving written notice thereof from the non-breaching party) a reasonable remedy designed to cure (in the reasonable judgment of the non-breaching party) such material breach or default, this Agreement may be terminated by the non-breaching party by giving written notice of termination to the breaching party, such termination\n\n\n\n\n\n being immediately effective upon the giving of such notice of termination.\n\n (b) Upon the occurrence of bankruptcy of the other party, breach of confidentiality, government legislative interference, or force majeure extending beyond sixty (60) days, either party may immediately terminate the Agreement.\n\n9. Purchase Requirements. During the term of this Agreement, ENVISION will exclusively purchase the Product from SIERRA. The parties mutually agree to the Purchase Price of:\n\n Product Purchase Price ----------------------------------------------- A. SierraSil Per Sierra Sil's wholesale price list.\n\n10. Intellectual Property. SIERRA is responsible for all Patent costs for the Product. SIERRA warrants it owns pending patents for the Product in the U.S. and internationally. SIERRA hereby grants ENVISION an exclusive, royalty-free sub-license of the Product's future patents, and patent applications to distribute, sell and market the Finished Product. SIERRA hereby agrees to indemnify, defend and hold ENVISION harmless from any claims that the Product infringes upon any other patent.\n\n11. Trademarks SIERRA is the owner of the trademark&sbsp; \"SierraSil\". This Agreement grants ENVISION a non-exclusive and non-royalty bearing license to use the mark \"SierraSil\". SIERRA shall at all times be the owner of the trademark and ENVISION shall acquire no rights thereto. Upon termination, ENVISION shall have eighteen (18) months to exhaust any inventories, packaging and advertising materials bearing the \"SierraSil\" trademark and SIERRA shall have first option to buy back any inventory at ENVISION's net purchase price.\n\n12. Independent Contractor Status. The parties acknowledge that ENVISION is an independent contractor and shall not be deemed to be an employee, agent, or joint venturer of SIERRA for any purpose, including federal tax purposes.\n\n13. Warranty. SIERRA warrants that the Product shall be free from defects in material and workmanship for the reasonable shelf life of the Product. In the event of any breach of this warranty or in the event any user of Product makes a claim that the Product was the cause of personal injury or property damage (product liability claim), SIERRA shall indemnify, defend and hold ENVISION harmless from any liability occasioned by a breach of warranty or a product liability claim. SIERRA warrants that it carries general liability insurance of not less than $2 million per occurrence and product liability insurance of not less than $5 million per occurrence and that, upon the execution of this Agreement, it will name ENVISION as an additional insured on such policies. 
SIERRA further warrants that the Product will not be adulterated or misbranded within the meaning of any federal, state, or local law or regulation or other applicable law. SIERRA agrees to promptly notify ENVISION of any problem, anomaly, defect or condition which would reasonably cause ENVISION's concern relative to stability, reliability, form, fit, function or quality of the Product.\n\n ENVISION warrants that the Finished Product will not be adulterated or misbranded within the meaning of any federal, state, or local law or regulation or other applicable law. In the event of any breach of this warranty or in the event any user of the Finished Product makes a claim that the Finished Product was the cause of personal injury or property damage (product liability claim), ENVISION shall indemnify, defend, and hold SIERRA harmless from any liability occasioned by a breach of warranty or a product liability claim. ENVISION warrants that it carries general liability insurance of $1 million per occurrence and product liability insurance of not less than $2 million per occurrence and that, upon execution of this Agreement, it will name SIERRA as an additional insured on such policies.\n\n14. Confidential Information. The parties acknowledge that, during the term of this Agreement, each may receive certain Proprietary Information of the other. Proprietary Information includes, without limitation, formula, scientific studies, processes, plans, formulations, technical information, new product information, methods of product delivery, test procedures, product samples, specifications, scientific, clinical, commercial and other information or data, customer lists, customer contacts, and other distributors within the Territory which are considered confidential in nature whether communicated in writing or orally. The parties agree that each will treat such information as confidential. Neither party shall have the right to disclose the Proprietary Information to any third party without the express written consent of the disclosing party. Neither party may use the proprietary information except in furtherance of the goals of this Agreement and is further prohibited from utilizing the Proprietary Information directly nor indirectly to engage in any business activity which is competitive with the other.\n\n15. Force Majeure. In no event shall any party be responsible for its failure to fulfill any of its obligations under this Agreement when such failure is due to fires, floods, riots, strikes, freight embargoes, acts of God or insurrection. In the event of a force majeure, the party affected thereby shall give immediate written notice to the other. If the event of force majeure continues for longer than\n\n\n\n\n\n sixty (60) days, the party not so affected shall have the right to terminate this Agreement.\n\n16. Non-Waiver of Default. The failure of either party at any time to require the performance by a party of any provision of this Agreement shall in no way affect the right to require performance at any time after such failure. The waiver of either party of a breach of any provision of this Agreement shall not be taken to be a waiver of any succeeding breach of the provision or as a waiver of the provision itself.\n\n17. Attorney's Fees. 
In the event either party is required to institute litigation to enforce any provision of this Agreement, the prevailing party in such litigation shall be entitled to recover all costs including without limitation, reasonable attorney's fees and expenses incurred in connection with such enforcement and collection.\n\n18. Venue. This Agreement is deemed to have been entered into in the State of Colorado, and its interpretation, construction, and the remedies for its enforcement or breach are to be applied pursuant to and in accordance with the laws of the State of Colorado.\n\n19. Notices. Any and all notices or other communication required or permitted to be given pursuant to this Agreement shall be in writing and shall be construed as properly given if mailed first class, postage prepaid to the address specified herein. Either party may designate, in writing, a change of address or other place to which notices may be sent.\n\n If to SIERRA: If to LIFEUSA/ENVISION: Mr. Michael Bentley Mr. Michael Schuett Sierra Mountain Minerals Inc. Envision Health, Inc. 1501 West Broadway, Suite 500 2475 Broadway, Suite 202 Vancouver BC V6J4Z6 Boulder, CO 80304 Canada\n\n20. Amendment. This Agreement shall not be modified or amended except by a written agreement executed by both parties.\n\n21. Entire Agreement. This Agreement constitutes the entire agreement between the parties with respect to the subject matter thereof and supersedes all prior agreements, whether written or oral.\n\n22. Assignment. The parties shall have the right to assign all, or part, of its rights under this Agreement to any wholly owned subsidiary or affiliate without the consent of the other Party. Any other assignment by the parties, requires the prior written consent of the other Party.\n\nACKNOWLEDGEMENTS\n\n Each party acknowledges that he or she has had an adequate opportunity to read and study this Agreement. The understanding of the aforesaid articles causes no difficulty whatsoever and each party has retained a copy of this agreement immediately after the signing of it by all parties.\n\n IN WITNESS WHEREOF, the parties have executed this Agreement effective as of the date and year first written above.\n\nSIERRA MOUNTAIN MINERALS LIFEUSA/ENVISION HEALTH\n\nBy: /s/ Michael Bentley By: /s/ Michael Schuett ----------------------- ------------------------- Michael Bentley Michael Schuett\n\n December 8, 2005 December 7, 2005 ----------------------- ------------------------------ Date Date",
"question": "Highlight the parts (if any) of this contract related to \"Third Party Beneficiary\" that should be reviewed by a lawyer. Details: Is there a non-contracting party who is a beneficiary to some or all of the clauses in the contract and therefore can enforce its rights against a contracting party?",
"answers.text": [
""
],
"answers.answer_start": [
-1
],
"feat_id": [
"LEGACYTECHNOLOGYHOLDINGS,INC_12_09_2005-EX-10.2-DISTRIBUTOR AGREEMENT__Third Party Beneficiary_0"
],
"feat_title": [
"LEGACYTECHNOLOGYHOLDINGS,INC_12_09_2005-EX-10.2-DISTRIBUTOR AGREEMENT"
]
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"context": "Value(dtype='string', id=None)",
"question": "Value(dtype='string', id=None)",
"answers.text": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"answers.answer_start": "Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None)",
"feat_id": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)",
"feat_title": "Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)"
}
```
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 16687 |
| valid | 4182 |
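A minimal sketch for loading and inspecting one record (the repository id and the flattened field names are taken from this card; the exact split names, e.g. `valid` vs. `validation`, should be verified on the Hub):
```python
from datasets import load_dataset

# Repository id and field names come from this card; the split naming
# ("valid" vs. "validation") is an assumption to verify on the Hub.
dataset = load_dataset("adrienheymans/autotrain-data-csi5386")

example = dataset["train"][0]
print(example["question"])
print(example["context"][:300])
print(example["answers.text"], example["answers.answer_start"])
```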
| adrienheymans/autotrain-data-csi5386 | [
"language:en",
"region:us"
] | 2022-11-06T15:30:45+00:00 | {"language": ["en"]} | 2022-11-07T00:44:12+00:00 | [] | [
"en"
] | TAGS
#language-English #region-us
| AutoTrain Dataset for project: csi5386
======================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project csi5386.
### Languages
The BCP-47 code for the dataset's language is en.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into train and validation splits. The split sizes are as follows:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#language-English #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
968084f5cdec40cd12c2155cd044158d31819244 | ~15k logo images from LAION-5B have been rated for aesthetic preference (preference_average) and for how professional the design looks (professionalism_average).
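As an illustrative sketch only (the file name, format, and rating scale are not documented here and are assumed; only the two column names come from the description above):
```python
import pandas as pd

# Hypothetical file name and format -- check the actual files in this repository.
ratings = pd.read_parquet("aesthetic-logo-ratings.parquet")

# Keep logos that score highly on both rating columns named above; the 7.0
# threshold is arbitrary and depends on the (undocumented) rating scale.
good_logos = ratings[
    (ratings["preference_average"] >= 7.0)
    & (ratings["professionalism_average"] >= 7.0)
]
print(len(good_logos))
```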
---
license: apache-2.0
---
| ChristophSchuhmann/aesthetic-logo-ratings | [
"region:us"
] | 2022-11-06T15:42:12+00:00 | {} | 2022-11-06T15:48:48+00:00 | [] | [] | TAGS
#region-us
| ~15k logo images from LAION-5B have been rated for aesthetic preference (preference_average) and for how professional the design looks (professionalism_average).
---
license: apache-2.0
---
| [] | [
"TAGS\n#region-us \n"
] |
06ee653c1ee0c6de272a4e611792829d16d8dfcb | # Dataset Card for "InstaFoodSet"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Dizex/InstaFoodSet | [
"region:us"
] | 2022-11-06T19:39:47+00:00 | {"dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "iob_tags", "sequence": "string"}, {"name": "iob_tags_ids", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 346804, "num_examples": 320}, {"name": "val", "num_bytes": 37219, "num_examples": 40}, {"name": "test", "num_bytes": 39352, "num_examples": 40}], "download_size": 84698, "dataset_size": 423375}} | 2022-12-11T20:07:40+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "InstaFoodSet"
More Information needed | [
"# Dataset Card for \"InstaFoodSet\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"InstaFoodSet\"\n\nMore Information needed"
] |
8093fd2c6a57407a7cac975c7e5525f1dd16a2e6 | This dataset contains more than 13,000 AI-generated key findings from scientific studies and industry reports about veganism, animal rights activism, marketing and other topics that may be useful for vegan businesses and animal rights activists.
We've made this dataset freely available so that it may benefit the wider movement as much as possible.
Each row in the CSV contains the title of the study, a link to the study and an AI-generated key finding from the study. Most key findings are a single sentence, while some are two or three, and all are written in natural, easy-to-understand language.
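As a rough sketch only (the CSV file name and column headers are not given here and are assumed; adjust them to the actual file), including the de-duplication step recommended below:
```python
import pandas as pd

# File name and column names are assumptions -- match them to the real CSV headers.
df = pd.read_csv("vegan_study_summaries.csv")

# Optional de-duplication by title, URL and key finding, as recommended below.
df = df.drop_duplicates(subset=["title", "url", "key_finding"])
print(df.head())
```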
These AI-generated key findings were summarised from the abstracts of their respective studies using a combination of SciTLDR and our own specialised AI summarization model known as TLDR Vegan Studies, which is freely accessible here: https://huggingface.co/VEG3/TLDR-Vegan-Studies
There are some important limitations to consider before using this dataset. First, because each finding is generated by AI and not all have been manually approved by a human, there's no guarantee that 100% of the key findings generated are completely accurate. Second, there may be a bias in summary generation towards the kinds of results that can be found in the dataset used to generate the TLDR Vegan Studies model. Finally, because multiple different sources were used to collect studies for inclusion in this dataset, there are multiple key findings for the same study in many cases, and this may bias the overall dataset towards the result of studies that are more widely distributed.
We recommend using this dataset to get a broad overview of what the greater body of research says on the topics covered, rather than relying on it entirely to verify any particular factual claim. Depending on your use case, you might get the best results by deduplicating the dataset by title, URL and/or key finding before training any ML models on it. | VEG3/VeganStudySummaries | [
"region:us"
] | 2022-11-06T20:14:19+00:00 | {} | 2022-11-06T20:37:08+00:00 | [] | [] | TAGS
#region-us
| This dataset contains more than 13,000 AI-generated key findings from scientific studies and industry reports about veganism, animal rights activism, marketing and other topics that may be useful for vegan businesses and animal rights activists.
We've made this dataset freely available so that it may benefit the wider movement as much as possible.
Each row in the CSV contains the title of the study, a link to the study and an AI-generated key finding from the study. Most key findings are a single sentence, while some are two or three, and all are written in natural, easy-to-understand language.
These AI-generated key findings were summarised from the abstracts of their respective studies using a combination of SciTLDR and our own specialised AI summarization model known as TLDR Vegan Studies, which is freely accessible here: URL
There are some important limitations to consider before using this dataset. First, because each finding is generated by AI and not all have been manually approved by a human, there's no guarantee that 100% of the key findings generated are completely accurate. Second, there may be a bias in summary generation towards the kinds of results that can be found in the dataset used to generate the TLDR Vegan Studies model. Finally, because multiple different sources were used to collect studies for inclusion in this dataset, there are multiple key findings for the same study in many cases, and this may bias the overall dataset towards the results of studies that are more widely distributed.
We recommend using this dataset to get a broad overview of what the greater body of research says on the topics covered, rather than relying on it entirely to verify any particular factual claim. Depending on your use case, you might get the best results by deduplicating the dataset by title, URL and/or key finding before training any ML models on it. | [] | [
"TAGS\n#region-us \n"
] |
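The VEG3/VeganStudySummaries record above recommends deduplicating by title, URL and/or key finding before training on the CSV. A minimal pandas sketch of that step follows; the file name and the column names `title`, `url` and `key_finding` are hypothetical, since the card does not state the actual headers.

```python
# Sketch: deduplicate the study-summaries CSV before training, as the record
# above suggests. File name and column names are hypothetical; replace them
# with the actual headers of the downloaded CSV.
import pandas as pd

df = pd.read_csv("vegan_study_summaries.csv")
print("rows before:", len(df))

# Drop rows that repeat the same study (title + URL), then drop key findings
# repeated verbatim across sources.
deduped = (
    df.drop_duplicates(subset=["title", "url"])
      .drop_duplicates(subset=["key_finding"])
)
print("rows after:", len(deduped))

deduped.to_csv("vegan_study_summaries_deduped.csv", index=False)
```

Deduplicating on the study first and on the key finding second keeps at most one finding per study while also removing findings repeated word for word across sources; keep only one of the two steps if you want a less aggressive filter.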
3a84e0922e7e92b3488088803eb370243c823307 | # Dataset Card for "captioned-cartoons"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | juliaturc/captioned-cartoons | [
"region:us"
] | 2022-11-06T23:02:31+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 22981331.0, "num_examples": 100}], "download_size": 22873699, "dataset_size": 22981331.0}} | 2022-11-08T03:09:08+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "captioned-cartoons"
More Information needed | [
"# Dataset Card for \"captioned-cartoons\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"captioned-cartoons\"\n\nMore Information needed"
] |
82cfe4739bc635408dd8bc09cb0185cae3e92398 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: 123tarunanand/roberta-base-finetuned
* Dataset: cuad
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@adrienheymans](https://huggingface.co/adrienheymans) for evaluating this model. | autoevaluate/autoeval-eval-cuad-default-2fec59-2004766522 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T00:50:51+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cuad"], "eval_info": {"task": "extractive_question_answering", "model": "123tarunanand/roberta-base-finetuned", "metrics": ["recall"], "dataset_name": "cuad", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-11-07T01:26:47+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: 123tarunanand/roberta-base-finetuned
* Dataset: cuad
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @adrienheymans for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: 123tarunanand/roberta-base-finetuned\n* Dataset: cuad\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @adrienheymans for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: 123tarunanand/roberta-base-finetuned\n* Dataset: cuad\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @adrienheymans for evaluating this model."
] |
d740ca483f2af3a6b5cea2cba8c3662fb93021ad | # Dataset Card for "vlpr-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ThankGod/vlpr-dataset | [
"region:us"
] | 2022-11-07T03:45:43+00:00 | {"dataset_info": {"features": [{"name": "image_id", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "width", "dtype": "int64"}, {"name": "height", "dtype": "int64"}, {"name": "objects", "sequence": [{"name": "bbox_id", "dtype": "int64"}, {"name": "category", "dtype": {"class_label": {"names": {"0": "license_plate"}}}}, {"name": "bbox", "sequence": "float64", "length": 4}, {"name": "area", "dtype": "float64"}]}], "splits": [{"name": "train", "num_bytes": 9147825.0, "num_examples": 54}], "download_size": 9149130, "dataset_size": 9147825.0}} | 2022-11-17T08:06:40+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "vlpr-dataset"
More Information needed | [
"# Dataset Card for \"vlpr-dataset\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"vlpr-dataset\"\n\nMore Information needed"
] |
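The ThankGod/vlpr-dataset record above exposes a COCO-like detection schema (`image`, `objects.bbox`, `objects.category`, `objects.area`). Below is a sketch of drawing the annotated plates on one example; it assumes the boxes are stored as `[x, y, width, height]` (the card does not state the convention) and that the repo loads directly with `datasets`.

```python
# Sketch: draw the annotated boxes from one example of the detection schema
# above. The [x, y, width, height] box convention is an assumption.
from datasets import load_dataset
from PIL import ImageDraw

ds = load_dataset("ThankGod/vlpr-dataset", split="train")
example = ds[0]

image = example["image"].copy()
draw = ImageDraw.Draw(image)
for box in example["objects"]["bbox"]:
    x, y, w, h = box
    draw.rectangle([x, y, x + w, y + h], outline="red", width=3)
image.save("plate_example.png")
```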
0f70b23014485c74cb168659aeb4ae8b2bb9338a | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-350m_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v3
* Config: mathemakitten--winobias_antistereotype_test_cot_v3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v3-math-468e93-2011366585 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T06:34:17+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v3"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-350m_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v3", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T06:35:59+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-350m_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v3
* Config: mathemakitten--winobias_antistereotype_test_cot_v3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-350m_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v3\n* Config: mathemakitten--winobias_antistereotype_test_cot_v3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-350m_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v3\n* Config: mathemakitten--winobias_antistereotype_test_cot_v3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
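The zero-shot evaluator records above and below all map their data to the same three columns: `text`, `classes` and `target`. One common way to score such a task with a plain causal language model is to append each candidate class to the text and pick the class whose continuation gets the highest log-likelihood. The sketch below only illustrates that idea; it is not claimed to be how AutoTrain or the inverse-scaling evaluation harness actually computes its predictions, and the tiny example and the choice of `facebook/opt-125m` are purely illustrative.

```python
# Illustrative only: score {text, classes, target} examples by appending each
# class to the text and comparing continuation log-likelihoods under a causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-125m"  # small model, purely for illustration
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def continuation_logprob(prompt: str, continuation: str) -> float:
    """Approximate log P(continuation | prompt) under the causal LM."""
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    full_ids = tok(prompt + continuation, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)  # position i predicts token i+1
    targets = full_ids[0, 1:]
    cont_len = full_ids.shape[1] - prompt_ids.shape[1]
    picked = log_probs[-cont_len:].gather(1, targets[-cont_len:].unsqueeze(1))
    return picked.sum().item()

# Hypothetical example in the {text, classes, target} shape used by these records.
example = {"text": "The capital of France is", "classes": [" Paris.", " Berlin."], "target": 0}
scores = [continuation_logprob(example["text"], c) for c in example["classes"]]
prediction = max(range(len(scores)), key=scores.__getitem__)
print("predicted class:", prediction, "| correct:", prediction == example["target"])
```

Tokenizing the prompt and the prompt-plus-continuation separately, as done here, can miscount tokens when the boundary merges; a more careful implementation would tokenize once and track the continuation offsets explicitly.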
0478bf1b7ee64012b862a64c61376ba8e4b81cef | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-13b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v3
* Config: mathemakitten--winobias_antistereotype_test_cot_v3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v3-math-468e93-2011366584 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T06:34:19+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v3"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-13b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v3", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T07:04:04+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-13b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v3
* Config: mathemakitten--winobias_antistereotype_test_cot_v3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-13b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v3\n* Config: mathemakitten--winobias_antistereotype_test_cot_v3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-13b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v3\n* Config: mathemakitten--winobias_antistereotype_test_cot_v3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
d3dcec73a9f84f887dd40da86b11926bd9c39ea8 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-1.3b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v3
* Config: mathemakitten--winobias_antistereotype_test_cot_v3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v3-math-468e93-2011366588 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T06:34:19+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v3"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-1.3b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v3", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T06:38:23+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-1.3b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v3
* Config: mathemakitten--winobias_antistereotype_test_cot_v3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-1.3b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v3\n* Config: mathemakitten--winobias_antistereotype_test_cot_v3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-1.3b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v3\n* Config: mathemakitten--winobias_antistereotype_test_cot_v3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
9528cc5a986594568e09e1e68d994190c0016c39 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-125m_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v3
* Config: mathemakitten--winobias_antistereotype_test_cot_v3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v3-math-468e93-2011366581 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T06:34:19+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v3"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-125m_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v3", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T06:35:22+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-125m_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v3
* Config: mathemakitten--winobias_antistereotype_test_cot_v3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-125m_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v3\n* Config: mathemakitten--winobias_antistereotype_test_cot_v3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-125m_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v3\n* Config: mathemakitten--winobias_antistereotype_test_cot_v3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
33d75241af98f80560bf0740ceccc7c6e8039c6e | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-30b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v3
* Config: mathemakitten--winobias_antistereotype_test_cot_v3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v3-math-468e93-2011366582 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T06:34:19+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v3"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-30b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v3", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T07:45:13+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-30b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v3
* Config: mathemakitten--winobias_antistereotype_test_cot_v3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-30b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v3\n* Config: mathemakitten--winobias_antistereotype_test_cot_v3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-30b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v3\n* Config: mathemakitten--winobias_antistereotype_test_cot_v3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
5ce28162a971171ec4ebaa843086933f44514bdc | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-2.7b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v3
* Config: mathemakitten--winobias_antistereotype_test_cot_v3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v3-math-468e93-2011366587 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T06:34:20+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v3"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-2.7b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v3", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T06:41:03+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-2.7b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v3
* Config: mathemakitten--winobias_antistereotype_test_cot_v3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-2.7b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v3\n* Config: mathemakitten--winobias_antistereotype_test_cot_v3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-2.7b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v3\n* Config: mathemakitten--winobias_antistereotype_test_cot_v3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
d39f59b98fe3d3de23022816e0b7628e997be832 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-6.7b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v3
* Config: mathemakitten--winobias_antistereotype_test_cot_v3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v3-math-468e93-2011366586 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T06:34:30+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v3"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-6.7b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v3", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T06:50:47+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-6.7b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v3
* Config: mathemakitten--winobias_antistereotype_test_cot_v3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-6.7b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v3\n* Config: mathemakitten--winobias_antistereotype_test_cot_v3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-6.7b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v3\n* Config: mathemakitten--winobias_antistereotype_test_cot_v3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
c369a14220f7ffb54a945a9c116080200e449160 | # Dataset Card for "en-bg"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | popaqy/en-bg | [
"region:us"
] | 2022-11-07T07:41:12+00:00 | {"dataset_info": {"features": [{"name": "bg", "dtype": "string"}, {"name": "en", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 175001915, "num_examples": 408290}], "download_size": 82909795, "dataset_size": 175001915}} | 2022-11-07T07:43:16+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "en-bg"
More Information needed | [
"# Dataset Card for \"en-bg\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"en-bg\"\n\nMore Information needed"
] |
b9984b8d2a95e4a1879e1b071e9433858d0bc24a |
This dataset repository contains a subset of the UCF-101 dataset [1]. The subset archive was obtained using the code from [this guide](https://www.tensorflow.org/tutorials/load_data/video).
### References
[1] UCF101: A Dataset of 101 Human Actions Classes From Videos in The Wild, https://arxiv.org/abs/1212.0402. | sayakpaul/ucf101-subset | [
"license:apache-2.0",
"arxiv:1212.0402",
"region:us"
] | 2022-11-07T07:48:27+00:00 | {"license": "apache-2.0"} | 2022-12-19T09:51:35+00:00 | [
"1212.0402"
] | [] | TAGS
#license-apache-2.0 #arxiv-1212.0402 #region-us
|
This dataset repository contains a subset of the UCF-101 dataset [1]. The subset archive was obtained using the code from this guide.
### References
[1] UCF101: A Dataset of 101 Human Actions Classes From Videos in The Wild, URL | [
"### References\n \n [1] UCF101: A Dataset of 101 Human Actions Classes From Videos in The Wild, URL"
] | [
"TAGS\n#license-apache-2.0 #arxiv-1212.0402 #region-us \n",
"### References\n \n [1] UCF101: A Dataset of 101 Human Actions Classes From Videos in The Wild, URL"
] |
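Since the sayakpaul/ucf101-subset card only says the archive was produced with the TensorFlow video-loading guide, a reasonable first step is to download the repository and look at what it actually contains. A minimal sketch using `huggingface_hub`:

```python
# Sketch: pull the dataset repo locally and list its files. The archive layout
# is not described in the card, so this only inspects what is actually there.
from pathlib import Path
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="sayakpaul/ucf101-subset", repo_type="dataset")
for path in sorted(Path(local_dir).rglob("*")):
    if path.is_file():
        print(path.relative_to(local_dir))
```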
044daee39e83f3e8bbe83f1f3e90843b903b44b6 | # Dataset Card for "europarl-bg-en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | popaqy/europarl-bg-en | [
"region:us"
] | 2022-11-07T07:57:26+00:00 | {"dataset_info": {"features": [{"name": "bg", "dtype": "string"}, {"name": "en", "dtype": "string"}, {"name": "sentence_len", "dtype": "int64"}, {"name": "clear", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 178319272, "num_examples": 408290}], "download_size": 83310937, "dataset_size": 178319272}} | 2022-11-07T08:04:07+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "europarl-bg-en"
More Information needed | [
"# Dataset Card for \"europarl-bg-en\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"europarl-bg-en\"\n\nMore Information needed"
] |
d03b4dd788c7bcc417aa2bd9a43c2b58033a7bef | Based on the repository in https://github.com/bnitsan/PaperTweet/
Every entry in the dataset represents a Twitter thread written about a new paper on arXiv, likely by one of the original authors.
---
license: mit
---
| nitsanb/paper_tweet | [
"region:us"
] | 2022-11-07T09:02:56+00:00 | {} | 2022-11-07T09:39:31+00:00 | [] | [] | TAGS
#region-us
| Based on the repository at URL
Every entry in the dataset represents a Twitter thread written about a new paper on arXiv, likely by one of the original authors.
---
license: mit
---
| [] | [
"TAGS\n#region-us \n"
] |
5ffc27f405dd8765dc35fd678bce103e26403865 |
Rocks dataset with 7 classes: [Coal, Limestone, Marble, Sandstone, Quartzite, Basalt, Granite]
| udayl/rocks | [
"license:mit",
"region:us"
] | 2022-11-07T09:06:56+00:00 | {"license": "mit"} | 2022-11-07T09:15:20+00:00 | [] | [] | TAGS
#license-mit #region-us
|
Rocks dataset with 7 classes: [Coal, Limestone, Marble, Sandstone, Quartzite, Basalt, Granite]
| [] | [
"TAGS\n#license-mit #region-us \n"
] |
e91bfcac4e871fb739e6f0e277b2134f59ef13ec | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-2.7b
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en-6f8c6a-2012266598 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T09:13:56+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-2.7b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T10:09:27+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-2.7b
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-2.7b\n* Dataset: futin/guess\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-2.7b\n* Dataset: futin/guess\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
a55c0a73858f1bf4350e7d278f7f0eccbd1b3ef2 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-350m
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi-4200fb-2012366608 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T09:13:56+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-350m", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T09:55:00+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-350m
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-350m\n* Dataset: futin/guess\n* Config: vi\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-350m\n* Dataset: futin/guess\n* Config: vi\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
f39559ec547386ac00c2d756fa3640ae5d7ce3ab | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-13b
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en-6f8c6a-2012266596 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T09:13:56+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-13b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T13:20:45+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-13b
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-13b\n* Dataset: futin/guess\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-13b\n* Dataset: futin/guess\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
d6a694e106fe23d4fb1b77906a54105c112c81f0 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-30b
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi-4200fb-2012366603 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T09:13:57+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-30b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-08T05:49:46+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-30b
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-30b\n* Dataset: futin/guess\n* Config: vi\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-30b\n* Dataset: futin/guess\n* Config: vi\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
a26f906a89a8ad319882e66c4536430682e10ef9 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-30b
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en-6f8c6a-2012266595 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T09:13:57+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-30b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T18:30:26+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-30b
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-30b\n* Dataset: futin/guess\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-30b\n* Dataset: futin/guess\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
2dd1a45cc2633662ca009e6639e4da519cd7f273 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-1.3b
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en-6f8c6a-2012266599 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T09:13:57+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-1.3b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T09:54:20+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-1.3b
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-1.3b\n* Dataset: futin/guess\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-1.3b\n* Dataset: futin/guess\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
9dbf085c474f5e385751fe20b32ab88270e11553 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-13b
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi-4200fb-2012366604 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T09:13:57+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-13b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T17:07:15+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-13b
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-13b\n* Dataset: futin/guess\n* Config: vi\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-13b\n* Dataset: futin/guess\n* Config: vi\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
61bb27e8f82641e484cbd89bfb3f2196646eeb58 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-125m
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en-6f8c6a-2012266601 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T09:13:57+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-125m", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T09:22:01+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-125m
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-125m\n* Dataset: futin/guess\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-125m\n* Dataset: futin/guess\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
395f330c131afd21a1868c328e85328fc06b472d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-2.7b
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi-4200fb-2012366606 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T09:13:58+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-2.7b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T11:15:03+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-2.7b
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-2.7b\n* Dataset: futin/guess\n* Config: vi\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-2.7b\n* Dataset: futin/guess\n* Config: vi\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
fb66d9176a74921eeaeffd525c3bb4d00fdb25e6 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-6.7b
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi-4200fb-2012366605 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T09:13:58+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-6.7b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T13:47:33+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-6.7b
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-6.7b\n* Dataset: futin/guess\n* Config: vi\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-6.7b\n* Dataset: futin/guess\n* Config: vi\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
2a2d8bca11ab1639ce0caa5c5d1e97751433f6e2 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-6.7b
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en-6f8c6a-2012266597 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T09:13:58+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-6.7b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T11:54:59+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-6.7b
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-6.7b\n* Dataset: futin/guess\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-6.7b\n* Dataset: futin/guess\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
c330f79de6bb67f52fb257ed995c2a14a85ca149 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-66b
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en-6f8c6a-2012266594 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T09:13:59+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-66b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-08T06:00:32+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-66b
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-66b\n* Dataset: futin/guess\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-66b\n* Dataset: futin/guess\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
696b517c95bf6aab100f547d938abef511687f86 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-1.3b
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi-4200fb-2012366607 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T09:13:59+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-1.3b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T10:34:07+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-1.3b
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-1.3b\n* Dataset: futin/guess\n* Config: vi\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-1.3b\n* Dataset: futin/guess\n* Config: vi\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
015f444197ecc37c81070714ac1c329aad00fa35 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-350m
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en-6f8c6a-2012266600 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T09:14:03+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-350m", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T09:29:28+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-350m
* Dataset: futin/guess
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-350m\n* Dataset: futin/guess\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-350m\n* Dataset: futin/guess\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
eb100ee78df88d98359981baece7dca4a77726df | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-125m
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi-4200fb-2012366609 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T09:30:31+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-125m", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T09:54:16+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-125m
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-125m\n* Dataset: futin/guess\n* Config: vi\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-125m\n* Dataset: futin/guess\n* Config: vi\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
039e2bcdb13add2922938792f533d7c83c15845d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-66b
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en_3-fcaae9-2012466610 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T09:38:35+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-66b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T21:40:41+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-66b
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-66b\n* Dataset: futin/guess\n* Config: en_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-66b\n* Dataset: futin/guess\n* Config: en_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
ed7ea0413ac649b9e948792bf8f2fcd3ae8de093 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-6.7b
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en_3-fcaae9-2012466613 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T10:01:58+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-6.7b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T11:34:37+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-6.7b
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-6.7b\n* Dataset: futin/guess\n* Config: en_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-6.7b\n* Dataset: futin/guess\n* Config: en_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
3d1373f9fc083be53a80c3a87ef813655f586585 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-13b
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en_3-fcaae9-2012466612 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T10:02:01+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-13b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T12:21:25+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-13b
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-13b\n* Dataset: futin/guess\n* Config: en_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-13b\n* Dataset: futin/guess\n* Config: en_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
a73c37eba883fdee7dea82ff92571db441a8f4da | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-30b
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en_3-fcaae9-2012466611 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T10:02:06+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-30b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T15:23:23+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-30b
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-30b\n* Dataset: futin/guess\n* Config: en_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-30b\n* Dataset: futin/guess\n* Config: en_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
e3834ddd9ef488efb339f1081015346d8fd868cf | ## Dataset description
Dataset containing SPOUT knotted (positive) and Rossmann unknotted (negative) proteins.
| EvaKlimentova/knots_SPOUTxRossmann | [
"region:us"
] | 2022-11-07T10:05:06+00:00 | {} | 2022-11-11T08:11:01+00:00 | [] | [] | TAGS
#region-us
| ## Dataset description
Dataset containing SPOUT knotted (positive) and Rossmann unknotted (negative) proteins.
| [
"## Dataset description\n\nDatataset containing SPOUT knotted (positive) and Rossmann unknotted (negative) proteins."
] | [
"TAGS\n#region-us \n",
"## Dataset description\n\nDatataset containing SPOUT knotted (positive) and Rossmann unknotted (negative) proteins."
] |
de90943f076255e5ccc9c5579999093ff86c57e3 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-2.7b
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en_3-fcaae9-2012466614 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T10:18:07+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-2.7b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T10:49:25+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-2.7b
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-2.7b\n* Dataset: futin/guess\n* Config: en_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-2.7b\n* Dataset: futin/guess\n* Config: en_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
8379b1e2b6c1050bafc3368aff20b3c470bf270f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-1.3b
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en_3-fcaae9-2012466615 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T10:42:26+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-1.3b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T11:05:11+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-1.3b
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-1.3b\n* Dataset: futin/guess\n* Config: en_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-1.3b\n* Dataset: futin/guess\n* Config: en_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
a5d439de7d37530a429d374ca5d79ddfcf2c6746 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-350m
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en_3-fcaae9-2012466616 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T10:57:21+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-350m", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T11:06:14+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-350m
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-350m\n* Dataset: futin/guess\n* Config: en_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-350m\n* Dataset: futin/guess\n* Config: en_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
e7718716717bb01209e1282f8d34a53b5e5e334e | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-125m
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-en_3-fcaae9-2012466617 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T11:13:15+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-125m", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "en_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T11:17:46+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-125m
* Dataset: futin/guess
* Config: en_3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-125m\n* Dataset: futin/guess\n* Config: en_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-125m\n* Dataset: futin/guess\n* Config: en_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
4d1fc63d115a7b05160a7dd57eef36033ac92013 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-66b
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi_3-6b1064-2012566618 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T11:13:22+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-66b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-08T09:39:15+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-66b
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-66b\n* Dataset: futin/guess\n* Config: vi_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-66b\n* Dataset: futin/guess\n* Config: vi_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
25277e0705b169505aff30510add82e3fb10e7aa | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-30b
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi_3-6b1064-2012566619 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T11:23:16+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-30b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T23:30:24+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-30b
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-30b\n* Dataset: futin/guess\n* Config: vi_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-30b\n* Dataset: futin/guess\n* Config: vi_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
72c86a7e2c7b3452e22da4b75005c6270d6563c2 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-13b
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi_3-6b1064-2012566620 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T11:26:14+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-13b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T15:53:29+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-13b
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-13b\n* Dataset: futin/guess\n* Config: vi_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-13b\n* Dataset: futin/guess\n* Config: vi_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
0b741a2cfe29293da11fd97f3de3928c6a9be645 | # Dataset Card for "petitions-ds"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | eminecg/petitions-ds-v1 | [
"region:us"
] | 2022-11-07T11:42:39+00:00 | {"dataset_info": {"features": [{"name": "petition", "dtype": "string"}, {"name": "petition_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 30642006.6, "num_examples": 2484}, {"name": "validation", "num_bytes": 3404667.4, "num_examples": 276}], "download_size": 15766696, "dataset_size": 34046674.0}} | 2022-11-07T15:13:37+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "petitions-ds"
More Information needed | [
"# Dataset Card for \"petitions-ds\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"petitions-ds\"\n\nMore Information needed"
] |
6fb735b46951e0d9c3a5fac8e26228b4f39c0c3a | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-6.7b
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi_3-6b1064-2012566621 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T11:42:59+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-6.7b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T14:15:29+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-6.7b
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-6.7b\n* Dataset: futin/guess\n* Config: vi_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-6.7b\n* Dataset: futin/guess\n* Config: vi_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
55817960b45bea6f432f3dfb94c0ebdc39a1f078 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-2.7b
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi_3-6b1064-2012566622 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T12:03:14+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-2.7b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T13:10:12+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-2.7b
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-2.7b\n* Dataset: futin/guess\n* Config: vi_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-2.7b\n* Dataset: futin/guess\n* Config: vi_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
c6dfdee3276b2433a65ab83b4e3e31fc0c7d39a0 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-1.3b
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi_3-6b1064-2012566623 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T12:29:52+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-1.3b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T13:13:35+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-1.3b
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-1.3b\n* Dataset: futin/guess\n* Config: vi_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-1.3b\n* Dataset: futin/guess\n* Config: vi_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
fec8f5bb1ea4e8f2cc868c685c1873deb78d2712 |
# Dataset Card for laion2B-multi-turkish-subset
## Dataset Description
- **Homepage:** [laion-5b](https://laion.ai/blog/laion-5b/)
- **Huggingface:** [laion/laion2B-multi](https://huggingface.co/datasets/laion/laion2B-multi)
- **Point of Contact:** [mcemilg](mailto:[email protected])
### Dataset Summary
[LAION-5B](https://laion.ai/blog/laion-5b/) is a large-scale, openly accessible image-text dataset that contains text in multiple languages. This is a Turkish subset of [laion/laion2B-multi](https://huggingface.co/datasets/laion/laion2B-multi). It can be used with [img2dataset](https://github.com/rom1504/img2dataset) to fetch the images at scale.
### Data Structure
```python
DatasetDict({
train: Dataset({
features: ['SAMPLE_ID', 'URL', 'TEXT', 'HEIGHT', 'WIDTH', 'LICENSE', 'LANGUAGE', 'NSFW', 'similarity'],
num_rows: 34638627
})
})
```
```python
{
'SAMPLE_ID': Value(dtype='int64', id=None),
'URL': Value(dtype='string', id=None),
'TEXT': Value(dtype='string', id=None),
'HEIGHT': Value(dtype='int64', id=None),
'WIDTH': Value(dtype='int64', id=None),
'LICENSE': Value(dtype='string', id=None),
'LANGUAGE': Value(dtype='string', id=None),
'NSFW': Value(dtype='string', id=None),
'similarity': Value(dtype='float64', id=None)
}
```
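For reference, the metadata above can be pulled with the `datasets` library and reduced to the two columns img2dataset consumes (the image URL and the caption). This is a minimal sketch added for illustration, not part of the original release: the output file name is arbitrary, and the img2dataset flags in the comment follow its public documentation, so treat them as assumptions.

```python
from datasets import load_dataset

# Load the metadata for the Turkish subset (URLs + captions, not the images themselves).
ds = load_dataset("mcemilg/laion2B-multi-turkish-subset", split="train")

# Keep only the columns needed to download images: the URL and the Turkish caption.
urls = ds.remove_columns([c for c in ds.column_names if c not in ("URL", "TEXT")])
urls.to_parquet("laion2B_tr_urls.parquet")

# The images can then be fetched at scale with the img2dataset CLI, for example:
#   img2dataset --url_list laion2B_tr_urls.parquet --input_format parquet \
#       --url_col URL --caption_col TEXT \
#       --output_format webdataset --output_folder laion2B_tr_images
```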
### Notes
The data was processed to drop non-Turkish and irrelevant texts before being published. Both the [FastText](https://fasttext.cc/docs/en/language-identification.html) and [langdetect](https://pypi.org/project/langdetect/) libraries were used to identify whether a text is Turkish. The cleaning process can be summarized as follows (a minimal sketch of this filtering is shown after the list):
- replace \"\"\" with empty str
- remove URLs in texts
- Drop a row if both FastText and langdetect are highly confident that there is no Turkish in the text.
- Drop empty text fields.
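The sketch below shows one way the rules above could be implemented with FastText and langdetect. It is illustrative only: the confidence threshold, the helper name, and the use of the `lid.176.bin` language-ID model are assumptions, not the exact script used to build this subset.

```python
import re
from typing import Optional

import fasttext                       # pip install fasttext
from langdetect import detect_langs   # pip install langdetect

# Pretrained FastText language-ID model, downloaded separately (lid.176.bin).
FT_MODEL = fasttext.load_model("lid.176.bin")
URL_RE = re.compile(r"https?://\S+")
THRESHOLD = 0.9  # assumed confidence cut-off, not taken from the original card


def clean_caption(text: str) -> Optional[str]:
    """Return the cleaned caption, or None if the row should be dropped."""
    text = text.replace('"""', "")       # replace """ with empty str
    text = URL_RE.sub("", text).strip()  # remove URLs in texts
    if not text:                         # drop empty text fields
        return None

    # FastText labels look like "__label__tr"; predict() does not accept newlines.
    ft_labels, ft_probs = FT_MODEL.predict(text.replace("\n", " "))
    ft_not_turkish = ft_labels[0] != "__label__tr" and ft_probs[0] > THRESHOLD

    # langdetect returns a ranked list of objects with .lang and .prob attributes.
    try:
        best = detect_langs(text)[0]
        ld_not_turkish = best.lang != "tr" and best.prob > THRESHOLD
    except Exception:
        ld_not_turkish = True

    # Drop only when BOTH detectors are highly confident the text is not Turkish.
    if ft_not_turkish and ld_not_turkish:
        return None
    return text
```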
### License
CC-BY-4.0
| mcemilg/laion2B-multi-turkish-subset | [
"task_categories:text-to-image",
"task_categories:image-to-text",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"language:tr",
"license:cc-by-4.0",
"region:us"
] | 2022-11-07T13:05:52+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["tr"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10M<n<100M"], "task_categories": ["text-to-image", "image-to-text"], "pretty_name": "laion2B-multi-turkish-subset"} | 2022-11-08T05:47:01+00:00 | [] | [
"tr"
] | TAGS
#task_categories-text-to-image #task_categories-image-to-text #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10M<n<100M #language-Turkish #license-cc-by-4.0 #region-us
|
# Dataset Card for laion2B-multi-turkish-subset
## Dataset Description
- Homepage: laion-5b
- Huggingface: laion/laion2B-multi
- Point of Contact: mcemilg
### Dataset Summary
LAION-5B is a large-scale, openly accessible image-text dataset that contains text in multiple languages. This is a Turkish subset of laion/laion2B-multi. It can be used with img2dataset to fetch the images at scale.
### Data Structure
### Notes
The data was processed to drop non-Turkish and irrelevant texts before being published. Both the FastText and langdetect libraries were used to identify whether a text is Turkish. The cleaning process can be summarized as follows:
- replace \"\"\" with empty str
- remove URLs in texts
- Drop a row if both FastText and langdetect are highly confident that there is no Turkish in the text.
- Drop empty text fields.
### License
CC-BY-4.0
| [
"# Dataset Card for laion2B-multi-turkish-subset",
"## Dataset Description\n\n- Homepage: laion-5b\n- Huggingface: laion/laion2B-multi\n- Point of Contact: mcemilg",
"### Dataset Summary\n\nLAION-5B is a large scale openly accessible image-text dataset contains text from multiple languages. This is a Turkish subset data of laion/laion2B-multi. It's compatible to be used with image2dataset to fetch the images at scale.",
"### Data Structure",
"### Notes\n\nThe data was basically processed to drop non-Turkish and irrelevant texts before published. Both FastText and langdetect libraries were used to identify if the text is Turkish or not. The cleaning process can be summarized as follows:\n\n- replace \\\"\\\"\\\" with empty str\n- remove URLs in texts\n- Drop if both FastText and LangDetect are highly confident with there is no Turkish in text.\n- Drop empty text fields.",
"### License\nCC-BY-4.0"
] | [
"TAGS\n#task_categories-text-to-image #task_categories-image-to-text #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10M<n<100M #language-Turkish #license-cc-by-4.0 #region-us \n",
"# Dataset Card for laion2B-multi-turkish-subset",
"## Dataset Description\n\n- Homepage: laion-5b\n- Huggingface: laion/laion2B-multi\n- Point of Contact: mcemilg",
"### Dataset Summary\n\nLAION-5B is a large scale openly accessible image-text dataset contains text from multiple languages. This is a Turkish subset data of laion/laion2B-multi. It's compatible to be used with image2dataset to fetch the images at scale.",
"### Data Structure",
"### Notes\n\nThe data was basically processed to drop non-Turkish and irrelevant texts before published. Both FastText and langdetect libraries were used to identify if the text is Turkish or not. The cleaning process can be summarized as follows:\n\n- replace \\\"\\\"\\\" with empty str\n- remove URLs in texts\n- Drop if both FastText and LangDetect are highly confident with there is no Turkish in text.\n- Drop empty text fields.",
"### License\nCC-BY-4.0"
] |
fc5403fde3fa41ff2746b053fc6c21bb2e4082fb | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-350m
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi_3-6b1064-2012566624 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T13:19:17+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-350m", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T13:43:07+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-350m
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-350m\n* Dataset: futin/guess\n* Config: vi_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-350m\n* Dataset: futin/guess\n* Config: vi_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
40799b6c0e33e5987c90fa0dab4f9d9b903d09d2 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-125m
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi_3-6b1064-2012566625 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T13:21:53+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-125m", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi_3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T13:34:54+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-125m
* Dataset: futin/guess
* Config: vi_3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-125m\n* Dataset: futin/guess\n* Config: vi_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-125m\n* Dataset: futin/guess\n* Config: vi_3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
b93dc0317cb147a3c53de17c629714518effba9e | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-66b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v3
* Config: mathemakitten--winobias_antistereotype_test_cot_v3
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v3-math-237e7b-2016766699 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T17:08:50+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v3"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-66b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v3", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v3", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T19:44:52+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-66b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v3
* Config: mathemakitten--winobias_antistereotype_test_cot_v3
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-66b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v3\n* Config: mathemakitten--winobias_antistereotype_test_cot_v3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-66b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v3\n* Config: mathemakitten--winobias_antistereotype_test_cot_v3\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
54a392875563c471178438637212a270361715b3 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-13b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v2
* Config: mathemakitten--winobias_antistereotype_test_cot_v2
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v2-math-db74ac-2016866701 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T17:20:36+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v2"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-13b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v2", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v2", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T17:46:24+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-13b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v2
* Config: mathemakitten--winobias_antistereotype_test_cot_v2
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-13b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v2\n* Config: mathemakitten--winobias_antistereotype_test_cot_v2\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-13b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v2\n* Config: mathemakitten--winobias_antistereotype_test_cot_v2\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
0d4d16e9ffdb156fdc3ece80942469517125c43a | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-2.7b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v2
* Config: mathemakitten--winobias_antistereotype_test_cot_v2
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v2-math-db74ac-2016866704 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T17:20:37+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v2"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-2.7b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v2", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v2", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T17:25:57+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-2.7b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v2
* Config: mathemakitten--winobias_antistereotype_test_cot_v2
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-2.7b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v2\n* Config: mathemakitten--winobias_antistereotype_test_cot_v2\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-2.7b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v2\n* Config: mathemakitten--winobias_antistereotype_test_cot_v2\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
fed472106d3a2aa869b81140bec2dedebebaeb64 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-350m_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v2
* Config: mathemakitten--winobias_antistereotype_test_cot_v2
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v2-math-db74ac-2016866705 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T17:20:40+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v2"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-350m_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v2", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v2", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T17:21:51+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-350m_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v2
* Config: mathemakitten--winobias_antistereotype_test_cot_v2
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-350m_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v2\n* Config: mathemakitten--winobias_antistereotype_test_cot_v2\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-350m_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v2\n* Config: mathemakitten--winobias_antistereotype_test_cot_v2\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
9cc646a9fac38deb1980f415a056a9cdc7992cdb | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-125m_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v2
* Config: mathemakitten--winobias_antistereotype_test_cot_v2
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v2-math-db74ac-2016866706 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T17:20:40+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v2"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-125m_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v2", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v2", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T17:21:20+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-125m_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v2
* Config: mathemakitten--winobias_antistereotype_test_cot_v2
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-125m_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v2\n* Config: mathemakitten--winobias_antistereotype_test_cot_v2\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-125m_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v2\n* Config: mathemakitten--winobias_antistereotype_test_cot_v2\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
5ad77af9bbbb64d8e091b6d2a1bb0d5be78e3ec6 |
<h4> Usage </h4>
To use this embedding, download the file and put it into the "\stable-diffusion-webui\embeddings" folder.
To use it in a prompt, add
<em style="font-weight:600">art by slime_style</em>
Add <b>[ ]</b> around it to reduce its weight.
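For example, a full prompt could look like the line below (the subject is only an illustration, not something specific to this embedding):
<pre>a medieval castle at sunset, art by slime_style</pre>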
<h4> Included Files </h4>
<ul>
<li>6500 steps <em>Usage: art by slime_style-6500</em></li>
<li>10,000 steps <em>Usage: art by slime_style</em> </li>
</ul>
cheers<br>
Wipeout
<h4> Example Pictures </h4>
<table>
<tbody>
<tr>
<td><img height="100%" width="100%" src="https://i.imgur.com/UU8lUKN.png"></td>
<td><img height="100%" width="100%" src="https://i.imgur.com/mrU4Ldw.png"></td>
<td><img height="100%" width="100%" src="https://i.imgur.com/TQEAKEa.png"></td>
<td><img height="100%" width="100%" src="https://i.imgur.com/gzRxFFd.png"></td>
</tr>
</tbody>
</table>
<h4> prompt comparison </h4>
<em> click the image to enlarge</em>
<a href="https://i.imgur.com/hHah7Dt.jpg" target="_blank"><img height="50%" width="50%" src="https://i.imgur.com/hHah7Dt.jpg"></a>
| zZWipeoutZz/slime_style | [
"license:creativeml-openrail-m",
"region:us"
] | 2022-11-07T17:20:43+00:00 | {"license": "creativeml-openrail-m"} | 2022-11-07T17:33:39+00:00 | [] | [] | TAGS
#license-creativeml-openrail-m #region-us
| #### Usage
To use this embedding you have to download the file and put it into the "\stable-diffusion-webui\embeddings" folder
To use it in a prompt add
*art by slime\_style*
add **[ ]** around it to reduce its weight.
#### Included Files
* 6500 steps *Usage: art by slime\_style-6500*
* 10,000 steps *Usage: art by slime\_style*
cheers
Wipeout
#### Example Pictures
#### prompt comparison
*click the image to enlarge*
[<img height="50%" width="50%" src="https://i.URL](https://i.URL target=) | [
"#### Usage\n\n\nTo use this embedding you have to download the file and put it into the \"\\stable-diffusion-webui\\embeddings\" folder\nTo use it in a prompt add\n*art by slime\\_style* \n\n\nadd **[ ]** around it to reduce its weight.",
"#### Included Files\n\n\n* 6500 steps *Usage: art by slime\\_style-6500*\n* 10,000 steps *Usage: art by slime\\_style*\n\n\ncheers \n\nWipeout",
"#### Example Pictures",
"#### prompt comparison\n\n\n *click the image to enlarge*\n[<img height=\"50%\" width=\"50%\" src=\"https://i.URL](https://i.URL target=)"
] | [
"TAGS\n#license-creativeml-openrail-m #region-us \n",
"#### Usage\n\n\nTo use this embedding you have to download the file and put it into the \"\\stable-diffusion-webui\\embeddings\" folder\nTo use it in a prompt add\n*art by slime\\_style* \n\n\nadd **[ ]** around it to reduce its weight.",
"#### Included Files\n\n\n* 6500 steps *Usage: art by slime\\_style-6500*\n* 10,000 steps *Usage: art by slime\\_style*\n\n\ncheers \n\nWipeout",
"#### Example Pictures",
"#### prompt comparison\n\n\n *click the image to enlarge*\n[<img height=\"50%\" width=\"50%\" src=\"https://i.URL](https://i.URL target=)"
] |
6a799ab10990312cf80f0d1eeb3eafbbc18eee6b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-6.7b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v2
* Config: mathemakitten--winobias_antistereotype_test_cot_v2
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v2-math-db74ac-2016866703 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T17:20:43+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v2"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-6.7b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v2", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v2", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T17:36:14+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-6.7b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v2
* Config: mathemakitten--winobias_antistereotype_test_cot_v2
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-6.7b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v2\n* Config: mathemakitten--winobias_antistereotype_test_cot_v2\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-6.7b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v2\n* Config: mathemakitten--winobias_antistereotype_test_cot_v2\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
428e91185514f77f81566ac2d1e269edbd5554fe | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-30b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v2
* Config: mathemakitten--winobias_antistereotype_test_cot_v2
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v2-math-db74ac-2016866702 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T17:20:48+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v2"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-30b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v2", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v2", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T18:18:12+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-30b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v2
* Config: mathemakitten--winobias_antistereotype_test_cot_v2
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-30b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v2\n* Config: mathemakitten--winobias_antistereotype_test_cot_v2\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-30b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v2\n* Config: mathemakitten--winobias_antistereotype_test_cot_v2\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
189ad8662fdb96cd19ce86ada7d8eabde2d69247 |
# Dataset Card for "LexFiles"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Specifications](#dataset-specifications)
## Dataset Description
- **Homepage:** https://github.com/coastalcph/lexlms
- **Repository:** https://github.com/coastalcph/lexlms
- **Paper:** https://arxiv.org/abs/xxx
- **Point of Contact:** [Ilias Chalkidis](mailto:[email protected])
### Dataset Summary
**Disclaimer: This is a pre-processed version of the LexFiles corpus (https://huggingface.co/datasets/lexlms/lexfiles), where documents are pre-split in chunks of 512 tokens.**
The LeXFiles is a new diverse English multinational legal corpus that we created, including 11 distinct sub-corpora that cover legislation and case law from 6 primarily English-speaking legal systems (EU, CoE, Canada, US, UK, India).
The corpus contains approx. 19 billion tokens. In comparison, the "Pile of Law" corpus released by Henderson et al. (2022) comprises 32 billion tokens in total, where the majority (26/30) of sub-corpora come from the United States of America (USA), hence the corpus as a whole is biased towards the US legal system in general, and the federal or state jurisdiction in particular, to a significant extent.
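A minimal, unofficial loading sketch with the 🤗 `datasets` library is shown below; the config name comes from the list in this card's metadata, while the `train` split and the `text` column name are assumptions:

```python
from datasets import load_dataset

# Stream one configuration of the pre-processed corpus (each example is a
# chunk of roughly 512 tokens). Config names such as "eu_legislation" are
# listed in this card's metadata; the split and column names are assumptions.
ds = load_dataset("lexlms/lex_files_preprocessed", "eu_legislation",
                  split="train", streaming=True)
for i, example in enumerate(ds):
    print(example["text"][:200])
    if i == 2:
        break
```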
### Dataset Specifications
| Corpus | Corpus alias | Documents | Tokens | Pct. | Sampl. (a=0.5) | Sampl. (a=0.2) |
|-----------------------------------|----------------------|-----------|--------|--------|----------------|----------------|
| EU Legislation | `eu-legislation` | 93.7K | 233.7M | 1.2% | 5.0% | 8.0% |
| EU Court Decisions | `eu-court-cases` | 29.8K | 178.5M | 0.9% | 4.3% | 7.6% |
| ECtHR Decisions | `ecthr-cases` | 12.5K | 78.5M | 0.4% | 2.9% | 6.5% |
| UK Legislation | `uk-legislation` | 52.5K | 143.6M | 0.7% | 3.9% | 7.3% |
| UK Court Decisions | `uk-court-cases` | 47K | 368.4M | 1.9% | 6.2% | 8.8% |
| Indian Court Decisions | `indian-court-cases` | 34.8K | 111.6M | 0.6% | 3.4% | 6.9% |
| Canadian Legislation | `canadian-legislation` | 6K | 33.5M | 0.2% | 1.9% | 5.5% |
| Canadian Court Decisions | `canadian-court-cases` | 11.3K | 33.1M | 0.2% | 1.8% | 5.4% |
| U.S. Court Decisions [1] | `court-listener` | 4.6M | 11.4B | 59.2% | 34.7% | 17.5% |
| U.S. Legislation | `us-legislation` | 518 | 1.4B | 7.4% | 12.3% | 11.5% |
| U.S. Contracts | `us-contracts` | 622K | 5.3B | 27.3% | 23.6% | 15.0% |
| Total | `lexlms/lexfiles` | 5.8M | 18.8B | 100% | 100% | 100% |
[1] We consider only U.S. Court Decisions from 1965 onwards (cf. post Civil Rights Act), as a hard threshold for cases relying on severely out-dated and in many cases harmful law standards. The rest of the corpora include more recent documents.
[2] Sampling (Sampl.) ratios are computed following the exponential sampling introduced by Lample et al. (2019).
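For reference, a minimal sketch of that smoothing rule follows, assuming the usual formulation: with q_i the token share of sub-corpus i, the sampling probability is p_i ∝ q_i^a, so lower values of a up-sample the smaller corpora. The token counts passed in the example are taken (in millions) from the table above:

```python
def sampling_ratios(token_counts, a=0.5):
    # p_i proportional to (n_i / N) ** a, renormalised so the ratios sum to 1.
    total = sum(token_counts.values())
    shares = {k: v / total for k, v in token_counts.items()}
    z = sum(q ** a for q in shares.values())
    return {k: (q ** a) / z for k, q in shares.items()}

# Two-corpus illustration with token counts (in millions) from the table above.
print(sampling_ratios({"eu-legislation": 233.7, "ecthr-cases": 78.5}, a=0.5))
```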
Additional corpora not considered for pre-training, since they do not represent factual legal knowledge.
| Corpus | Corpus alias | Documents | Tokens |
|----------------------------------------|------------------------|-----------|--------|
| Legal web pages from C4 | `legal-c4` | 284K | 340M |
### Citation
[*Ilias Chalkidis\*, Nicolas Garneau\*, Catalina E.C. Goanta, Daniel Martin Katz, and Anders Søgaard.*
*LeXFiles and LegalLAMA: Facilitating English Multinational Legal Language Model Development.*
*2023. In the Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. Toronto, Canada.*](https://aclanthology.org/xxx/)
```
@inproceedings{chalkidis-garneau-etal-2023-lexlms,
title = {{LeXFiles and LegalLAMA: Facilitating English Multinational Legal Language Model Development}},
author = "Chalkidis*, Ilias and
Garneau*, Nicolas and
Goanta, Catalina and
Katz, Daniel Martin and
Søgaard, Anders",
    booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics",
month = june,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/xxx",
}
``` | lexlms/lex_files_preprocessed | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:extended",
"language:en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2022-11-07T17:27:54+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["extended"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "pretty_name": "LexFiles", "configs": ["eu_legislation", "eu_court_cases", "uk_legislation", "uk_court_cases", "us_legislation", "us_court_cases", "us_contracts", "canadian_legislation", "canadian_court_cases", "indian_court_cases"]} | 2023-05-10T15:01:44+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-extended #language-English #license-cc-by-nc-sa-4.0 #region-us
| Dataset Card for "LexFiles"
===========================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Dataset Specifications
Dataset Description
-------------------
* Homepage: URL
* Repository: URL
* Paper: URL
* Point of Contact: Ilias Chalkidis
### Dataset Summary
Disclaimer: This is a pre-processed version of the LexFiles corpus (URL where documents are pre-split in chunks of 512 tokens.
The LeXFiles is a new diverse English multinational legal corpus that we created including 11 distinct sub-corpora that cover legislation and case law from 6 primarily English-speaking legal systems (EU, CoE, Canada, US, UK, India).
The corpus contains approx. 19 billion tokens. In comparison, the "Pile of Law" corpus released by Henderson et al. (2022) comprises 32 billion tokens in total, where the majority (26/30) of sub-corpora come from the United States of America (USA), hence the corpus as a whole is biased towards the US legal system in general, and the federal or state jurisdiction in particular, to a significant extent.
### Dataset Specifications
[1] We consider only U.S. Court Decisions from 1965 onwards (cf. post Civil Rights Act), as a hard threshold for cases relying on severely out-dated and in many cases harmful law standards. The rest of the corpora include more recent documents.
[2] Sampling (Sampl.) ratios are computed following the exponential sampling introduced by Lample et al. (2019).
Additional corpora not considered for pre-training, since they do not represent factual legal knowledge.
*Ilias Chalkidis\*, Nicolas Garneau\*, Catalina E.C. Goanta, Daniel Martin Katz, and Anders Søgaard.*
*LeXFiles and LegalLAMA: Facilitating English Multinational Legal Language Model Development.*
*2023. In the Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics. Toronto, Canada.*
| [
"### Dataset Summary\n\n\nDisclaimer: This is a pre-proccessed version of the LexFiles corpus (URL where documents are pre-split in chunks of 512 tokens.\n\n\nThe LeXFiles is a new diverse English multinational legal corpus that we created including 11 distinct sub-corpora that cover legislation and case law from 6 primarily English-speaking legal systems (EU, CoE, Canada, US, UK, India).\nThe corpus contains approx. 19 billion tokens. In comparison, the \"Pile of Law\" corpus released by Hendersons et al. (2022) comprises 32 billion in total, where the majority (26/30) of sub-corpora come from the United States of America (USA), hence the corpus as a whole is biased towards the US legal system in general, and the federal or state jurisdiction in particular, to a significant extent.",
"### Dataset Specifications\n\n\n\n[1] We consider only U.S. Court Decisions from 1965 onwards (cf. post Civil Rights Act), as a hard threshold for cases relying on severely out-dated and in many cases harmful law standards. The rest of the corpora include more recent documents.\n\n\n[2] Sampling (Sampl.) ratios are computed following the exponential sampling introduced by Lample et al. (2019).\n\n\nAdditional corpora not considered for pre-training, since they do not represent factual legal knowledge.\n\n\n\n*Ilias Chalkidis\\*, Nicolas Garneau\\*, Catalina E.C. Goanta, Daniel Martin Katz, and Anders Søgaard.*\n*LeXFiles and LegalLAMA: Facilitating English Multinational Legal Language Model Development.*\n*2022. In the Proceedings of the 61th Annual Meeting of the Association for Computational Linguistics. Toronto, Canada.*"
] | [
"TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-extended #language-English #license-cc-by-nc-sa-4.0 #region-us \n",
"### Dataset Summary\n\n\nDisclaimer: This is a pre-proccessed version of the LexFiles corpus (URL where documents are pre-split in chunks of 512 tokens.\n\n\nThe LeXFiles is a new diverse English multinational legal corpus that we created including 11 distinct sub-corpora that cover legislation and case law from 6 primarily English-speaking legal systems (EU, CoE, Canada, US, UK, India).\nThe corpus contains approx. 19 billion tokens. In comparison, the \"Pile of Law\" corpus released by Hendersons et al. (2022) comprises 32 billion in total, where the majority (26/30) of sub-corpora come from the United States of America (USA), hence the corpus as a whole is biased towards the US legal system in general, and the federal or state jurisdiction in particular, to a significant extent.",
"### Dataset Specifications\n\n\n\n[1] We consider only U.S. Court Decisions from 1965 onwards (cf. post Civil Rights Act), as a hard threshold for cases relying on severely out-dated and in many cases harmful law standards. The rest of the corpora include more recent documents.\n\n\n[2] Sampling (Sampl.) ratios are computed following the exponential sampling introduced by Lample et al. (2019).\n\n\nAdditional corpora not considered for pre-training, since they do not represent factual legal knowledge.\n\n\n\n*Ilias Chalkidis\\*, Nicolas Garneau\\*, Catalina E.C. Goanta, Daniel Martin Katz, and Anders Søgaard.*\n*LeXFiles and LegalLAMA: Facilitating English Multinational Legal Language Model Development.*\n*2022. In the Proceedings of the 61th Annual Meeting of the Association for Computational Linguistics. Toronto, Canada.*"
] |
97c8c45d205a5f24baddf626f6ed04ecc306b5d3 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-1.3b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v2
* Config: mathemakitten--winobias_antistereotype_test_cot_v2
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v2-math-db74ac-2016866707 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T17:32:05+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v2"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-1.3b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v2", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v2", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T17:35:45+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-1.3b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v2
* Config: mathemakitten--winobias_antistereotype_test_cot_v2
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-1.3b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v2\n* Config: mathemakitten--winobias_antistereotype_test_cot_v2\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-1.3b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v2\n* Config: mathemakitten--winobias_antistereotype_test_cot_v2\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
516a6484ceb9cb23fead0f0cf5de86fd8ff963d7 | # Dataset Card for "petitions-ds"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | eminecg/petitions-ds-v2 | [
"region:us"
] | 2022-11-07T18:13:34+00:00 | {"dataset_info": {"features": [{"name": "petition", "dtype": "string"}, {"name": "petition_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 29426840.1, "num_examples": 2475}, {"name": "validation", "num_bytes": 3269648.9, "num_examples": 275}], "download_size": 14382239, "dataset_size": 32696489.0}} | 2022-11-07T18:13:42+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "petitions-ds"
More Information needed | [
"# Dataset Card for \"petitions-ds\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"petitions-ds\"\n\nMore Information needed"
] |
cfab6adcb824f395dbd46ffc3001ffd38128460d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/electra-base-squad2
* Dataset: squadshifts
* Config: amazon
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
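As a quick, unofficial illustration of what this evaluation covers, the same model can be queried directly with the `transformers` question-answering pipeline; the context and question below are invented stand-ins for a SQuADShifts (Amazon) example:

```python
from transformers import pipeline

# Load the exact model evaluated above; the example input is made up.
qa = pipeline("question-answering", model="deepset/electra-base-squad2")
pred = qa(
    question="What is the blender good for?",
    context="The blender arrived quickly and is great for smoothies and soups.",
)
print(pred)  # e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': 'smoothies and soups'}
```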
## Contributions
Thanks to [@viralshanker](https://huggingface.co/viralshanker) for evaluating this model. | autoevaluate/autoeval-eval-squadshifts-amazon-74b272-2017966728 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T19:22:29+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squadshifts"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/electra-base-squad2", "metrics": [], "dataset_name": "squadshifts", "dataset_config": "amazon", "dataset_split": "test", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-11-07T19:25:09+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: deepset/electra-base-squad2
* Dataset: squadshifts
* Config: amazon
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @viralshanker for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/electra-base-squad2\n* Dataset: squadshifts\n* Config: amazon\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @viralshanker for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/electra-base-squad2\n* Dataset: squadshifts\n* Config: amazon\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @viralshanker for evaluating this model."
] |
b3b3a3a62ed04b6266acae69125216bef32bd040 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/roberta-base-squad2
* Dataset: squadshifts
* Config: amazon
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@viralshanker](https://huggingface.co/viralshanker) for evaluating this model. | autoevaluate/autoeval-eval-squadshifts-amazon-74b272-2017966729 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T19:22:31+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squadshifts"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/roberta-base-squad2", "metrics": [], "dataset_name": "squadshifts", "dataset_config": "amazon", "dataset_split": "test", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-11-07T19:25:00+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: deepset/roberta-base-squad2
* Dataset: squadshifts
* Config: amazon
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @viralshanker for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/roberta-base-squad2\n* Dataset: squadshifts\n* Config: amazon\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @viralshanker for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/roberta-base-squad2\n* Dataset: squadshifts\n* Config: amazon\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @viralshanker for evaluating this model."
] |
164c4c6b01f6ff2ac4b09b235de473bfdddfda9f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-125m_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4
* Config: mathemakitten--winobias_antistereotype_test_cot_v4
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v4-math-54ae93-2018366741 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T19:42:13+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v4"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-125m_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v4", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v4", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T19:42:58+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-125m_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4
* Config: mathemakitten--winobias_antistereotype_test_cot_v4
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-125m_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4\n* Config: mathemakitten--winobias_antistereotype_test_cot_v4\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-125m_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4\n* Config: mathemakitten--winobias_antistereotype_test_cot_v4\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
9efbd406fbdcbaf43407452647359dc896d07380 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-6.7b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4
* Config: mathemakitten--winobias_antistereotype_test_cot_v4
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v4-math-54ae93-2018366738 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T19:42:13+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v4"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-6.7b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v4", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v4", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T19:57:18+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-6.7b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4
* Config: mathemakitten--winobias_antistereotype_test_cot_v4
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-6.7b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4\n* Config: mathemakitten--winobias_antistereotype_test_cot_v4\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-6.7b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4\n* Config: mathemakitten--winobias_antistereotype_test_cot_v4\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
07e6422317e8f235ab7f946475ab17fa72af8e70 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-1.3b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4
* Config: mathemakitten--winobias_antistereotype_test_cot_v4
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v4-math-54ae93-2018366742 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T19:42:14+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v4"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-1.3b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v4", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v4", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T19:45:46+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-1.3b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4
* Config: mathemakitten--winobias_antistereotype_test_cot_v4
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-1.3b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4\n* Config: mathemakitten--winobias_antistereotype_test_cot_v4\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-1.3b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4\n* Config: mathemakitten--winobias_antistereotype_test_cot_v4\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
dcc31dc3fb1f09771fec8b7bdade475b26fd584b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-13b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4
* Config: mathemakitten--winobias_antistereotype_test_cot_v4
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v4-math-54ae93-2018366736 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T19:42:15+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v4"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-13b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v4", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v4", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T20:07:18+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-13b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4
* Config: mathemakitten--winobias_antistereotype_test_cot_v4
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-13b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4\n* Config: mathemakitten--winobias_antistereotype_test_cot_v4\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-13b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4\n* Config: mathemakitten--winobias_antistereotype_test_cot_v4\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
0cf63e77119d0f0f992ebe49e450133ef24cace4 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-66b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4
* Config: mathemakitten--winobias_antistereotype_test_cot_v4
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v4-math-54ae93-2018366735 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T19:42:17+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v4"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-66b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v4", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v4", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T21:41:07+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-66b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4
* Config: mathemakitten--winobias_antistereotype_test_cot_v4
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-66b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4\n* Config: mathemakitten--winobias_antistereotype_test_cot_v4\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-66b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4\n* Config: mathemakitten--winobias_antistereotype_test_cot_v4\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
b048e92848d7f9125b7c70cbafa2ec4c50b0864e | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-30b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4
* Config: mathemakitten--winobias_antistereotype_test_cot_v4
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v4-math-54ae93-2018366739 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T19:42:19+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v4"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-30b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v4", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v4", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T20:37:13+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-30b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4
* Config: mathemakitten--winobias_antistereotype_test_cot_v4
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-30b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4\n* Config: mathemakitten--winobias_antistereotype_test_cot_v4\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-30b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4\n* Config: mathemakitten--winobias_antistereotype_test_cot_v4\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
480460c2c7aee0e610f719a6018cf6d78fbb0701 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-2.7b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4
* Config: mathemakitten--winobias_antistereotype_test_cot_v4
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v4-math-54ae93-2018366740 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T19:42:19+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v4"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-2.7b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v4", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v4", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T19:47:10+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-2.7b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4
* Config: mathemakitten--winobias_antistereotype_test_cot_v4
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-2.7b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4\n* Config: mathemakitten--winobias_antistereotype_test_cot_v4\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-2.7b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4\n* Config: mathemakitten--winobias_antistereotype_test_cot_v4\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
fadefe3f12997cab6f12c63824d313a0a76c889d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-350m_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4
* Config: mathemakitten--winobias_antistereotype_test_cot_v4
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_cot_v4-math-54ae93-2018366737 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-07T19:44:22+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_cot_v4"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-350m_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_cot_v4", "dataset_config": "mathemakitten--winobias_antistereotype_test_cot_v4", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-07T19:45:39+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-350m_eval
* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4
* Config: mathemakitten--winobias_antistereotype_test_cot_v4
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-350m_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4\n* Config: mathemakitten--winobias_antistereotype_test_cot_v4\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-350m_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_cot_v4\n* Config: mathemakitten--winobias_antistereotype_test_cot_v4\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
25e7626c126613c2898bd29f8cb101e410fee989 | # Dataset Card for "olm-october-2022-tokenized-olm-bert-base-uncased"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Tristan/olm-october-2022-tokenized | [
"region:us"
] | 2022-11-08T04:52:36+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "special_tokens_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 84051313200.0, "num_examples": 23347587}], "download_size": 21176572924, "dataset_size": 84051313200.0}} | 2022-11-08T07:58:59+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "olm-october-2022-tokenized-olm-bert-base-uncased"
More Information needed | [
"# Dataset Card for \"olm-october-2022-tokenized-olm-bert-base-uncased\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"olm-october-2022-tokenized-olm-bert-base-uncased\"\n\nMore Information needed"
] |
a3e6a10b65441edae7f8f1de9f20eec218082d20 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-6.7b
* Dataset: futin/random
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__random-en-805a17-2021966768 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-08T04:59:40+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/random"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-6.7b", "metrics": [], "dataset_name": "futin/random", "dataset_config": "en", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-08T07:38:50+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-6.7b
* Dataset: futin/random
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-6.7b\n* Dataset: futin/random\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-6.7b\n* Dataset: futin/random\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
42fda3c0d1ef504e2c100f16288a4da9e7a082b8 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-2.7b
* Dataset: futin/random
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__random-en-805a17-2021966769 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-08T04:59:43+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/random"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-2.7b", "metrics": [], "dataset_name": "futin/random", "dataset_config": "en", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-08T05:54:50+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-2.7b
* Dataset: futin/random
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-2.7b\n* Dataset: futin/random\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-2.7b\n* Dataset: futin/random\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
98684aeb6f743727a96594d3fe2d5f5c0a3fc0c1 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-1.3b
* Dataset: futin/random
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__random-en-805a17-2021966770 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-08T04:59:43+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/random"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-1.3b", "metrics": [], "dataset_name": "futin/random", "dataset_config": "en", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-08T05:39:34+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-1.3b
* Dataset: futin/random
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-1.3b\n* Dataset: futin/random\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-1.3b\n* Dataset: futin/random\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
3c9caa2f2f6960711e7f4d2e800581def2b6c183 |
# Dataset Card for CondaQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation
## Dataset Description
- **Repository:** [https://github.com/AbhilashaRavichander/CondaQA](https://github.com/AbhilashaRavichander/CondaQA)
- **Paper:** [https://arxiv.org/abs/2211.00295](https://arxiv.org/abs/2211.00295)
- **Point of Contact:** [email protected]
## Dataset Summary
Data from the EMNLP 2022 paper by Ravichander et al.: "CondaQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation".
If you use this dataset, we would appreciate you citing our work:
```
@inproceedings{ravichander-et-al-2022-condaqa,
title={CONDAQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation},
author={Ravichander, Abhilasha and Gardner, Matt and Marasovi\'{c}, Ana},
proceedings={EMNLP 2022},
year={2022}
}
```
From the paper: "We introduce CondaQA to facilitate the future development of models that can process negation effectively. This is the first English reading comprehension dataset which requires reasoning about the implications of negated statements in paragraphs. We collect paragraphs with diverse negation cues, then have crowdworkers ask questions about the _implications_ of the negated statement in the passage. We also have workers make three kinds of edits to the passage---paraphrasing the negated statement, changing the scope of the negation, and reversing the negation---resulting in clusters of question-answer pairs that are difficult for models to answer with spurious shortcuts. CondaQA features 14,182 question-answer pairs with over 200 unique negation cues."
### Supported Tasks and Leaderboards
The task is to answer a question given a Wikipedia passage that includes something being negated. There is no official leaderboard.
### Language
English
## Dataset Structure
### Data Instances
Here's an example instance:
```
{"QuestionID": "q10",
"original cue": "rarely",
"PassageEditID": 0,
"original passage": "Drug possession is the crime of having one or more illegal drugs in one's possession, either for personal use, distribution, sale or otherwise. Illegal drugs fall into different categories and sentences vary depending on the amount, type of drug, circumstances, and jurisdiction. In the U.S., the penalty for illegal drug possession and sale can vary from a small fine to a prison sentence. In some states, marijuana possession is considered to be a petty offense, with the penalty being comparable to that of a speeding violation. In some municipalities, possessing a small quantity of marijuana in one's own home is not punishable at all. Generally, however, drug possession is an arrestable offense, although first-time offenders rarely serve jail time. Federal law makes even possession of \"soft drugs\", such as cannabis, illegal, though some local governments have laws contradicting federal laws.",
"SampleID": 5294,
"label": "YES",
"original sentence": "Generally, however, drug possession is an arrestable offense, although first-time offenders rarely serve jail time.",
"sentence2": "If a drug addict is caught with marijuana, is there a chance he will be jailed?",
"PassageID": 444,
"sentence1": "Drug possession is the crime of having one or more illegal drugs in one's possession, either for personal use, distribution, sale or otherwise. Illegal drugs fall into different categories and sentences vary depending on the amount, type of drug, circumstances, and jurisdiction. In the U.S., the penalty for illegal drug possession and sale can vary from a small fine to a prison sentence. In some states, marijuana possession is considered to be a petty offense, with the penalty being comparable to that of a speeding violation. In some municipalities, possessing a small quantity of marijuana in one's own home is not punishable at all. Generally, however, drug possession is an arrestable offense, although first-time offenders rarely serve jail time. Federal law makes even possession of \"soft drugs\", such as cannabis, illegal, though some local governments have laws contradicting federal laws."
}
```
### Data Fields
* `QuestionID`: unique ID for this question (might be asked for multiple passages)
* `original cue`: Negation cue that was used to select this passage from Wikipedia
* `PassageEditID`: 0 = original passage, 1 = paraphrase-edit passage, 2 = scope-edit passage, 3 = affirmative-edit passage
* `original passage`: Original Wikipedia passage the passage is based on (note that the passage might either be the original Wikipedia passage itself, or an edit based on it)
* `SampleID`: unique ID for this passage-question pair
* `label`: answer
* `original sentence`: Sentence that contains the negated statement
* `sentence2`: question
* `PassageID`: unique ID for the Wikipedia passage
* `sentence1`: passage
### Data Splits
Data splits can be accessed as:
```
from datasets import load_dataset
train_set = load_dataset("lasha-nlp/CONDAQA", split="train")
dev_set = load_dataset("lasha-nlp/CONDAQA", split="dev")
test_set = load_dataset("lasha-nlp/CONDAQA", split="test")
```
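
The clusters of contrastive question-answer pairs described above can be reconstructed by grouping examples on `PassageID`. The snippet below is a minimal sketch, assuming the Hub repo id `lasha-nlp/CONDAQA` and the split names shown above; the field names follow the Data Fields list:

```
from collections import defaultdict
from datasets import load_dataset

# Assumed repo id and split name; adjust if you load from a local copy.
train_set = load_dataset("lasha-nlp/CONDAQA", split="train")

# Group question-answer pairs by their source Wikipedia passage: each cluster
# holds the original passage (PassageEditID 0) and its paraphrase, scope, and
# affirmative edits (PassageEditID 1-3), typically answered with the same questions.
clusters = defaultdict(list)
for example in train_set:
    clusters[example["PassageID"]].append(example)

# Inspect one cluster: see how the answer can change across passage edits.
passage_id = next(iter(clusters))
for ex in sorted(clusters[passage_id], key=lambda e: (e["PassageEditID"], e["QuestionID"])):
    print(ex["PassageEditID"], ex["QuestionID"], ex["sentence2"], "->", ex["label"])
```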
## Dataset Creation
Full details are in the paper.
### Curation Rationale
From the paper: "Our goal is to evaluate models on their ability to process the contextual implications of negation. We have the following desiderata for our question-answering dataset:
1. The dataset should include a wide variety of negation cues, not just negative particles.
2. Questions should be targeted towards the _implications_ of a negated statement, rather than the factual content of what was or wasn't negated, to remove common sources of spurious cues in QA datasets (Kaushik and Lipton, 2018; Naik et al., 2018; McCoy et al., 2019).
3. Questions should come in closely-related, contrastive groups, to further reduce the possibility of models' reliance on spurious cues in the data (Gardner et al., 2020). This will result in sets of passages that are similar to each other in terms of the words that they contain, but that may admit different answers to questions.
4. Questions should probe the extent to which models are sensitive to how the negation is expressed. In order to do this, there should be contrasting passages that differ only in their negation cue or its scope."
### Source Data
From the paper: "To construct CondaQA, we first collected passages from a July 2021 version of English Wikipedia that contained negation cues, including single- and multi-word negation phrases, as well as affixal negation."
"We use negation cues from [Morante et al. (2011)](https://aclanthology.org/L12-1077/) and [van Son et al. (2016)](https://aclanthology.org/W16-5007/) as a starting point which we extend."
#### Initial Data Collection and Normalization
We show ten passages to crowdworkers and allow them to choose a passage they would like to work on.
#### Who are the source language producers?
Original passages come from volunteers who contribute to Wikipedia. Passage edits, questions, and answers are produced by crowdworkers.
### Annotations
#### Annotation process
From the paper: "In the first stage of the task, crowdworkers made three types of modifications to the original passage: (1) they paraphrased the negated statement, (2) they modified the scope of the negated statement (while retaining the negation cue), and (3) they undid the negation. In the second stage, we instruct crowdworkers to ask challenging questions about the implications of the negated statement. The crowdworkers then answered the questions they wrote previously for the original and edited passages."
Full details are in the paper.
#### Who are the annotators?
From the paper: "Candidates took a qualification exam which consisted of 12 multiple-choice questions that evaluated comprehension of the instructions. We recruit crowdworkers who answer >70% of the questions correctly for the next stage of the dataset construction task." We use the CrowdAQ platform for the exam and Amazon Mechanical Turk for annotations.
### Personal and Sensitive Information
We expect that such information has already been redacted from Wikipedia.
## Considerations for Using the Data
### Social Impact of Dataset
A model that solves this dataset might be (mis-)represented as evidence that the model understands the entirety of the English language and consequently deployed where it will have immediate and/or downstream impact on stakeholders.
### Discussion of Biases
We are not aware of societal biases that are exhibited in this dataset.
### Other Known Limitations
From the paper: "Though CondaQA currently represents the largest NLU dataset that evaluates a model’s ability to process the implications of negation statements, it is possible to construct a larger dataset, with more examples spanning different answer types. Further CONDAQA is an English dataset, and it would be useful to extend our data collection procedures to build high-quality resources in other languages. Finally, while we attempt to extensively measure and control for artifacts in our dataset, it is possible that our dataset has hidden artifacts that we did not study."
## Additional Information
### Dataset Curators
From the paper: "In order to estimate human performance, and to construct a high-quality evaluation with fewer ambiguous examples, we have five verifiers provide answers for each question in the development and test sets." The first author has been manually checking the annotations throughout the entire data collection process that took ~7 months.
### Licensing Information
license: apache-2.0
### Citation Information
```
@inproceedings{ravichander-et-al-2022-condaqa,
title={CONDAQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation},
author={Ravichander, Abhilasha and Gardner, Matt and Marasovi\'{c}, Ana},
proceedings={EMNLP 2022},
year={2022}
}
``` | lasha-nlp/CONDAQA | [
"task_categories:question-answering",
"annotations_creators:crowdsourced",
"language_creators:found",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"negation",
"reading comprehension",
"arxiv:2211.00295",
"region:us"
] | 2022-11-08T05:41:56+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["found", "crowdsourced"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": [], "pretty_name": "condaqa", "tags": ["negation", "reading comprehension"]} | 2022-11-08T07:04:12+00:00 | [
"2211.00295"
] | [
"en"
] | TAGS
#task_categories-question-answering #annotations_creators-crowdsourced #language_creators-found #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-apache-2.0 #negation #reading comprehension #arxiv-2211.00295 #region-us
|
# Dataset Card for CondaQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation
## Dataset Description
- Repository: URL
- Paper: URL
- Point of Contact: aravicha@URL
## Dataset Summary
Data from the EMNLP 2022 paper by Ravichander et al.: "CondaQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation".
If you use this dataset, we would appreciate you citing our work:
From the paper: "We introduce CondaQA to facilitate the future development of models that can process negation effectively. This is the first English reading comprehension dataset which requires reasoning about the implications of negated statements in paragraphs. We collect paragraphs with diverse negation cues, then have crowdworkers ask questions about the _implications_ of the negated statement in the passage. We also have workers make three kinds of edits to the passage---paraphrasing the negated statement, changing the scope of the negation, and reversing the negation---resulting in clusters of question-answer pairs that are difficult for models to answer with spurious shortcuts. CondaQA features 14,182 question-answer pairs with over 200 unique negation cues."
### Supported Tasks and Leaderboards
The task is to answer a question given a Wikipedia passage that includes something being negated. There is no official leaderboard.
### Language
English
## Dataset Structure
### Data Instances
Here's an example instance:
### Data Fields
* 'QuestionID': unique ID for this question (might be asked for multiple passages)
* 'original cue': Negation cue that was used to select this passage from Wikipedia
* 'PassageEditID': 0 = original passage, 1 = paraphrase-edit passage, 2 = scope-edit passage, 3 = affirmative-edit passage
* 'original passage': Original Wikipedia passage the passage is based on (note that the passage might either be the original Wikipedia passage itself, or an edit based on it)
* 'SampleID': unique ID for this passage-question pair
* 'label': answer
* 'original sentence': Sentence that contains the negated statement
* 'sentence2': question
* 'PassageID': unique ID for the Wikipedia passage
* 'sentence1': passage
### Data Splits
Data splits can be accessed as:
## Dataset Creation
Full details are in the paper.
### Curation Rationale
From the paper: "Our goal is to evaluate models on their ability to process the contextual implications of negation. We have the following desiderata for our question-answering dataset:
1. The dataset should include a wide variety of negation cues, not just negative particles.
2. Questions should be targeted towards the _implications_ of a negated statement, rather than the factual content of what was or wasn't negated, to remove common sources of spurious cues in QA datasets (Kaushik and Lipton, 2018; Naik et al., 2018; McCoy et al., 2019).
3. Questions should come in closely-related, contrastive groups, to further reduce the possibility of models' reliance on spurious cues in the data (Gardner et al., 2020). This will result in sets of passages that are similar to each other in terms of the words that they contain, but that may admit different answers to questions.
4. Questions should probe the extent to which models are sensitive to how the negation is expressed. In order to do this, there should be contrasting passages that differ only in their negation cue or its scope."
### Source Data
From the paper: "To construct CondaQA, we first collected passages from a July 2021 version of English Wikipedia that contained negation cues, including single- and multi-word negation phrases, as well as affixal negation."
"We use negation cues from Morante et al. (2011) and van Son et al. (2016) as a starting point which we extend."
#### Initial Data Collection and Normalization
We show ten passages to crowdworkers and allow them to choose a passage they would like to work on.
#### Who are the source language producers?
Original passages come from volunteers who contribute to Wikipedia. Passage edits, questions, and answers are produced by crowdworkers.
### Annotations
#### Annotation process
From the paper: "In the first stage of the task, crowdworkers made three types of modifications to the original passage: (1) they paraphrased the negated statement, (2) they modified the scope of the negated statement (while retaining the negation cue), and (3) they undid the negation. In the second stage, we instruct crowdworkers to ask challenging questions about the implications of the negated statement. The crowdworkers then answered the questions they wrote previously for the original and edited passages."
Full details are in the paper.
#### Who are the annotators?
From the paper: "Candidates took a qualification exam which consisted of 12 multiple-choice questions that evaluated comprehension of the instructions. We recruit crowdworkers who answer >70% of the questions correctly for the next stage of the dataset construction task." We use the CrowdAQ platform for the exam and Amazon Mechanical Turk for annotations.
### Personal and Sensitive Information
We expect that such information has already been redacted from Wikipedia.
## Considerations for Using the Data
### Social Impact of Dataset
A model that solves this dataset might be (mis-)represented as evidence that the model understands the entirety of the English language and consequently deployed where it will have immediate and/or downstream impact on stakeholders.
### Discussion of Biases
We are not aware of societal biases that are exhibited in this dataset.
### Other Known Limitations
From the paper: "Though CondaQA currently represents the largest NLU dataset that evaluates a model’s ability to process the implications of negation statements, it is possible to construct a larger dataset, with more examples spanning different answer types. Further CONDAQA is an English dataset, and it would be useful to extend our data collection procedures to build high-quality resources in other languages. Finally, while we attempt to extensively measure and control for artifacts in our dataset, it is possible that our dataset has hidden artifacts that we did not study."
## Additional Information
### Dataset Curators
From the paper: "In order to estimate human performance, and to construct a high-quality evaluation with fewer ambiguous examples, we have five verifiers provide answers for each question in the development and test sets." The first author has been manually checking the annotations throughout the entire data collection process that took ~7 months.
### Licensing Information
license: apache-2.0
| [
"# Dataset Card for CondaQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation",
"## Dataset Description\n\n- Repository: URL\n- Paper: URL \n- Point of Contact: aravicha@URL",
"## Dataset Summary\n\nData from the EMNLP 2022 paper by Ravichander et al.: \"CondaQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation\". \n\nIf you use this dataset, we would appreciate you citing our work:\n \n\n \nFrom the paper: \"We introduce CondaQA to facilitate the future development of models that can process negation effectively. This is the first English reading comprehension dataset which requires reasoning about the implications of negated statements in paragraphs. We collect paragraphs with diverse negation cues, then have crowdworkers ask questions about the _implications_ of the negated statement in the passage. We also have workers make three kinds of edits to the passage---paraphrasing the negated statement, changing the scope of the negation, and reversing the negation---resulting in clusters of question-answer pairs that are difficult for models to answer with spurious shortcuts. CondaQA features 14,182 question-answer pairs with over 200 unique negation cues.\"",
"### Supported Tasks and Leaderboards \n\nThe task is to answer a question given a Wikipedia passage that includes something being negated. There is no official leaderboard.",
"### Language \nEnglish",
"## Dataset Structure",
"### Data Instances\nHere's an example instance:",
"### Data Fields\n\n* 'QuestionID': unique ID for this question (might be asked for multiple passages)\n* 'original cue': Negation cue that was used to select this passage from Wikipedia\n* 'PassageEditID': 0 = original passage, 1 = paraphrase-edit passage, 2 = scope-edit passage, 3 = affirmative-edit passage\n* 'original passage': Original Wikipedia passage the passage is based on (note that the passage might either be the original Wikipedia passage itself, or an edit based on it)\n* 'SampleID': unique ID for this passage-question pair\n* 'label': answer \n* 'original sentence': Sentence that contains the negated statement\n* 'sentence2': question\n* 'PassageID': unique ID for the Wikipedia passage\n* 'sentence1': passage",
"### Data Splits\n\nData splits can be accessed as:",
"## Dataset Creation\n\nFull details are in the paper.",
"### Curation Rationale\n\nFrom the paper: \"Our goal is to evaluate models on their ability to process the contextual implications of negation. We have the following desiderata for our question-answering dataset:\n1. The dataset should include a wide variety of negation cues, not just negative particles. \n2. Questions should be targeted towards the _implications_ of a negated statement, rather than the factual content of what was or wasn't negated, to remove common sources of spurious cues in QA datasets (Kaushik and Lipton, 2018; Naik et al., 2018; McCoy et al., 2019).\n3. Questions should come in closely-related, contrastive groups, to further reduce the possibility of models' reliance on spurious cues in the data (Gardner et al., 2020). This will result in sets of passages that are similar to each other in terms of the words that they contain, but that may admit different answers to questions.\n4. Questions should probe the extent to which models are sensitive to how the negation is expressed. In order to do this, there should be contrasting passages that differ only in their negation cue or its scope.\"",
"### Source Data\n\nFrom the paper: \"To construct CondaQA, we first collected passages from a July 2021 version of English Wikipedia that contained negation cues, including single- and multi-word negation phrases, as well as affixal negation.\"\n\n\"We use negation cues from Morante et al. (2011) and van Son et al. (2016) as a starting point which we extend.\"",
"#### Initial Data Collection and Normalization\n\nWe show ten passages to crowdworkers and allow them to choose a passage they would like to work on.",
"#### Who are the source language producers?\n\nOriginal passages come from volunteers who contribute to Wikipedia. Passage edits, questions, and answers are produced by crowdworkers.",
"### Annotations",
"#### Annotation process\n\nFrom the paper: \"In the first stage of the task, crowdworkers made three types of modifications to the original passage: (1) they paraphrased the negated statement, (2) they modified the scope of the negated statement (while retaining the negation cue), and (3) they undid the negation. In the second stage, we instruct crowdworkers to ask challenging questions about the implications of the negated statement. The crowdworkers then answered the questions they wrote previously for the original and edited passages.\"\n\nFull details are in the paper.",
"#### Who are the annotators?\n\nFrom the paper: \"Candidates took a qualification exam which consisted of 12 multiple-choice questions that evaluated comprehension of the instructions. We recruit crowdworkers who answer >70% of the questions correctly for the next stage of the dataset construction task.\" We use the CrowdAQ platform for the exam and Amazon Mechanical Turk for annotations.",
"### Personal and Sensitive Information\n\nWe expect that such information has already been redacted from Wikipedia.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nA model that solves this dataset might be (mis-)represented as an evidence that the model understands the entirety of English language and consequently deployed where it will have immediate and/or downstream impact on stakeholders.",
"### Discussion of Biases\n\nWe are not aware of societal biases that are exhibited in this dataset.",
"### Other Known Limitations\n\nFrom the paper: \"Though CondaQA currently represents the largest NLU dataset that evaluates a model’s ability to process the implications of negation statements, it is possible to construct a larger dataset, with more examples spanning different answer types. Further CONDAQA is an English dataset, and it would be useful to extend our data collection procedures to build high-quality resources in other languages. Finally, while we attempt to extensively measure and control for artifacts in our dataset, it is possible that our dataset has hidden artifacts that we did not study.\"",
"## Additional Information",
"### Dataset Curators\n\nFrom the paper: \"In order to estimate human performance, and to construct a high-quality evaluation with fewer ambiguous examples, we have five verifiers provide answers for each question in the development and test sets.\" The first author has been manually checking the annotations throughout the entire data collection process that took ~7 months.",
"### Licensing Information\n\nlicense: apache-2.0"
] | [
"TAGS\n#task_categories-question-answering #annotations_creators-crowdsourced #language_creators-found #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-apache-2.0 #negation #reading comprehension #arxiv-2211.00295 #region-us \n",
"# Dataset Card for CondaQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation",
"## Dataset Description\n\n- Repository: URL\n- Paper: URL \n- Point of Contact: aravicha@URL",
"## Dataset Summary\n\nData from the EMNLP 2022 paper by Ravichander et al.: \"CondaQA: A Contrastive Reading Comprehension Dataset for Reasoning about Negation\". \n\nIf you use this dataset, we would appreciate you citing our work:\n \n\n \nFrom the paper: \"We introduce CondaQA to facilitate the future development of models that can process negation effectively. This is the first English reading comprehension dataset which requires reasoning about the implications of negated statements in paragraphs. We collect paragraphs with diverse negation cues, then have crowdworkers ask questions about the _implications_ of the negated statement in the passage. We also have workers make three kinds of edits to the passage---paraphrasing the negated statement, changing the scope of the negation, and reversing the negation---resulting in clusters of question-answer pairs that are difficult for models to answer with spurious shortcuts. CondaQA features 14,182 question-answer pairs with over 200 unique negation cues.\"",
"### Supported Tasks and Leaderboards \n\nThe task is to answer a question given a Wikipedia passage that includes something being negated. There is no official leaderboard.",
"### Language \nEnglish",
"## Dataset Structure",
"### Data Instances\nHere's an example instance:",
"### Data Fields\n\n* 'QuestionID': unique ID for this question (might be asked for multiple passages)\n* 'original cue': Negation cue that was used to select this passage from Wikipedia\n* 'PassageEditID': 0 = original passage, 1 = paraphrase-edit passage, 2 = scope-edit passage, 3 = affirmative-edit passage\n* 'original passage': Original Wikipedia passage the passage is based on (note that the passage might either be the original Wikipedia passage itself, or an edit based on it)\n* 'SampleID': unique ID for this passage-question pair\n* 'label': answer \n* 'original sentence': Sentence that contains the negated statement\n* 'sentence2': question\n* 'PassageID': unique ID for the Wikipedia passage\n* 'sentence1': passage",
"### Data Splits\n\nData splits can be accessed as:",
"## Dataset Creation\n\nFull details are in the paper.",
"### Curation Rationale\n\nFrom the paper: \"Our goal is to evaluate models on their ability to process the contextual implications of negation. We have the following desiderata for our question-answering dataset:\n1. The dataset should include a wide variety of negation cues, not just negative particles. \n2. Questions should be targeted towards the _implications_ of a negated statement, rather than the factual content of what was or wasn't negated, to remove common sources of spurious cues in QA datasets (Kaushik and Lipton, 2018; Naik et al., 2018; McCoy et al., 2019).\n3. Questions should come in closely-related, contrastive groups, to further reduce the possibility of models' reliance on spurious cues in the data (Gardner et al., 2020). This will result in sets of passages that are similar to each other in terms of the words that they contain, but that may admit different answers to questions.\n4. Questions should probe the extent to which models are sensitive to how the negation is expressed. In order to do this, there should be contrasting passages that differ only in their negation cue or its scope.\"",
"### Source Data\n\nFrom the paper: \"To construct CondaQA, we first collected passages from a July 2021 version of English Wikipedia that contained negation cues, including single- and multi-word negation phrases, as well as affixal negation.\"\n\n\"We use negation cues from Morante et al. (2011) and van Son et al. (2016) as a starting point which we extend.\"",
"#### Initial Data Collection and Normalization\n\nWe show ten passages to crowdworkers and allow them to choose a passage they would like to work on.",
"#### Who are the source language producers?\n\nOriginal passages come from volunteers who contribute to Wikipedia. Passage edits, questions, and answers are produced by crowdworkers.",
"### Annotations",
"#### Annotation process\n\nFrom the paper: \"In the first stage of the task, crowdworkers made three types of modifications to the original passage: (1) they paraphrased the negated statement, (2) they modified the scope of the negated statement (while retaining the negation cue), and (3) they undid the negation. In the second stage, we instruct crowdworkers to ask challenging questions about the implications of the negated statement. The crowdworkers then answered the questions they wrote previously for the original and edited passages.\"\n\nFull details are in the paper.",
"#### Who are the annotators?\n\nFrom the paper: \"Candidates took a qualification exam which consisted of 12 multiple-choice questions that evaluated comprehension of the instructions. We recruit crowdworkers who answer >70% of the questions correctly for the next stage of the dataset construction task.\" We use the CrowdAQ platform for the exam and Amazon Mechanical Turk for annotations.",
"### Personal and Sensitive Information\n\nWe expect that such information has already been redacted from Wikipedia.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nA model that solves this dataset might be (mis-)represented as an evidence that the model understands the entirety of English language and consequently deployed where it will have immediate and/or downstream impact on stakeholders.",
"### Discussion of Biases\n\nWe are not aware of societal biases that are exhibited in this dataset.",
"### Other Known Limitations\n\nFrom the paper: \"Though CondaQA currently represents the largest NLU dataset that evaluates a model’s ability to process the implications of negation statements, it is possible to construct a larger dataset, with more examples spanning different answer types. Further CONDAQA is an English dataset, and it would be useful to extend our data collection procedures to build high-quality resources in other languages. Finally, while we attempt to extensively measure and control for artifacts in our dataset, it is possible that our dataset has hidden artifacts that we did not study.\"",
"## Additional Information",
"### Dataset Curators\n\nFrom the paper: \"In order to estimate human performance, and to construct a high-quality evaluation with fewer ambiguous examples, we have five verifiers provide answers for each question in the development and test sets.\" The first author has been manually checking the annotations throughout the entire data collection process that took ~7 months.",
"### Licensing Information\n\nlicense: apache-2.0"
] |
7249f98b8f4c45f81cd81b7bb91b1aac8161d693 | hgnghnhfgh | fuxijun/ccc | [
"region:us"
] | 2022-11-08T06:52:23+00:00 | {} | 2022-11-17T07:06:55+00:00 | [] | [] | TAGS
#region-us
| hgnghnhfgh | [] | [
"TAGS\n#region-us \n"
] |
92b053991b1742eaa198212617eed2abd572e0f3 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-13b
* Dataset: futin/random
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__random-en-30c46b-2023566786 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-08T08:17:02+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/random"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-13b", "metrics": [], "dataset_name": "futin/random", "dataset_config": "en", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-08T12:21:56+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-13b
* Dataset: futin/random
* Config: en
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-13b\n* Dataset: futin/random\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-13b\n* Dataset: futin/random\n* Config: en\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
b9cd95a557cc71a144179dfbc97b9603382e1cfa | # Dataset Card for "laion2B-fa-images"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | amir7d0/laion2B-fa-images | [
"region:us"
] | 2022-11-08T08:49:53+00:00 | {"dataset_info": {"features": [{"name": "SAMPLE_ID", "dtype": "int64"}, {"name": "TEXT", "dtype": "string"}, {"name": "URL", "dtype": "string"}, {"name": "IMAGE_PATH", "dtype": "string"}, {"name": "IMAGE", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 21488547.0, "num_examples": 1000}], "download_size": 21283656, "dataset_size": 21488547.0}} | 2022-11-09T16:36:43+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "laion2B-fa-images"
More Information needed | [
"# Dataset Card for \"laion2B-fa-images\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"laion2B-fa-images\"\n\nMore Information needed"
] |
3ac5d43d148f74d080320b6b27d841a712f87cbc |
This is a dataset which contains the docs from all the PRs updating one of the docs from https://huggingface.co/docs.
It is automatically updated by this [github action](https://github.com/huggingface/doc-builder/blob/main/.github/workflows/build_pr_documentation.yml) from the [doc-builder](https://github.com/huggingface/doc-builder) repo. | hf-doc-build/doc-build-dev | [
"license:mit",
"documentation",
"region:us"
] | 2022-11-08T09:03:37+00:00 | {"license": "mit", "pretty_name": "HF Documentation (PRs)", "tags": ["documentation"]} | 2024-02-17T17:44:01+00:00 | [] | [] | TAGS
#license-mit #documentation #region-us
|
This is a dataset which contains the docs from all the PRs updating one of the docs from URL
It is automatically updated by this github action from the doc-builder repo. | [] | [
"TAGS\n#license-mit #documentation #region-us \n"
] |
45fb5843a8fc3fde3028a623d7afb8d3e8f42007 | # Dataset Card for "petitions-ds"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | eminecg/petitions-ds | [
"region:us"
] | 2022-11-08T09:15:48+00:00 | {"dataset_info": {"features": [{"name": "petition", "dtype": "string"}, {"name": "petition_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 29426840.1, "num_examples": 2475}, {"name": "validation", "num_bytes": 3269648.9, "num_examples": 275}], "download_size": 14382239, "dataset_size": 32696489.0}} | 2022-11-08T09:28:57+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "petitions-ds"
More Information needed | [
"# Dataset Card for \"petitions-ds\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"petitions-ds\"\n\nMore Information needed"
] |
546126dd7206964952182cc541052f1649e78525 | # Dataset Card for "test_push3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | polinaeterna/test_push3 | [
"region:us"
] | 2022-11-08T09:20:41+00:00 | {"dataset_info": {"features": [{"name": "x", "dtype": "int64"}, {"name": "y", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 46, "num_examples": 3}, {"name": "train", "num_bytes": 116, "num_examples": 8}], "download_size": 1698, "dataset_size": 162}} | 2022-11-08T09:21:09+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "test_push3"
More Information needed | [
"# Dataset Card for \"test_push3\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"test_push3\"\n\nMore Information needed"
] |