sha | text | id | tags | created_at | metadata | last_modified | arxiv | languages | tags_str |
---|---|---|---|---|---|---|---|---|---|
0bfc5269714a8861f29c1253bf89e6465eae8ab9 |
# Dataset Card for ScandiQA
## Dataset Description
- **Repository:** <https://github.com/alexandrainst/scandi-qa>
- **Point of Contact:** [Dan Saattrup Nielsen](mailto:[email protected])
- **Size of downloaded dataset files:** 69 MB
- **Size of the generated dataset:** 67 MB
- **Total amount of disk used:** 136 MB
### Dataset Summary
ScandiQA is a dataset of questions and answers in the Danish, Norwegian, and Swedish
languages. All samples come from the Natural Questions (NQ) dataset, which is a large
question answering dataset from Google searches. The Scandinavian questions and answers
come from the MKQA dataset, where 10,000 NQ samples were manually translated into,
among others, Danish, Norwegian, and Swedish. However, this did not include a
translated context, hindering the training of extractive question answering models.
We merged the NQ dataset with the MKQA dataset and extracted contexts either as "long
answers" from the NQ dataset, being the paragraph in which the answer was found, or,
failing that, by locating the paragraph with the largest cosine similarity to the
question that also contains the desired answer.
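As an illustration of this fallback, here is a minimal sketch of similarity-based
context selection. It uses TF-IDF vectors as a stand-in (the embedding model actually
used is not specified here), and `pick_context` is a hypothetical helper:
```
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def pick_context(question: str, paragraphs: list[str], answer: str) -> str | None:
    # Keep only paragraphs that actually contain the desired answer.
    candidates = [p for p in paragraphs if answer.lower() in p.lower()]
    if not candidates:
        return None
    # Embed the question and candidate paragraphs (TF-IDF here; an assumption).
    vec = TfidfVectorizer().fit(candidates + [question])
    sims = cosine_similarity(vec.transform([question]), vec.transform(candidates))[0]
    # Return the candidate paragraph most similar to the question.
    return candidates[int(sims.argmax())]
```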
Further, many answers in the MKQA dataset were "language normalised": for instance, all
date answers were converted to the format "YYYY-MM-DD", meaning that in most cases
these answers do not appear verbatim in any paragraph. We solve this by extending the
MKQA answers with plausible "answer candidates": slight perturbations or translations
of the answer.
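As a hedged illustration of what such candidates might look like for a normalised date
answer (the actual candidate-generation rules are not spelled out here, and
`date_candidates` is a hypothetical helper):
```
from datetime import date

def date_candidates(iso_answer: str) -> list[str]:
    # Expand a normalised "YYYY-MM-DD" answer into surface forms that
    # might actually appear in a paragraph (illustrative formats only).
    d = date.fromisoformat(iso_answer)
    return [
        iso_answer,
        d.strftime("%d %B %Y"),   # e.g. "14 March 2018"
        d.strftime("%B %d, %Y"),  # e.g. "March 14, 2018"
        str(d.year),              # just the year
    ]

print(date_candidates("2018-03-14"))
```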
With the contexts extracted, we translated them into Danish, Swedish, and Norwegian
using the [DeepL translation service](https://www.deepl.com/pro-api?cta=header-pro-api)
for Danish and Swedish, and the [Google Translation
service](https://cloud.google.com/translate/docs/reference/rest/) for Norwegian. After
translation, we verified that the Scandinavian answers do indeed occur in the
translated contexts.
Because we filter the MKQA samples at both the "merging stage" and the "translation
stage", we cannot convert all 10,000 samples to the Scandinavian languages; instead we
end up with roughly 8,000 samples per language. These have further been split into
training, validation, and test splits, with the latter two each containing roughly 750
samples. The splits have been created in such a way that the proportion of samples
without an answer is roughly the same in each split.
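Such an answer-balanced split can be sketched with scikit-learn's stratified splitting,
stratifying on answer presence; this is purely illustrative and not necessarily the
authors' exact procedure:
```
from sklearn.model_selection import train_test_split

def make_splits(samples: list[dict], test_size: int = 750, seed: int = 42):
    # Stratify on answer presence so each split keeps the same
    # proportion of unanswerable samples.
    labels = [bool(s["answer"]) for s in samples]
    rest, test = train_test_split(
        samples, test_size=test_size, stratify=labels, random_state=seed
    )
    rest_labels = [bool(s["answer"]) for s in rest]
    train, val = train_test_split(
        rest, test_size=test_size, stratify=rest_labels, random_state=seed
    )
    return train, val, test
```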
### Supported Tasks and Leaderboards
Training machine learning models for extractive question answering is the intended task
for this dataset. No leaderboard is active at this point.
### Languages
The dataset is available in Danish (`da`), Swedish (`sv`) and Norwegian (`no`).
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 69 MB
- **Size of the generated dataset:** 67 MB
- **Total amount of disk used:** 136 MB
An example from the `train` split of the `da` subset looks as follows.
```
{
'example_id': 123,
'question': 'Er dette en test?',
'answer': 'Dette er en test',
'answer_start': 0,
'context': 'Dette er en testkontekst.',
'answer_en': 'This is a test',
'answer_start_en': 0,
'context_en': "This is a test context.",
'title_en': 'Train test'
}
```
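Assuming the subset names match the language codes listed above, the dataset can
presumably be loaded with the `datasets` library like so:
```
from datasets import load_dataset

# "da" is assumed to be the config name for the Danish subset.
scandi_da = load_dataset("alexandrainst/scandi-qa", "da")
print(scandi_da["train"][0]["question"])
```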
### Data Fields
The data fields are the same among all splits.
- `example_id`: an `int64` feature.
- `question`: a `string` feature.
- `answer`: a `string` feature.
- `answer_start`: an `int64` feature.
- `context`: a `string` feature.
- `answer_en`: a `string` feature.
- `answer_start_en`: an `int64` feature.
- `context_en`: a `string` feature.
- `title_en`: a `string` feature.
### Data Splits
| name | train | validation | test |
|----------|------:|-----------:|-----:|
| da | 6311 | 749 | 750 |
| sv | 6299 | 750 | 749 |
| no | 6314 | 749 | 750 |
## Dataset Creation
### Curation Rationale
The Scandinavian languages do not have any gold-standard question answering dataset.
ScandiQA is not quite gold standard either, but since both the questions and answers
are manually translated, it constitutes a solid silver-standard dataset.
### Source Data
The original data was collected from the [MKQA](https://github.com/apple/ml-mkqa/) and
[Natural Questions](https://ai.google.com/research/NaturalQuestions) datasets from
Apple and Google, respectively.
## Additional Information
### Dataset Curators
[Dan Saattrup Nielsen](https://saattrupdan.github.io/) from [The Alexandra
Institute](https://alexandra.dk/) curated this dataset.
### Licensing Information
The dataset is licensed under the [CC BY-SA 4.0
license](https://creativecommons.org/licenses/by-sa/4.0/).
| alexandrainst/scandi-qa | [
"task_categories:question-answering",
"task_ids:extractive-qa",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:mkqa",
"source_datasets:natural_questions",
"language:da",
"language:sv",
"language:no",
"license:cc-by-sa-4.0",
"region:us"
] | 2022-08-30T08:46:59+00:00 | {"language": ["da", "sv", "no"], "license": ["cc-by-sa-4.0"], "multilinguality": ["multilingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["mkqa", "natural_questions"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa"], "pretty_name": "ScandiQA"} | 2023-01-16T13:51:25+00:00 | [] | [
"da",
"sv",
"no"
] | TAGS
#task_categories-question-answering #task_ids-extractive-qa #multilinguality-multilingual #size_categories-1K<n<10K #source_datasets-mkqa #source_datasets-natural_questions #language-Danish #language-Swedish #language-Norwegian #license-cc-by-sa-4.0 #region-us
|
2c1e9e1a4deba071907e637095df2467c0c29472 | # Dataset Card for Diabetes
This file is a copy, the original version is hosted at [data.world](https://data.world/rshah/diabetes) | demo-org/diabetes | [
"task_categories:text-classification",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"region:us"
] | 2022-08-30T20:06:15+00:00 | {"language": ["en"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "pretty_name": "Diabetes"} | 2022-08-30T20:08:59+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #region-us
|
03a3c90f11ff6485cd4955a23f0a6e07b5158936 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2
* Dataset: billsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
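For orientation (this is not part of the AutoTrain output), summarization predictions
like these are conventionally scored with ROUGE; a minimal sketch using the `evaluate`
library with placeholder texts:
```
import evaluate

rouge = evaluate.load("rouge")
predictions = ["the bill extends highway funding through 2024"]  # placeholder output
references = ["this act extends federal highway funding through fiscal year 2024"]
print(rouge.compute(predictions=predictions, references=references))
```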
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-billsum-default-dd3eba-14585981 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-30T23:24:17+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["billsum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-base-16384-booksum-V11-big_patent-V2", "metrics": [], "dataset_name": "billsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "summary"}}} | 2022-08-31T06:44:21+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
|
c2248d5acd8782d3046775ac52db8eb3dad50305 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP11
* Dataset: billsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-billsum-default-3fec5f-14625986 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-30T23:51:49+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["billsum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP11", "metrics": [], "dataset_name": "billsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "summary"}}} | 2022-09-01T09:02:44+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
|
f4f32ebb0db7da41e075f69405e7e396dd93d2d0 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP
* Dataset: billsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-billsum-default-3fec5f-14625985 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-30T23:51:49+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["billsum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP", "metrics": [], "dataset_name": "billsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "summary"}}} | 2022-09-01T03:09:46+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
|
7977d7e4d2c8bd3f9da965a99d6057387f58875a | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13
* Dataset: xsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-xsum-default-6f5db0-14615984 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-30T23:51:53+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13", "metrics": [], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}} | 2022-09-01T12:24:17+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
|
0cd9acdb0ea6acb0442697499b54a323105dc95d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13
* Dataset: billsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-billsum-default-3fec5f-14625987 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-30T23:52:00+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["billsum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13", "metrics": [], "dataset_name": "billsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "summary"}}} | 2022-09-01T07:04:11+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
|
d51d497dbd52f384789619ba69627cd55541ecd9 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP11
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-samsum-samsum-f593d1-14645991 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-30T23:52:23+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP11", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-08-31T00:18:28+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
|
22b0f359dc343c3842ae0b3b25410185a06dc368 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13
* Dataset: samsum
* Config: samsum
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-samsum-samsum-f593d1-14645992 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-30T23:52:23+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "test", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-08-31T00:33:07+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
|
626afad55214c9e1949031f8a19c13834f5b817f |
# Dataset Card for pixta-ai/Plane-images-in-multiple-scenes
## Dataset Description
- **Homepage:** https://www.pixta.ai/
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
4,000 plane images in multiple scenes, covering multiple types of planes in unequal
proportions; passenger planes make up the majority. Each image contains from 1 to 10
visible planes.
For more details, please refer to the link: https://www.pixta.ai/
Or send your inquiries to [email protected]
### Supported Tasks and Leaderboards
object-detection, computer-vision: The dataset can be used to train or enhance models for object detection.
### Languages
English
### License
Academic & commercial usage | pixta-ai/Plane-images-in-multiple-scenes | [
"region:us"
] | 2022-08-31T01:43:12+00:00 | {"YAML tags": [{"copy-paste the tags obtained with the tagging app": "https://github.com/huggingface/datasets-tagging"}]} | 2022-09-05T03:23:05+00:00 | [] | [] | TAGS
#region-us
|
44488c9a08a774143dca37c60c28116c766e48fd | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: mrp/bert-finetuned-squad
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
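For orientation, extractive QA predictions on SQuAD are conventionally scored with the
`squad` metric from the `evaluate` library; the values below are placeholders, not the
predictions stored in this repository:
```
import evaluate

squad_metric = evaluate.load("squad")
predictions = [{"id": "qid-0", "prediction_text": "Denver Broncos"}]  # placeholder
references = [{
    "id": "qid-0",
    "answers": {"text": ["Denver Broncos"], "answer_start": [177]},
}]
print(squad_metric.compute(predictions=predictions, references=references))
```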
## Contributions
Thanks to [@saminaminaeheh](https://huggingface.co/saminaminaeheh) for evaluating this model. | autoevaluate/autoeval-staging-eval-squad-plain_text-d52fee-14655993 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-31T05:42:20+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "mrp/bert-finetuned-squad", "metrics": ["bleu", "rouge"], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-08-31T05:45:10+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
|
a8b2fb9790419752e26300ce37c9eabc36411bd4 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Natural Language Inference
* Model: sgugger/glue-mrpc
* Dataset: glue
* Config: mrpc
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
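For reference, the validation split evaluated here can be loaded directly with the
`datasets` library (standard usage, shown for orientation):
```
from datasets import load_dataset

mrpc_val = load_dataset("glue", "mrpc", split="validation")
print(mrpc_val[0]["sentence1"], "|", mrpc_val[0]["sentence2"], "->", mrpc_val[0]["label"])
```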
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-glue-mrpc-e15d1b-14665994 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-31T06:30:52+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "natural_language_inference", "model": "sgugger/glue-mrpc", "metrics": [], "dataset_name": "glue", "dataset_config": "mrpc", "dataset_split": "validation", "col_mapping": {"text1": "sentence1", "text2": "sentence2", "target": "label"}}} | 2022-08-31T06:31:19+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
|
1a3a5ca04db7486f9737e64f16c54c1d2b48fba4 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Natural Language Inference
* Model: Intel/camembert-base-mrpc
* Dataset: glue
* Config: mrpc
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-glue-mrpc-e15d1b-14665997 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-31T06:33:16+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "natural_language_inference", "model": "Intel/camembert-base-mrpc", "metrics": [], "dataset_name": "glue", "dataset_config": "mrpc", "dataset_split": "validation", "col_mapping": {"text1": "sentence1", "text2": "sentence2", "target": "label"}}} | 2022-08-31T06:33:42+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
|
55730ed50204cd1be2d9f3d0f828b34a762f6ae9 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Natural Language Inference
* Model: sgugger/bert-finetuned-mrpc
* Dataset: glue
* Config: mrpc
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-glue-mrpc-e15d1b-14666001 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-31T06:36:03+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "natural_language_inference", "model": "sgugger/bert-finetuned-mrpc", "metrics": [], "dataset_name": "glue", "dataset_config": "mrpc", "dataset_split": "validation", "col_mapping": {"text1": "sentence1", "text2": "sentence2", "target": "label"}}} | 2022-08-31T06:36:29+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
|
09a1805befbcdb794978a12558e99ea3d8dd2cb1 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Natural Language Inference
* Model: Alireza1044/mobilebert_qqp
* Dataset: glue
* Config: qqp
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-glue-qqp-c973af-14676003 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-31T06:36:15+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "natural_language_inference", "model": "Alireza1044/mobilebert_qqp", "metrics": [], "dataset_name": "glue", "dataset_config": "qqp", "dataset_split": "validation", "col_mapping": {"text1": "question1", "text2": "question2", "target": "label"}}} | 2022-08-31T06:38:38+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Natural Language Inference
* Model: Alireza1044/mobilebert_qqp
* Dataset: glue
* Config: qqp
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @lewtun for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: Alireza1044/mobilebert_qqp\n* Dataset: glue\n* Config: qqp\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: Alireza1044/mobilebert_qqp\n* Dataset: glue\n* Config: qqp\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] | [
13,
89,
15
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: Alireza1044/mobilebert_qqp\n* Dataset: glue\n* Config: qqp\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model."
] |
0434b76db92af9825be658211a80b3ce2fcb41ba | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Natural Language Inference
* Model: gchhablani/bert-base-cased-finetuned-qqp
* Dataset: glue
* Config: qqp
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-glue-qqp-c973af-14676011 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-31T06:40:25+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "natural_language_inference", "model": "gchhablani/bert-base-cased-finetuned-qqp", "metrics": [], "dataset_name": "glue", "dataset_config": "qqp", "dataset_split": "validation", "col_mapping": {"text1": "question1", "text2": "question2", "target": "label"}}} | 2022-08-31T06:43:33+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Natural Language Inference
* Model: gchhablani/bert-base-cased-finetuned-qqp
* Dataset: glue
* Config: qqp
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @lewtun for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: gchhablani/bert-base-cased-finetuned-qqp\n* Dataset: glue\n* Config: qqp\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: gchhablani/bert-base-cased-finetuned-qqp\n* Dataset: glue\n* Config: qqp\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] | [
13,
98,
15
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: gchhablani/bert-base-cased-finetuned-qqp\n* Dataset: glue\n* Config: qqp\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model."
] |
0b092afb93ac87046ff0da854e0f025408b23915 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Natural Language Inference
* Model: Alireza1044/mobilebert_mnli
* Dataset: glue
* Config: mnli
* Split: validation_matched
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-glue-mnli-026a6e-14686015 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-31T06:44:05+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "natural_language_inference", "model": "Alireza1044/mobilebert_mnli", "metrics": [], "dataset_name": "glue", "dataset_config": "mnli", "dataset_split": "validation_matched", "col_mapping": {"text1": "premise", "text2": "hypothesis", "target": "label"}}} | 2022-08-31T06:44:58+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Natural Language Inference
* Model: Alireza1044/mobilebert_mnli
* Dataset: glue
* Config: mnli
* Split: validation_matched
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @lewtun for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: Alireza1044/mobilebert_mnli\n* Dataset: glue\n* Config: mnli\n* Split: validation_matched\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: Alireza1044/mobilebert_mnli\n* Dataset: glue\n* Config: mnli\n* Split: validation_matched\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] | [
13,
92,
15
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: Alireza1044/mobilebert_mnli\n* Dataset: glue\n* Config: mnli\n* Split: validation_matched\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model."
] |
8d30d6afd086cb75a9a24e114001dcbadd64c5b4 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Natural Language Inference
* Model: Jiva/xlm-roberta-large-it-mnli
* Dataset: glue
* Config: mnli
* Split: validation_matched
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-glue-mnli-026a6e-14686017 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-31T06:45:37+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "natural_language_inference", "model": "Jiva/xlm-roberta-large-it-mnli", "metrics": [], "dataset_name": "glue", "dataset_config": "mnli", "dataset_split": "validation_matched", "col_mapping": {"text1": "premise", "text2": "hypothesis", "target": "label"}}} | 2022-08-31T06:48:14+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Natural Language Inference
* Model: Jiva/xlm-roberta-large-it-mnli
* Dataset: glue
* Config: mnli
* Split: validation_matched
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @lewtun for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: Jiva/xlm-roberta-large-it-mnli\n* Dataset: glue\n* Config: mnli\n* Split: validation_matched\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: Jiva/xlm-roberta-large-it-mnli\n* Dataset: glue\n* Config: mnli\n* Split: validation_matched\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] | [
13,
99,
15
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: Jiva/xlm-roberta-large-it-mnli\n* Dataset: glue\n* Config: mnli\n* Split: validation_matched\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model."
] |
b55cb6fad539ade72ccb0bf50f7cc661dc764116 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Natural Language Inference
* Model: nbhimte/tiny-bert-mnli-distilled
* Dataset: glue
* Config: mnli
* Split: validation_matched
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-glue-mnli-026a6e-14686020 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-31T06:50:59+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "natural_language_inference", "model": "nbhimte/tiny-bert-mnli-distilled", "metrics": [], "dataset_name": "glue", "dataset_config": "mnli", "dataset_split": "validation_matched", "col_mapping": {"text1": "premise", "text2": "hypothesis", "target": "label"}}} | 2022-08-31T06:51:25+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Natural Language Inference
* Model: nbhimte/tiny-bert-mnli-distilled
* Dataset: glue
* Config: mnli
* Split: validation_matched
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @lewtun for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: nbhimte/tiny-bert-mnli-distilled\n* Dataset: glue\n* Config: mnli\n* Split: validation_matched\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: nbhimte/tiny-bert-mnli-distilled\n* Dataset: glue\n* Config: mnli\n* Split: validation_matched\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] | [
13,
98,
15
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: nbhimte/tiny-bert-mnli-distilled\n* Dataset: glue\n* Config: mnli\n* Split: validation_matched\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model."
] |
2207c72eb1dfd42516e8bb8e8e428a1f15fc0f9e | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Natural Language Inference
* Model: JeremiahZ/roberta-base-qnli
* Dataset: glue
* Config: qnli
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-glue-qnli-1747ab-14696022 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-31T06:53:03+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "natural_language_inference", "model": "JeremiahZ/roberta-base-qnli", "metrics": [], "dataset_name": "glue", "dataset_config": "qnli", "dataset_split": "validation", "col_mapping": {"text1": "question", "text2": "sentence", "target": "label"}}} | 2022-08-31T06:53:56+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Natural Language Inference
* Model: JeremiahZ/roberta-base-qnli
* Dataset: glue
* Config: qnli
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @lewtun for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: JeremiahZ/roberta-base-qnli\n* Dataset: glue\n* Config: qnli\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: JeremiahZ/roberta-base-qnli\n* Dataset: glue\n* Config: qnli\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] | [
13,
92,
15
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: JeremiahZ/roberta-base-qnli\n* Dataset: glue\n* Config: qnli\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model."
] |
3efd19530c8c048329718b508bd997f08a1066ff |
This repository contains ShapeNetCore (v2) in [GLTF](https://en.wikipedia.org/wiki/GlTF) format, a subset of [ShapeNet](https://shapenet.org).
ShapeNetCore is a densely annotated subset of ShapeNet covering 55 common object categories with ~51,300 unique 3D models. Each model in ShapeNetCore is linked to an appropriate synset in [WordNet 3.0](https://wordnet.princeton.edu/).
If you use ShapeNet data, you agree to abide by the [ShapeNet terms of use](https://shapenet.org/terms). You are only allowed to redistribute the data to your research associates and colleagues provided that they first agree to be bound by these terms and conditions.
If you use this data, please cite the main ShapeNet technical report.
```
@techreport{shapenet2015,
title = {{ShapeNet: An Information-Rich 3D Model Repository}},
author = {Chang, Angel X. and Funkhouser, Thomas and Guibas, Leonidas and Hanrahan, Pat and Huang, Qixing and Li, Zimo and Savarese, Silvio and Savva, Manolis and Song, Shuran and Su, Hao and Xiao, Jianxiong and Yi, Li and Yu, Fisher},
number = {arXiv:1512.03012 [cs.GR]},
institution = {Stanford University --- Princeton University --- Toyota Technological Institute at Chicago},
year = {2015}
}
```
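Once the archives are downloaded and extracted, individual models can be inspected with any glTF-capable tool. A minimal sketch using the third-party `trimesh` library (an assumption, not tooling shipped with this repository; the directory layout and file name below are hypothetical):

```python
import trimesh  # third-party: pip install trimesh

# Hypothetical synset-id/model-id path to one extracted glTF model;
# verify the real layout after unpacking the archives.
scene = trimesh.load("02691156/1a04e3eab45ca15dd86060f189eb133/model.gltf")

# A glTF file typically loads as a Scene holding one or more meshes.
for name, geometry in scene.geometry.items():
    print(name, geometry.vertices.shape, geometry.faces.shape)
```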
For more information, please contact us at [email protected] and indicate ShapeNetCore v2 in the title of your email.
| ShapeNet/shapenetcore-gltf | [
"language:en",
"license:other",
"3D shapes",
"region:us"
] | 2022-08-31T07:04:32+00:00 | {"language": ["en"], "license": "other", "pretty_name": "ShapeNetCore", "tags": ["3D shapes"], "extra_gated_heading": "Acknowledge license to accept the repository", "extra_gated_prompt": "To request access to this ShapeNet repo, you will need to provide your **full name** (please provide both your first and last name), the name of your **advisor or the principal investigator (PI)** of your lab (in the PI/Advisor) fields, and the **school or company** that you are affiliated with (the **Affiliation** field). After requesting access to this ShapeNet repo, you will be considered for access approval. \n\nAfter access approval, you (the \"Researcher\") receive permission to use the ShapeNet database (the \"Database\") at Princeton University and Stanford University. In exchange for being able to join the ShapeNet community and receive such permission, Researcher hereby agrees to the following terms and conditions: Researcher shall use the Database only for non-commercial research and educational purposes. Princeton University and Stanford University make no representations or warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose. Researcher accepts full responsibility for his or her use of the Database and shall defend and indemnify Princeton University and Stanford University, including their employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the Database, including but not limited to Researcher's use of any copies of copyrighted 3D models that he or she may create from the Database. Researcher may provide research associates and colleagues with access to the Database provided that they first agree to be bound by these terms and conditions. Princeton University and Stanford University reserve the right to terminate Researcher's access to the Database at any time. If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer. The law of the State of New Jersey shall apply to all disputes under this agreement.\n\nFor access to the data, please fill in your **full name** (both first and last name), the name of your **advisor or principal investigator (PI)**, and the name of the **school or company** you are affliated with. Please actually fill out the fields (DO NOT put the word \"Advisor\" for PI/Advisor and the word \"School\" for \"Affiliation\", please specify the name of your advisor and the name of your school).", "extra_gated_fields": {"Name": "text", "PI/Advisor": "text", "Affiliation": "text", "Purpose": "text", "Country": "text", "I agree to use this dataset for non-commercial use ONLY": "checkbox"}} | 2023-09-20T14:03:13+00:00 | [] | [
"en"
] | TAGS
#language-English #license-other #3D shapes #region-us
|
This repository contains ShapeNetCore (v2) in GLTF format, a subset of ShapeNet.
ShapeNetCore is a densely annotated subset of ShapeNet covering 55 common object categories with ~51,300 unique 3D models. Each model in ShapeNetCore is linked to an appropriate synset in WordNet 3.0.
If you use ShapeNet data, you agree to abide by the ShapeNet terms of use. You are only allowed to redistribute the data to your research associates and colleagues provided that they first agree to be bound by these terms and conditions.
If you use this data, please cite the main ShapeNet technical report.
For more information, please contact us at shapenetwebmaster@URL and indicate ShapeNetCore v2 in the title of your email.
| [] | [
"TAGS\n#language-English #license-other #3D shapes #region-us \n"
] | [
19
] | [
"passage: TAGS\n#language-English #license-other #3D shapes #region-us \n"
] |
75b1f32f2ebf11639ee2e1f0df219a0b9bcd1ef6 |
This repository contains ShapeNetCore (v2) in [GLB](https://en.wikipedia.org/wiki/GlTF#GLB) format, a subset of [ShapeNet](https://shapenet.org).
ShapeNetCore is a densely annotated subset of ShapeNet covering 55 common object categories with ~51,300 unique 3D models. Each model in ShapeNetCore is linked to an appropriate synset in [WordNet 3.0](https://wordnet.princeton.edu/).
If you use ShapeNet data, you agree to abide by the [ShapeNet terms of use](https://shapenet.org/terms). You are only allowed to redistribute the data to your research associates and colleagues provided that they first agree to be bound by these terms and conditions.
If you use this data, please cite the main ShapeNet technical report.
```
@techreport{shapenet2015,
title = {{ShapeNet: An Information-Rich 3D Model Repository}},
author = {Chang, Angel X. and Funkhouser, Thomas and Guibas, Leonidas and Hanrahan, Pat and Huang, Qixing and Li, Zimo and Savarese, Silvio and Savva, Manolis and Song, Shuran and Su, Hao and Xiao, Jianxiong and Yi, Li and Yu, Fisher},
number = {arXiv:1512.03012 [cs.GR]},
institution = {Stanford University --- Princeton University --- Toyota Technological Institute at Chicago},
year = {2015}
}
```
For more information, please contact us at [email protected] and indicate ShapeNetCore v2 in the title of your email.
| ShapeNet/shapenetcore-glb | [
"language:en",
"license:other",
"3D shapes",
"region:us"
] | 2022-08-31T07:04:51+00:00 | {"language": ["en"], "license": "other", "pretty_name": "ShapeNetCore", "tags": ["3D shapes"], "extra_gated_heading": "Acknowledge license to accept the repository", "extra_gated_prompt": "To request access to this ShapeNet repo, you will need to provide your **full name** (please provide both your first and last name), the name of your **advisor or the principal investigator (PI)** of your lab (in the PI/Advisor) fields, and the **school or company** that you are affiliated with (the **Affiliation** field). After requesting access to this ShapeNet repo, you will be considered for access approval. \n\nAfter access approval, you (the \"Researcher\") receive permission to use the ShapeNet database (the \"Database\") at Princeton University and Stanford University. In exchange for being able to join the ShapeNet community and receive such permission, Researcher hereby agrees to the following terms and conditions: Researcher shall use the Database only for non-commercial research and educational purposes. Princeton University and Stanford University make no representations or warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose. Researcher accepts full responsibility for his or her use of the Database and shall defend and indemnify Princeton University and Stanford University, including their employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the Database, including but not limited to Researcher's use of any copies of copyrighted 3D models that he or she may create from the Database. Researcher may provide research associates and colleagues with access to the Database provided that they first agree to be bound by these terms and conditions. Princeton University and Stanford University reserve the right to terminate Researcher's access to the Database at any time. If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer. The law of the State of New Jersey shall apply to all disputes under this agreement.\n\nFor access to the data, please fill in your **full name** (both first and last name), the name of your **advisor or principal investigator (PI)**, and the name of the **school or company** you are affliated with. Please actually fill out the fields (DO NOT put the word \"Advisor\" for PI/Advisor and the word \"School\" for \"Affiliation\", please specify the name of your advisor and the name of your school).", "extra_gated_fields": {"Name": "text", "PI/Advisor": "text", "Affiliation": "text", "Purpose": "text", "Country": "text", "I agree to use this dataset for non-commercial use ONLY": "checkbox"}} | 2023-09-20T14:04:40+00:00 | [] | [
"en"
] | TAGS
#language-English #license-other #3D shapes #region-us
|
This repository contains ShapeNetCore (v2) in GLB format, a subset of ShapeNet.
ShapeNetCore is a densely annotated subset of ShapeNet covering 55 common object categories with ~51,300 unique 3D models. Each model in ShapeNetCore is linked to an appropriate synset in WordNet 3.0.
If you use ShapeNet data, you agree to abide by the ShapeNet terms of use. You are only allowed to redistribute the data to your research associates and colleagues provided that they first agree to be bound by these terms and conditions.
If you use this data, please cite the main ShapeNet technical report.
For more information, please contact us at shapenetwebmaster@URL and indicate ShapeNetCore v2 in the title of your email.
| [] | [
"TAGS\n#language-English #license-other #3D shapes #region-us \n"
] | [
19
] | [
"passage: TAGS\n#language-English #license-other #3D shapes #region-us \n"
] |
7bcd8d67060c921ea89a52433ce80e7dc753784c | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: autoevaluate/multi-class-classification
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
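The stored predictions can also be rescored offline. A small sketch using the `evaluate` library's `matthews_correlation` metric, the metric listed for this job (the label values below are toy data; real inputs would pair this repository's predictions with the `emotion` test split):

```python
import evaluate

mcc = evaluate.load("matthews_correlation")

# Toy integer labels standing in for real predictions/references.
print(mcc.compute(predictions=[0, 2, 1, 3], references=[0, 1, 2, 3]))
```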
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-a741994f-efcd-40c8-8652-be4f42ba26cd-31 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-31T07:09:13+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "autoevaluate/multi-class-classification", "metrics": ["matthews_correlation"], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}} | 2022-08-31T07:10:00+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Multi-class Text Classification
* Model: autoevaluate/multi-class-classification
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @lewtun for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: autoevaluate/multi-class-classification\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: autoevaluate/multi-class-classification\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] | [
13,
87,
15
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: autoevaluate/multi-class-classification\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model."
] |
9ad5d61faaa69bf55d889259015496b6d39ea90a | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Natural Language Inference
* Model: gchhablani/bert-base-cased-finetuned-qnli
* Dataset: glue
* Config: qnli
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-glue-qnli-1747ab-14696030 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-31T07:09:23+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "natural_language_inference", "model": "gchhablani/bert-base-cased-finetuned-qnli", "metrics": [], "dataset_name": "glue", "dataset_config": "qnli", "dataset_split": "validation", "col_mapping": {"text1": "question", "text2": "sentence", "target": "label"}}} | 2022-08-31T07:10:09+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Natural Language Inference
* Model: gchhablani/bert-base-cased-finetuned-qnli
* Dataset: glue
* Config: qnli
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @lewtun for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: gchhablani/bert-base-cased-finetuned-qnli\n* Dataset: glue\n* Config: qnli\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: gchhablani/bert-base-cased-finetuned-qnli\n* Dataset: glue\n* Config: qnli\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] | [
13,
99,
15
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: gchhablani/bert-base-cased-finetuned-qnli\n* Dataset: glue\n* Config: qnli\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @lewtun for evaluating this model."
] |
cc04efc6edd44fc890b7625b82e36e023a353c59 |
# Dataset Card for SANAD
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://data.mendeley.com/datasets/57zpx667y9/2
### Dataset Summary
SANAD Dataset is a large collection of Arabic news articles that can be used in different Arabic NLP tasks such as Text Classification and Word Embedding. The articles were collected using Python scripts written specifically for three popular news websites: AlKhaleej, AlArabiya and Akhbarona. All datasets have seven categories [Culture, Finance, Medical, Politics, Religion, Sports and Tech], except AlArabiya, which doesn’t have [Religion]. SANAD contains a total of 190k+ articles.
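As a sketch of how the corpus might be pulled from the Hub (the config and split names below are assumptions; list the real ones with `datasets.get_dataset_config_names("khalidalt/SANAD")` before use):

```python
from datasets import load_dataset

# Hypothetical per-source config name; the three news portals suggest
# per-source configs, but confirm with get_dataset_config_names().
sanad = load_dataset("khalidalt/SANAD", "alkhaleej", split="train")

print(sanad[0])  # expected: article text plus one of the seven category labels
```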
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
license: cc-by-4.0
### Citation Information
```
@article{einea2019sanad,
title={Sanad: Single-label arabic news articles dataset for automatic text categorization},
author={Einea, Omar and Elnagar, Ashraf and Al Debsi, Ridhwan},
journal={Data in brief},
volume={25},
pages={104076},
year={2019},
publisher={Elsevier}
}
```
### Contributions
| khalidalt/SANAD | [
"license:cc-by-4.0",
"region:us"
] | 2022-08-31T12:34:53+00:00 | {"license": "cc-by-4.0"} | 2022-09-03T18:36:00+00:00 | [] | [] | TAGS
#license-cc-by-4.0 #region-us
|
# Dataset Card for SANAD
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:URL
### Dataset Summary
SANAD Dataset is a large collection of Arabic news articles that can be used in different Arabic NLP tasks such as Text Classification and Word Embedding. The articles were collected using Python scripts written specifically for three popular news websites: AlKhaleej, AlArabiya and Akhbarona. All datasets have seven categories [Culture, Finance, Medical, Politics, Religion, Sports and Tech], except AlArabiya, which doesn’t have [Religion]. SANAD contains a total of 190k+ articles.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
license: cc-by-4.0
### Contributions
| [
"# Dataset Card for SANAD",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:URL",
"### Dataset Summary\n\nSANAD Dataset is a large collection of Arabic news articles that can be used in different Arabic NLP tasks such as Text Classification and Word Embedding. The articles were collected using Python scripts written specifically for three popular news websites: AlKhaleej, AlArabiya and Akhbarona. All datasets have seven categories [Culture, Finance, Medical, Politics, Religion, Sports and Tech], except AlArabiya which doesn’t have [Religion]. SANAD contains a total number of 190k+ articles.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nlicense: cc-by-4.0",
"### Contributions"
] | [
"TAGS\n#license-cc-by-4.0 #region-us \n",
"# Dataset Card for SANAD",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:URL",
"### Dataset Summary\n\nSANAD Dataset is a large collection of Arabic news articles that can be used in different Arabic NLP tasks such as Text Classification and Word Embedding. The articles were collected using Python scripts written specifically for three popular news websites: AlKhaleej, AlArabiya and Akhbarona. All datasets have seven categories [Culture, Finance, Medical, Politics, Religion, Sports and Tech], except AlArabiya which doesn’t have [Religion]. SANAD contains a total number of 190k+ articles.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nlicense: cc-by-4.0",
"### Contributions"
] | [
15,
7,
125,
8,
126,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
14,
5
] | [
"passage: TAGS\n#license-cc-by-4.0 #region-us \n# Dataset Card for SANAD## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage:URL### Dataset Summary\n\nSANAD Dataset is a large collection of Arabic news articles that can be used in different Arabic NLP tasks such as Text Classification and Word Embedding. The articles were collected using Python scripts written specifically for three popular news websites: AlKhaleej, AlArabiya and Akhbarona. All datasets have seven categories [Culture, Finance, Medical, Politics, Religion, Sports and Tech], except AlArabiya which doesn’t have [Religion]. SANAD contains a total number of 190k+ articles.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information\n\nlicense: cc-by-4.0### Contributions"
] |
78d2052bec6926a380c29fafca8557bced46ad43 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/tinyroberta-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
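Predictions in this format can be scored offline against reference answers. A minimal sketch with the `evaluate` library's `squad_v2` metric (the single id/answer pair below is made up for illustration; real inputs would come from this repository's prediction files and the `squad_v2` validation split):

```python
import evaluate

squad_v2_metric = evaluate.load("squad_v2")

# Toy example in the format the metric expects.
predictions = [{
    "id": "toy-0",
    "prediction_text": "1990",
    "no_answer_probability": 0.0,
}]
references = [{
    "id": "toy-0",
    "answers": {"text": ["1990"], "answer_start": [159]},
}]

print(squad_v2_metric.compute(predictions=predictions, references=references))
```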
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. | autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-76c05b-14906065 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-31T20:49:09+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/tinyroberta-squad2", "metrics": ["bertscore"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-08-31T20:51:46+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: deepset/tinyroberta-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @nonchalant-nagavalli for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/tinyroberta-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/tinyroberta-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model."
] | [
13,
93,
20
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/tinyroberta-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model."
] |
ce4204c2bd9b8eb2d0872b9b0ea63f0200030771 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/roberta-base-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. | autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-76c05b-14906066 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-31T20:49:15+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/roberta-base-squad2", "metrics": ["bertscore"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-08-31T20:52:06+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: deepset/roberta-base-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @nonchalant-nagavalli for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/roberta-base-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/roberta-base-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model."
] | [
13,
94,
20
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/roberta-base-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model."
] |
101220450c4e9337566488a595372390246937c9 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/roberta-large-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. | autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-76c05b-14906067 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-31T20:49:20+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/roberta-large-squad2", "metrics": ["bertscore"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-08-31T20:53:49+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: deepset/roberta-large-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @nonchalant-nagavalli for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/roberta-large-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/roberta-large-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model."
] | [
13,
95,
20
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: deepset/roberta-large-squad2\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @nonchalant-nagavalli for evaluating this model."
] |
72b9520267fa0633669d76cdf4968d6c25521b96 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/xlm-roberta-base-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. | autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-76c05b-14906068 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-31T20:52:17+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/xlm-roberta-base-squad2", "metrics": ["bertscore"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-08-31T20:55:28+00:00 | [] | [] |
cca945ceb6b114937af9e69853666dc3d12ef1c0 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/xlm-roberta-large-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. | autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-76c05b-14906069 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-31T20:52:38+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/xlm-roberta-large-squad2", "metrics": ["bertscore"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-08-31T20:57:53+00:00 | [] | [] |
b66f3c90f539de1eb33ae4b3b6e84c86e67d644a | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/roberta-base-squad2-covid
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. | autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-76c05b-14906070 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-31T20:54:22+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/roberta-base-squad2-covid", "metrics": ["bertscore"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-08-31T20:57:24+00:00 | [] | [] |
ec55bc782a252819ffe12f8097640286e5130157 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/roberta-base-squad2-distilled
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. | autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-76c05b-14906071 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-31T20:55:58+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/roberta-base-squad2-distilled", "metrics": ["bertscore"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-08-31T20:58:49+00:00 | [] | [] |
85b74f86f553a969c7d22d22ee177c07739ede2f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/xlm-roberta-base-squad2-distilled
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. | autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-76c05b-14906072 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-31T20:57:56+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/xlm-roberta-base-squad2-distilled", "metrics": ["bertscore"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-08-31T21:01:20+00:00 | [] | [] |
6441cc0b487b62b88a44999da1d1a6df5051db1d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/bert-base-cased-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. | autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-38b250-14916074 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-31T20:58:30+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/bert-base-cased-squad2", "metrics": ["bertscore"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-08-31T21:01:33+00:00 | [] | [] |
9e7039c7a58178ec63a3938b449bbd35ebf912df | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepakvk/roberta-base-squad2-finetuned-squad
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. | autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-76c05b-14906073 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-31T20:59:21+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepakvk/roberta-base-squad2-finetuned-squad", "metrics": ["bertscore"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-08-31T21:02:14+00:00 | [] | [] |
30825b4b2d8e9b5671ec15a8218bdda56f470b0b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/bert-medium-squad2-distilled
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. | autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-38b250-14916077 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-31T21:01:55+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/bert-medium-squad2-distilled", "metrics": ["bertscore"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-08-31T21:04:31+00:00 | [] | [] |
ac5b4b0694f05ab94ed402208b645204dbc7f685 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/bert-base-uncased-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. | autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-38b250-14916076 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-31T21:03:09+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/bert-base-uncased-squad2", "metrics": ["bertscore"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-08-31T21:06:09+00:00 | [] | [] |
70eb6800ed3b65b6ef9c1b424928669979a9e322 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/bert-large-uncased-whole-word-masking-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. | autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-38b250-14916078 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-31T21:05:09+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/bert-large-uncased-whole-word-masking-squad2", "metrics": ["bertscore"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-08-31T21:10:07+00:00 | [] | [] |
4ae1a5e50013521e0d49bacbc0e4759230b2e0c7 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/deberta-v3-large-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. | autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-38b250-14916080 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-31T21:06:43+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/deberta-v3-large-squad2", "metrics": ["bertscore"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-08-31T21:13:11+00:00 | [] | [] |
2455dc91a08af79fa79ed41e9a60ceec159629c0 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/tinybert-6l-768d-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. | autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-38b250-14916075 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-31T21:09:08+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/tinybert-6l-768d-squad2", "metrics": ["bertscore"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-08-31T21:11:47+00:00 | [] | [] |
e3290585c7c08b65826dbf628bb64eb9e3d60e92 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/deberta-v3-base-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. | autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-38b250-14916079 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-31T21:10:39+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/deberta-v3-base-squad2", "metrics": ["bertscore"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-08-31T21:14:24+00:00 | [] | [] |
6403e178c742dcd7c2b572e9e4df8f33577eb62d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/electra-base-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. | autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-38b250-14916081 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-31T21:10:55+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/electra-base-squad2", "metrics": ["bertscore"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-08-31T21:14:00+00:00 | [] | [] |
6ec84a0ec5da70e845deca75ffa6141a28839907 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: deepset/minilm-uncased-squad2
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model. | autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-38b250-14916082 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-08-31T21:12:29+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "deepset/minilm-uncased-squad2", "metrics": ["bertscore"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-08-31T21:15:10+00:00 | [] | [] |
ba06dc05a1b91c497f489bfa9793acdfb4ce06ec |
# Dataset Card for GLUE
## Table of Contents
- [Dataset Card for GLUE](#dataset-card-for-glue)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [ax](#ax)
- [cola](#cola)
- [mnli](#mnli)
- [mnli_matched](#mnli_matched)
- [mnli_mismatched](#mnli_mismatched)
- [mrpc](#mrpc)
- [qnli](#qnli)
- [qqp](#qqp)
- [rte](#rte)
- [sst2](#sst2)
- [stsb](#stsb)
- [wnli](#wnli)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [ax](#ax-1)
- [cola](#cola-1)
- [mnli](#mnli-1)
- [mnli_matched](#mnli_matched-1)
- [mnli_mismatched](#mnli_mismatched-1)
- [mrpc](#mrpc-1)
- [qnli](#qnli-1)
- [qqp](#qqp-1)
- [rte](#rte-1)
- [sst2](#sst2-1)
- [stsb](#stsb-1)
- [wnli](#wnli-1)
- [Data Fields](#data-fields)
- [ax](#ax-2)
- [cola](#cola-2)
- [mnli](#mnli-2)
- [mnli_matched](#mnli_matched-2)
- [mnli_mismatched](#mnli_mismatched-2)
- [mrpc](#mrpc-2)
- [qnli](#qnli-2)
- [qqp](#qqp-2)
- [rte](#rte-2)
- [sst2](#sst2-2)
- [stsb](#stsb-2)
- [wnli](#wnli-2)
- [Data Splits](#data-splits)
- [ax](#ax-3)
- [cola](#cola-3)
- [mnli](#mnli-3)
- [mnli_matched](#mnli_matched-3)
- [mnli_mismatched](#mnli_mismatched-3)
- [mrpc](#mrpc-3)
- [qnli](#qnli-3)
- [qqp](#qqp-3)
- [rte](#rte-3)
- [sst2](#sst2-3)
- [stsb](#stsb-3)
- [wnli](#wnli-3)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://nyu-mll.github.io/CoLA/](https://nyu-mll.github.io/CoLA/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 955.33 MB
- **Size of the generated dataset:** 229.68 MB
- **Total amount of disk used:** 1185.01 MB
### Dataset Summary
GLUE, the General Language Understanding Evaluation benchmark (https://gluebenchmark.com/), is a collection of resources for training, evaluating, and analyzing natural language understanding systems.
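Each of the tasks below is exposed as a separate configuration of the `glue` dataset, so a single `load_dataset` call retrieves one of them. A minimal sketch using the CoLA config:

```python
from datasets import load_dataset

# Each GLUE task is a named config of the "glue" dataset on the Hub.
cola = load_dataset("glue", "cola")

print(cola)              # DatasetDict with train / validation / test splits
print(cola["train"][0])  # a single acceptability-annotated sentence
```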
### Supported Tasks and Leaderboards
The leaderboard for the GLUE benchmark can be found [at this address](https://gluebenchmark.com/). It comprises the following tasks:
#### ax
A manually-curated evaluation dataset for fine-grained analysis of system performance on a broad range of linguistic phenomena. This dataset evaluates sentence understanding through Natural Language Inference (NLI) problems. Use a model trained on MultiNLI to produce predictions for this dataset.
#### cola
The Corpus of Linguistic Acceptability consists of English acceptability judgments drawn from books and journal articles on linguistic theory. Each example is a sequence of words annotated with whether it is a grammatical English sentence.
#### mnli
The Multi-Genre Natural Language Inference Corpus is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypothesis (contradiction), or neither (neutral). The premise sentences are gathered from ten different sources, including transcribed speech, fiction, and government reports. The authors of the benchmark use the standard test set, for which they obtained private labels from the authors of the corpus, and evaluate on both the matched (in-domain) and mismatched (cross-domain) sections. They also use and recommend the SNLI corpus as 550k examples of auxiliary training data.
#### mnli_matched
The matched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.
#### mnli_mismatched
The mismatched validation and test splits from MNLI. See the "mnli" BuilderConfig for additional information.
#### mrpc
The Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is a corpus of sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent.
#### qnli
The Stanford Question Answering Dataset is a question-answering dataset consisting of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The authors of the benchmark convert the task into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue.
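The conversion described above is easy to picture as a small preprocessing step: pair the question with each sentence of its paragraph and label the pair by whether that sentence contains the answer. The sketch below is a toy illustration, not the benchmark authors' actual pipeline; the naive sentence splitter and word-overlap filter stand in for whatever they used.

```python
def qnli_pairs(question, paragraph, answer):
    """Toy version of the SQuAD-to-QNLI conversion described above."""
    pairs = []
    for sentence in paragraph.split(". "):  # simplistic sentence split
        # Crude stand-in for filtering out low-lexical-overlap pairs.
        overlap = set(question.lower().split()) & set(sentence.lower().split())
        if not overlap:
            continue
        label = "entailment" if answer in sentence else "not_entailment"
        pairs.append({"question": question, "sentence": sentence, "label": label})
    return pairs
```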
#### qqp
The Quora Question Pairs dataset is a collection of question pairs from the community question-answering website Quora. The task is to determine whether a pair of questions are semantically equivalent.
#### rte
The Recognizing Textual Entailment (RTE) datasets come from a series of annual textual entailment challenges. The authors of the benchmark combined the data from RTE1 (Dagan et al., 2006), RTE2 (Bar Haim et al., 2006), RTE3 (Giampiccolo et al., 2007), and RTE5 (Bentivogli et al., 2009). Examples are constructed based on news and Wikipedia text. The authors of the benchmark convert all datasets to a two-class split, where for three-class datasets they collapse neutral and contradiction into not entailment, for consistency.
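Concretely, the two-class conversion is just a label mapping; a minimal sketch of the collapse described above:

```python
# Three-way NLI labels collapsed to the two-way RTE scheme described above.
TO_TWO_CLASS = {
    "entailment": "entailment",
    "neutral": "not_entailment",
    "contradiction": "not_entailment",
}

print(TO_TWO_CLASS["neutral"])  # 'not_entailment'
```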
#### sst2
The Stanford Sentiment Treebank consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. It uses the two-way (positive/negative) class split, with only sentence-level labels.
#### stsb
The Semantic Textual Similarity Benchmark (Cer et al., 2017) is a collection of sentence pairs drawn from news headlines, video and image captions, and natural language inference data. Each pair is human-annotated with a similarity score from 1 to 5.
#### wnli
The Winograd Schema Challenge (Levesque et al., 2011) is a reading comprehension task in which a system must read a sentence with a pronoun and select the referent of that pronoun from a list of choices. The examples are manually constructed to foil simple statistical methods: Each one is contingent on contextual information provided by a single word or phrase in the sentence. To convert the problem into sentence pair classification, the authors of the benchmark construct sentence pairs by replacing the ambiguous pronoun with each possible referent. The task is to predict if the sentence with the pronoun substituted is entailed by the original sentence. They use a small evaluation set consisting of new examples derived from fiction books that was shared privately by the authors of the original corpus. While the included training set is balanced between two classes, the test set is imbalanced between them (65% not entailment). Also, due to a data quirk, the development set is adversarial: hypotheses are sometimes shared between training and development examples, so if a model memorizes the training examples, it will predict the wrong label on the corresponding development set example. As with QNLI, each example is evaluated separately, so there is not a systematic correspondence between a model's score on this task and its score on the unconverted original task. The authors of the benchmark call the converted dataset WNLI (Winograd NLI).
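The pronoun-substitution trick is easiest to see on a concrete schema. The helper below builds sentence pairs the way the paragraph describes; the example sentence and candidate referents are illustrative, not drawn from the actual corpus.

```python
import re

def wnli_pairs(sentence, pronoun, candidates):
    """Turn one Winograd schema into WNLI-style sentence pairs by substituting
    each candidate referent for the ambiguous pronoun, as described above."""
    pairs = []
    for candidate in candidates:
        # Word-boundary match so "it" does not also match the "it" in "fit".
        hypothesis = re.sub(rf"\b{re.escape(pronoun)}\b", candidate, sentence, count=1)
        pairs.append((sentence, hypothesis))
    return pairs

# Illustrative schema (not from the corpus); only one substitution is entailed.
for premise, hypothesis in wnli_pairs(
    "The trophy doesn't fit into the suitcase because it is too large.",
    "it",
    ["the trophy", "the suitcase"],
):
    print(premise, "=>", hypothesis)
```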
### Languages
The language data in GLUE is in English (BCP-47 `en`).
## Dataset Structure
### Data Instances
#### ax
- **Size of downloaded dataset files:** 0.21 MB
- **Size of the generated dataset:** 0.23 MB
- **Total amount of disk used:** 0.44 MB
An example of 'test' looks as follows.
```
{
"premise": "The cat sat on the mat.",
"hypothesis": "The cat did not sit on the mat.",
"label": -1,
"idx: 0
}
```
#### cola
- **Size of downloaded dataset files:** 0.36 MB
- **Size of the generated dataset:** 0.58 MB
- **Total amount of disk used:** 0.94 MB
An example of 'train' looks as follows.
```
{
"sentence": "Our friends won't buy this analysis, let alone the next one we propose.",
"label": 1,
"id": 0
}
```
#### mnli
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 78.65 MB
- **Total amount of disk used:** 376.95 MB
An example of 'train' looks as follows.
```
{
"premise": "Conceptually cream skimming has two basic dimensions - product and geography.",
"hypothesis": "Product and geography are what make cream skimming work.",
"label": 1,
"idx": 0
}
```
#### mnli_matched
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 3.52 MB
- **Total amount of disk used:** 301.82 MB
An example of 'test' looks as follows.
```
{
"premise": "Hierbas, ans seco, ans dulce, and frigola are just a few names worth keeping a look-out for.",
"hypothesis": "Hierbas is a name worth looking out for.",
"label": -1,
"idx": 0
}
```
#### mnli_mismatched
- **Size of downloaded dataset files:** 298.29 MB
- **Size of the generated dataset:** 3.73 MB
- **Total amount of disk used:** 302.02 MB
An example of 'test' looks as follows.
```
{
"premise": "What have you decided, what are you going to do?",
"hypothesis": "So what's your decision?,
"label": -1,
"idx": 0
}
```
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Fields
The data fields are the same among all splits.
#### ax
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.
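Since `label` is stored as a `ClassLabel` feature, the integer values can be mapped back to their string names; a minimal sketch:
```python
from datasets import load_dataset

ax = load_dataset("glue", "ax", split="test")
label = ax.features["label"]
print(label.names)       # ['entailment', 'neutral', 'contradiction']
print(label.int2str(0))  # 'entailment'
```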
#### cola
- `sentence`: a `string` feature.
- `label`: a classification label, with possible values including `unacceptable` (0), `acceptable` (1).
- `idx`: an `int32` feature.
#### mnli
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.
#### mnli_matched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.
#### mnli_mismatched
- `premise`: a `string` feature.
- `hypothesis`: a `string` feature.
- `label`: a classification label, with possible values including `entailment` (0), `neutral` (1), `contradiction` (2).
- `idx`: an `int32` feature.
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Data Splits
#### ax
| |test|
|---|---:|
|ax |1104|
#### cola
| |train|validation|test|
|----|----:|---------:|---:|
|cola| 8551| 1043|1063|
#### mnli
| |train |validation_matched|validation_mismatched|test_matched|test_mismatched|
|----|-----:|-----------------:|--------------------:|-----------:|--------------:|
|mnli|392702| 9815| 9832| 9796| 9847|
#### mnli_matched
| |validation|test|
|------------|---------:|---:|
|mnli_matched| 9815|9796|
#### mnli_mismatched
| |validation|test|
|---------------|---------:|---:|
|mnli_mismatched| 9832|9847|
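The row counts in the tables above can be reproduced with a short sketch:
```python
from datasets import load_dataset

mnli = load_dataset("glue", "mnli")
print({split: ds.num_rows for split, ds in mnli.items()})
# {'train': 392702, 'validation_matched': 9815, 'validation_mismatched': 9832,
#  'test_matched': 9796, 'test_mismatched': 9847}
```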
#### mrpc
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### qqp
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### rte
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### sst2
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### stsb
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### wnli
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{warstadt2018neural,
title={Neural Network Acceptability Judgments},
author={Warstadt, Alex and Singh, Amanpreet and Bowman, Samuel R},
journal={arXiv preprint arXiv:1805.12471},
year={2018}
}
@inproceedings{wang2019glue,
title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},
author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},
note={In the Proceedings of ICLR.},
year={2019}
}
```
Note that each GLUE dataset has its own citation. Please see the source for the correct citation for each contained dataset.
### Contributions
Thanks to [@patpizio](https://github.com/patpizio), [@jeswan](https://github.com/jeswan), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset.
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: ptnv-s/biobert_squad2_cased-finetuned-squad
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
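As an illustrative sketch (not the exact AutoTrain pipeline), the evaluation split referenced above can be loaded for inspection:
```python
from datasets import load_dataset

squad_v2 = load_dataset("squad_v2", split="validation")
print(squad_v2.column_names)  # ['id', 'title', 'context', 'question', 'answers']
```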
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: gerardozq/biobert_v1.1_pubmed-finetuned-squad
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@nonchalant-nagavalli](https://huggingface.co/nonchalant-nagavalli) for evaluating this model.
# Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: facebook/bart-large-cnn
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
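An illustrative sketch of running the evaluated model on a single test article (the actual AutoTrain job configuration may differ):
```python
from datasets import load_dataset
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
sample = load_dataset("cnn_dailymail", "3.0.0", split="test")[0]
# Truncate long articles to the model's maximum input length.
print(summarizer(sample["article"], truncation=True)[0]["summary_text"])
```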
## Contributions
Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model.
This is the dataset used to post-train the [BERTweet](https://huggingface.co/cardiffnlp/twitter-roberta-base) language model on a Masked Language Modeling (MLM) task, resulting in the [CryptoBERT](https://huggingface.co/ElKulako/cryptobert) language model.
The dataset contains 3.207 million unique posts from the cryptocurrency-related social media domain.
The dataset contains 1.865 million StockTwits posts, 496 thousand tweets, 172 thousand Reddit comments and 664 thousand Telegram messages. | ElKulako/cryptobert-posttrain | [
"license:afl-3.0",
"region:us"
] | 2022-09-01T03:10:42+00:00 | {"license": "afl-3.0"} | 2022-09-01T03:22:42+00:00 | [] | [] | TAGS
#license-afl-3.0 #region-us
| This is the dataset used to post-train the BERTweet language model on a Masked Language Modeling (MLM) task, resulting in the CryptoBERT language model.
The dataset contains 3.207 million unique posts from the cryptocurrency-related social media domain.
The dataset contains 1.865 million StockTwits posts, 496 thousand tweets, 172 thousand Reddit comments and 664 thousand Telegram messages. | [] | [
"TAGS\n#license-afl-3.0 #region-us \n"
] | [
14
] | [
"passage: TAGS\n#license-afl-3.0 #region-us \n"
] |
8982dbea4a595589b7ebe46b3d7eec6707eeea16 |
# Dataset Card for environmental_claims
## Dataset Description
- **Homepage:** [climatebert.ai](https://climatebert.ai)
- **Repository:**
- **Paper:** [arxiv.org/abs/2209.00507](https://arxiv.org/abs/2209.00507)
- **Leaderboard:**
- **Point of Contact:** [Dominik Stammbach](mailto:[email protected])
### Dataset Summary
We introduce an expert-annotated dataset for detecting real-world environmental claims made by listed companies.
### Supported Tasks and Leaderboards
The dataset supports a binary classification task of whether a given sentence is an environmental claim or not.
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
```
{
"text": "It will enable E.ON to acquire and leverage a comprehensive understanding of the transfor- mation of the energy system and the interplay between the individual submarkets in regional and local energy supply sys- tems.",
"label": 0
}
```
### Data Fields
- text: a sentence extracted from corporate annual reports, sustainability reports, and earnings call transcripts
- label: the label (0 -> no environmental claim, 1 -> environmental claim)
### Data Splits
The dataset is split into:
- train: 2,400
- validation: 300
- test: 300
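
A minimal loading sketch for these splits, assuming the `datasets` library is installed:

```python
from datasets import load_dataset

# Load all three splits of the dataset from the Hugging Face Hub.
dataset = load_dataset("climatebert/environmental_claims")

# Inspect split sizes and one sample instance.
print({split: dataset[split].num_rows for split in dataset})
print(dataset["train"][0])  # {'text': '...', 'label': 0 or 1}
```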
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
Our dataset contains environmental claims by firms, often in the financial domain. We collect text from corporate annual reports, sustainability reports, and earnings call transcripts.
For more information regarding our sample selection, please refer to Appendix B of our paper, which is provided for [citation](#citation-information).
#### Who are the source language producers?
Mainly large listed companies.
### Annotations
#### Annotation process
For more information on our annotation process and annotation guidelines, please refer to Appendix C of our paper, which is provided for [citation](#citation-information).
#### Who are the annotators?
The authors and students at the University of Zurich with majors in finance and sustainable finance.
### Personal and Sensitive Information
Since our text sources contain public information, no personal and sensitive information should be included.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
- Dominik Stammbach
- Nicolas Webersinke
- Julia Anna Bingler
- Mathias Kraus
- Markus Leippold
### Licensing Information
This dataset is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license (cc-by-nc-sa-4.0). To view a copy of this license, visit [creativecommons.org/licenses/by-nc-sa/4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/).
If you are interested in commercial use of the dataset, please contact [[email protected]](mailto:[email protected]).
### Citation Information
```bibtex
@misc{stammbach2022environmentalclaims,
title = {A Dataset for Detecting Real-World Environmental Claims},
author = {Stammbach, Dominik and Webersinke, Nicolas and Bingler, Julia Anna and Kraus, Mathias and Leippold, Markus},
year = {2022},
doi = {10.48550/ARXIV.2209.00507},
url = {https://arxiv.org/abs/2209.00507},
publisher = {arXiv},
}
```
### Contributions
Thanks to [@webersni](https://github.com/webersni) for adding this dataset. | climatebert/environmental_claims | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-sa-4.0",
"arxiv:2209.00507",
"region:us"
] | 2022-09-01T13:19:17+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": "cc-by-nc-sa-4.0", "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "EnvironmentalClaims", "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "no", "1": "yes"}}}}], "splits": [{"name": "train", "num_bytes": 346686, "num_examples": 2117}, {"name": "validation", "num_bytes": 43018, "num_examples": 265}, {"name": "test", "num_bytes": 42810, "num_examples": 265}], "download_size": 272422, "dataset_size": 432514}} | 2023-05-23T07:53:10+00:00 | [
"2209.00507"
] | [
"en"
] | TAGS
#task_categories-text-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-nc-sa-4.0 #arxiv-2209.00507 #region-us
|
# Dataset Card for environmental_claims
## Dataset Description
- Homepage: URL
- Repository:
- Paper: URL
- Leaderboard:
- Point of Contact: Dominik Stammbach
### Dataset Summary
We introduce an expert-annotated dataset for detecting real-world environmental claims made by listed companies.
### Supported Tasks and Leaderboards
The dataset supports a binary classification task of whether a given sentence is an environmental claim or not.
### Languages
The text in the dataset is in English.
## Dataset Structure
### Data Instances
### Data Fields
- text: a sentence extracted from corporate annual reports, sustainability reports, and earnings call transcripts
- label: the label (0 -> no environmental claim, 1 -> environmental claim)
### Data Splits
The dataset is split into:
- train: 2,400
- validation: 300
- test: 300
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
Our dataset contains environmental claims by firms, often in the financial domain. We collect text from corporate annual reports, sustainability reports, and earnings call transcripts.
For more information regarding our sample selection, please refer to Appendix B of our paper, which is provided for citation.
#### Who are the source language producers?
Mainly large listed companies.
### Annotations
#### Annotation process
For more information on our annotation process and annotation guidelines, please refer to Appendix C of our paper, which is provided for citation.
#### Who are the annotators?
The authors and students at the University of Zurich with majors in finance and sustainable finance.
### Personal and Sensitive Information
Since our text sources contain public information, no personal and sensitive information should be included.
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
- Dominik Stammbach
- Nicolas Webersinke
- Julia Anna Bingler
- Mathias Kraus
- Markus Leippold
### Licensing Information
This dataset is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license (cc-by-nc-sa-4.0). To view a copy of this license, visit URL
If you are interested in commercial use of the dataset, please contact markus.leippold@URL.
### Contributions
Thanks to @webersni for adding this dataset. | [
"# Dataset Card for environmental_claims",
"## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper: URL\n- Leaderboard:\n- Point of Contact: Dominik Stammbach",
"### Dataset Summary\n\nWe introduce an expert-annotated dataset for detecting real-world environmental claims made by listed companies.",
"### Supported Tasks and Leaderboards\n\nThe dataset supports a binary classification task of whether a given sentence is an environmental claim or not.",
"### Languages\n\nThe text in the dataset is in English.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- text: a sentence extracted from corporate annual reports, sustainability reports and earning calls transcripts\n- label: the label (0 -> no environmental claim, 1 -> environmental claim)",
"### Data Splits\n\nThe dataset is split into:\n- train: 2,400\n- validation: 300\n- test: 300",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nOur dataset contains environmental claims by firms, often in the financial domain. We collect text from corporate annual reports, sustainability reports, and earning calls transcripts.\n\nFor more information regarding our sample selection, please refer to Appendix B of our paper, which is provided for citation.",
"#### Who are the source language producers?\n\nMainly large listed companies.",
"### Annotations",
"#### Annotation process\n\nFor more information on our annotation process and annotation guidelines, please refer to Appendix C of our paper, which is provided for citation.",
"#### Who are the annotators?\n\nThe authors and students at University of Zurich with majors in finance and sustainable finance.",
"### Personal and Sensitive Information\n\nSince our text sources contain public information, no personal and sensitive information should be included.",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\n- Dominik Stammbach\n- Nicolas Webersinke\n- Julia Anna Bingler\n- Mathias Kraus\n- Markus Leippold",
"### Licensing Information\n\nThis dataset is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license (cc-by-nc-sa-4.0). To view a copy of this license, visit URL\n\nIf you are interested in commercial use of the dataset, please contact markus.leippold@URL.",
"### Contributions\n\nThanks to @webersni for adding this dataset."
] | [
"TAGS\n#task_categories-text-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-nc-sa-4.0 #arxiv-2209.00507 #region-us \n",
"# Dataset Card for environmental_claims",
"## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper: URL\n- Leaderboard:\n- Point of Contact: Dominik Stammbach",
"### Dataset Summary\n\nWe introduce an expert-annotated dataset for detecting real-world environmental claims made by listed companies.",
"### Supported Tasks and Leaderboards\n\nThe dataset supports a binary classification task of whether a given sentence is an environmental claim or not.",
"### Languages\n\nThe text in the dataset is in English.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- text: a sentence extracted from corporate annual reports, sustainability reports and earning calls transcripts\n- label: the label (0 -> no environmental claim, 1 -> environmental claim)",
"### Data Splits\n\nThe dataset is split into:\n- train: 2,400\n- validation: 300\n- test: 300",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nOur dataset contains environmental claims by firms, often in the financial domain. We collect text from corporate annual reports, sustainability reports, and earning calls transcripts.\n\nFor more information regarding our sample selection, please refer to Appendix B of our paper, which is provided for citation.",
"#### Who are the source language producers?\n\nMainly large listed companies.",
"### Annotations",
"#### Annotation process\n\nFor more information on our annotation process and annotation guidelines, please refer to Appendix C of our paper, which is provided for citation.",
"#### Who are the annotators?\n\nThe authors and students at University of Zurich with majors in finance and sustainable finance.",
"### Personal and Sensitive Information\n\nSince our text sources contain public information, no personal and sensitive information should be included.",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators\n\n- Dominik Stammbach\n- Nicolas Webersinke\n- Julia Anna Bingler\n- Mathias Kraus\n- Markus Leippold",
"### Licensing Information\n\nThis dataset is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license (cc-by-nc-sa-4.0). To view a copy of this license, visit URL\n\nIf you are interested in commercial use of the dataset, please contact markus.leippold@URL.",
"### Contributions\n\nThanks to @webersni for adding this dataset."
] | [
91,
9,
30,
29,
33,
14,
6,
6,
44,
27,
5,
7,
4,
74,
16,
5,
36,
28,
25,
8,
7,
8,
7,
5,
31,
71,
17
] | [
"passage: TAGS\n#task_categories-text-classification #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-nc-sa-4.0 #arxiv-2209.00507 #region-us \n# Dataset Card for environmental_claims## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper: URL\n- Leaderboard:\n- Point of Contact: Dominik Stammbach### Dataset Summary\n\nWe introduce an expert-annotated dataset for detecting real-world environmental claims made by listed companies.### Supported Tasks and Leaderboards\n\nThe dataset supports a binary classification task of whether a given sentence is an environmental claim or not.### Languages\n\nThe text in the dataset is in English.## Dataset Structure### Data Instances### Data Fields\n\n- text: a sentence extracted from corporate annual reports, sustainability reports and earning calls transcripts\n- label: the label (0 -> no environmental claim, 1 -> environmental claim)### Data Splits\n\nThe dataset is split into:\n- train: 2,400\n- validation: 300\n- test: 300## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization\n\nOur dataset contains environmental claims by firms, often in the financial domain. We collect text from corporate annual reports, sustainability reports, and earning calls transcripts.\n\nFor more information regarding our sample selection, please refer to Appendix B of our paper, which is provided for citation.#### Who are the source language producers?\n\nMainly large listed companies.### Annotations#### Annotation process\n\nFor more information on our annotation process and annotation guidelines, please refer to Appendix C of our paper, which is provided for citation.#### Who are the annotators?\n\nThe authors and students at University of Zurich with majors in finance and sustainable finance.### Personal and Sensitive Information\n\nSince our text sources contain public information, no personal and sensitive information should be included.## Considerations for Using the Data### Social Impact of Dataset"
] |
4bce21b1f9211f24ff5ec321db8ea10894e3f425 |
# Dataset Card for "cardiffnlp/tweet_topic_multi"
## Dataset Description
- **Paper:** [https://arxiv.org/abs/2209.09824](https://arxiv.org/abs/2209.09824)
- **Dataset:** Tweet Topic Dataset
- **Domain:** Twitter
- **Number of Classes:** 19
### Dataset Summary
This is the official repository of TweetTopic (["Twitter Topic Classification", COLING main conference 2022](https://arxiv.org/abs/2209.09824)), a topic classification dataset on Twitter with 19 labels.
Each instance of TweetTopic comes with a timestamp that ranges from September 2019 to August 2021.
See [cardiffnlp/tweet_topic_single](https://huggingface.co/datasets/cardiffnlp/tweet_topic_single) for the single-label version of TweetTopic.
The tweet collection used in TweetTopic is the same as that used in [TweetNER7](https://huggingface.co/datasets/tner/tweetner7).
The dataset is integrated in [TweetNLP](https://tweetnlp.org/) too.
### Preprocessing
We pre-process tweets before the annotation to normalize some artifacts, converting URLs into a special token `{{URL}}` and non-verified usernames into `{{USERNAME}}`.
For verified usernames, we replace the display name (or account name) with the symbols `{@}`.
For example, a tweet
```
Get the all-analog Classic Vinyl Edition
of "Takin' Off" Album from @herbiehancock
via @bluenoterecords link below:
http://bluenote.lnk.to/AlbumOfTheWeek
```
is transformed into the following text.
```
Get the all-analog Classic Vinyl Edition
of "Takin' Off" Album from {@herbiehancock@}
via {@bluenoterecords@} link below: {{URL}}
```
A simple function to format a tweet follows below.
```python
import re
from urlextract import URLExtract
extractor = URLExtract()
def format_tweet(tweet):
# mask web urls
urls = extractor.find_urls(tweet)
for url in urls:
tweet = tweet.replace(url, "{{URL}}")
# format twitter account
tweet = re.sub(r"\b(\s*)(@[\S]+)\b", r'\1{\2@}', tweet)
return tweet
target = """Get the all-analog Classic Vinyl Edition of "Takin' Off" Album from @herbiehancock via @bluenoterecords link below: http://bluenote.lnk.to/AlbumOfTheWeek"""
target_format = format_tweet(target)
print(target_format)
# Output: 'Get the all-analog Classic Vinyl Edition of "Takin\' Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below: {{URL}}'
```
### Data Splits
| split | number of texts | description |
|:------------------------|-----:|------:|
| test_2020 | 573 | test dataset from September 2019 to August 2020 |
| test_2021 | 1679 | test dataset from September 2020 to August 2021 |
| train_2020 | 4585 | training dataset from September 2019 to August 2020 |
| train_2021 | 1505 | training dataset from September 2020 to August 2021 |
| train_all | 6090 | combined training dataset of `train_2020` and `train_2021` |
| validation_2020 | 573 | validation dataset from September 2019 to August 2020 |
| validation_2021 | 188 | validation dataset from September 2020 to August 2021 |
| train_random | 4564 | randomly sampled training dataset with the same size as `train_2020` from `train_all` |
| validation_random | 573 | randomly sampled training dataset with the same size as `validation_2020` from `validation_all` |
| test_coling2022_random | 5536 | random split used in the COLING 2022 paper |
| train_coling2022_random | 5731 | random split used in the COLING 2022 paper |
| test_coling2022 | 5536 | temporal split used in the COLING 2022 paper |
| train_coling2022 | 5731 | temporal split used in the COLING 2022 paper |
For the temporal-shift setting, the model should be trained on `train_2020` with `validation_2020` and evaluated on `test_2021`.
In general, the model would be trained on `train_all`, the most representative training set, with `validation_2021` and evaluated on `test_2021`.
**IMPORTANT NOTE:** To get a result that is comparable with the results of the COLING 2022 Tweet Topic paper, please use `train_coling2022` and `test_coling2022` for the temporal-shift setting, and `train_coling2022_random` and `test_coling2022_random` for the random split (the coling2022 splits do not have a validation set).
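
For illustration, a minimal sketch of selecting these splits with the `datasets` library (assuming it is installed) looks as follows:

```python
from datasets import load_dataset

# Recommended setting: train on the combined training set,
# validate on the recent period, and evaluate on test_2021.
train = load_dataset("cardiffnlp/tweet_topic_multi", split="train_all")
valid = load_dataset("cardiffnlp/tweet_topic_multi", split="validation_2021")
test = load_dataset("cardiffnlp/tweet_topic_multi", split="test_2021")

# Temporal-shift setting: train on 2020 data only, still evaluate on test_2021.
train_2020 = load_dataset("cardiffnlp/tweet_topic_multi", split="train_2020")
```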
### Models
| model | training data | F1 | F1 (macro) | Accuracy |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------|---------:|-------------:|-----------:|
| [cardiffnlp/roberta-large-tweet-topic-multi-all](https://huggingface.co/cardiffnlp/roberta-large-tweet-topic-multi-all) | all (2020 + 2021) | 0.763104 | 0.620257 | 0.536629 |
| [cardiffnlp/roberta-base-tweet-topic-multi-all](https://huggingface.co/cardiffnlp/roberta-base-tweet-topic-multi-all) | all (2020 + 2021) | 0.751814 | 0.600782 | 0.531864 |
| [cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-multi-all](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-multi-all) | all (2020 + 2021) | 0.762513 | 0.603533 | 0.547945 |
| [cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-multi-all](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-multi-all) | all (2020 + 2021) | 0.759917 | 0.59901 | 0.536033 |
| [cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-multi-all](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-multi-all) | all (2020 + 2021) | 0.764767 | 0.618702 | 0.548541 |
| [cardiffnlp/roberta-large-tweet-topic-multi-2020](https://huggingface.co/cardiffnlp/roberta-large-tweet-topic-multi-2020) | 2020 only | 0.732366 | 0.579456 | 0.493746 |
| [cardiffnlp/roberta-base-tweet-topic-multi-2020](https://huggingface.co/cardiffnlp/roberta-base-tweet-topic-multi-2020) | 2020 only | 0.725229 | 0.561261 | 0.499107 |
| [cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-multi-2020](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-multi-2020) | 2020 only | 0.73671 | 0.565624 | 0.513401 |
| [cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-multi-2020](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-multi-2020) | 2020 only | 0.729446 | 0.534799 | 0.50268 |
| [cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-multi-2020](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-multi-2020) | 2020 only | 0.731106 | 0.532141 | 0.509827 |
Model fine-tuning script can be found [here](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi/blob/main/lm_finetuning.py).
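
A minimal inference sketch with one of the checkpoints above, assuming the `transformers` library is installed; `top_k=None` is assumed to return a score for every one of the 19 labels:

```python
from transformers import pipeline

# One of the fine-tuned checkpoints from the table above.
classifier = pipeline(
    "text-classification",
    model="cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-multi-all",
    top_k=None,  # assumption: returns scores for all labels, not only the top one
)

# Tweets should be normalized with the `format_tweet` helper shown earlier.
print(classifier("The latest The Movie theater Daily! {{URL}}"))
```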
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```python
{
"date": "2021-03-07",
"text": "The latest The Movie theater Daily! {{URL}} Thanks to {{USERNAME}} {{USERNAME}} {{USERNAME}} #lunchtimeread #amc1000",
"id": "1368464923370676231",
"label": [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
"label_name": ["film_tv_&_video"]
}
```
### Labels
| <span style="font-weight:normal">0: arts_&_culture</span> | <span style="font-weight:normal">5: fashion_&_style</span> | <span style="font-weight:normal">10: learning_&_educational</span> | <span style="font-weight:normal">15: science_&_technology</span> |
|-----------------------------|---------------------|----------------------------|--------------------------|
| 1: business_&_entrepreneurs | 6: film_tv_&_video | 11: music | 16: sports |
| 2: celebrity_&_pop_culture | 7: fitness_&_health | 12: news_&_social_concern | 17: travel_&_adventure |
| 3: diaries_&_daily_life | 8: food_&_dining | 13: other_hobbies | 18: youth_&_student_life |
| 4: family | 9: gaming | 14: relationships | |
Annotation instructions can be found [here](https://docs.google.com/document/d/1IaIXZYof3iCLLxyBdu_koNmjy--zqsuOmxQ2vOxYd_g/edit?usp=sharing).
The label2id dictionary can be found [here](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi/blob/main/dataset/label.multi.json).
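
As an illustrative sketch, a multi-hot `label` vector can be decoded into `label_name` values with a plain dictionary whose order follows the label table above:

```python
# Label order follows the table above (0: arts_&_culture ... 18: youth_&_student_life).
id2label = {
    0: "arts_&_culture", 1: "business_&_entrepreneurs", 2: "celebrity_&_pop_culture",
    3: "diaries_&_daily_life", 4: "family", 5: "fashion_&_style", 6: "film_tv_&_video",
    7: "fitness_&_health", 8: "food_&_dining", 9: "gaming", 10: "learning_&_educational",
    11: "music", 12: "news_&_social_concern", 13: "other_hobbies", 14: "relationships",
    15: "science_&_technology", 16: "sports", 17: "travel_&_adventure", 18: "youth_&_student_life",
}

def decode_labels(multi_hot):
    """Return the active label names for a multi-hot vector like the `label` field above."""
    return [id2label[i] for i, flag in enumerate(multi_hot) if flag]

example = [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
print(decode_labels(example))  # ['film_tv_&_video']
```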
### Citation Information
```
@inproceedings{dimosthenis-etal-2022-twitter,
title = "{T}witter {T}opic {C}lassification",
author = "Antypas, Dimosthenis and
Ushio, Asahi and
Camacho-Collados, Jose and
Neves, Leonardo and
Silva, Vitor and
Barbieri, Francesco",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics"
}
``` | cardiffnlp/tweet_topic_multi | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"multilinguality:monolingual",
"size_categories:1k<10K",
"language:en",
"license:other",
"arxiv:2209.09824",
"region:us"
] | 2022-09-01T13:30:46+00:00 | {"language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1k<10K"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "pretty_name": "TweetTopicSingle"} | 2024-01-17T14:54:48+00:00 | [
"2209.09824"
] | [
"en"
] | TAGS
#task_categories-text-classification #task_ids-sentiment-classification #multilinguality-monolingual #size_categories-1k<10K #language-English #license-other #arxiv-2209.09824 #region-us
| Dataset Card for "cardiffnlp/tweet\_topic\_multi"
=================================================
Dataset Description
-------------------
* Paper: URL
* Dataset: Tweet Topic Dataset
* Domain: Twitter
* Number of Classes: 19
### Dataset Summary
This is the official repository of TweetTopic ("Twitter Topic Classification", COLING main conference 2022), a topic classification dataset on Twitter with 19 labels.
Each instance of TweetTopic comes with a timestamp that ranges from September 2019 to August 2021.
See cardiffnlp/tweet\_topic\_single for the single-label version of TweetTopic.
The tweet collection used in TweetTopic is the same as that used in TweetNER7.
The dataset is integrated in TweetNLP too.
### Preprocessing
We pre-process tweets before the annotation to normalize some artifacts, converting URLs into a special token '{{URL}}' and non-verified usernames into '{{USERNAME}}'.
For verified usernames, we replace the display name (or account name) with the symbols '{@}'.
For example, a tweet
is transformed into the following text.
A simple function to format a tweet follows below.
### Data Splits
For the temporal-shift setting, the model should be trained on 'train\_2020' with 'validation\_2020' and evaluated on 'test\_2021'.
In general, the model would be trained on 'train\_all', the most representative training set, with 'validation\_2021' and evaluated on 'test\_2021'.
IMPORTANT NOTE: To get a result that is comparable with the results of the COLING 2022 Tweet Topic paper, please use 'train\_coling2022' and 'test\_coling2022' for the temporal-shift setting, and 'train\_coling2022\_random' and 'test\_coling2022\_random' for the random split (the coling2022 splits do not have a validation set).
### Models
Model fine-tuning script can be found here.
Dataset Structure
-----------------
### Data Instances
An example of 'train' looks as follows.
### Labels
Annotation instructions can be found here.
The label2id dictionary can be found here.
| [
"### Dataset Summary\n\n\nThis is the official repository of TweetTopic (\"Twitter Topic Classification\n, COLING main conference 2022\"), a topic classification dataset on Twitter with 19 labels.\nEach instance of TweetTopic comes with a timestamp which distributes from September 2019 to August 2021.\nSee cardiffnlp/tweet\\_topic\\_single for single label version of TweetTopic.\nThe tweet collection used in TweetTopic is same as what used in TweetNER7.\nThe dataset is integrated in TweetNLP too.",
"### Preprocessing\n\n\nWe pre-process tweets before the annotation to normalize some artifacts, converting URLs into a special token '{{URL}}' and non-verified usernames into '{{USERNAME}}'.\nFor verified usernames, we replace its display name (or account name) with symbols '{@}'.\nFor example, a tweet\n\n\nis transformed into the following text.\n\n\nA simple function to format tweet follows below.",
"### Data Splits\n\n\n\nFor the temporal-shift setting, model should be trained on 'train\\_2020' with 'validation\\_2020' and evaluate on 'test\\_2021'.\nIn general, model would be trained on 'train\\_all', the most representative training set with 'validation\\_2021' and evaluate on 'test\\_2021'.\n\n\nIMPORTANT NOTE: To get a result that is comparable with the results of the COLING 2022 Tweet Topic paper, please use 'train\\_coling2022' and 'test\\_coling2022' for temporal-shift, and 'train\\_coling2022\\_random' and 'test\\_coling2022\\_random' fir random split (the coling2022 split does not have validation set).",
"### Models\n\n\n\nModel fine-tuning script can be found here.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of 'train' looks as follows.",
"### Labels\n\n\n\nAnnotation instructions can be found here.\n\n\nThe label2id dictionary can be found here."
] | [
"TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #multilinguality-monolingual #size_categories-1k<10K #language-English #license-other #arxiv-2209.09824 #region-us \n",
"### Dataset Summary\n\n\nThis is the official repository of TweetTopic (\"Twitter Topic Classification\n, COLING main conference 2022\"), a topic classification dataset on Twitter with 19 labels.\nEach instance of TweetTopic comes with a timestamp which distributes from September 2019 to August 2021.\nSee cardiffnlp/tweet\\_topic\\_single for single label version of TweetTopic.\nThe tweet collection used in TweetTopic is same as what used in TweetNER7.\nThe dataset is integrated in TweetNLP too.",
"### Preprocessing\n\n\nWe pre-process tweets before the annotation to normalize some artifacts, converting URLs into a special token '{{URL}}' and non-verified usernames into '{{USERNAME}}'.\nFor verified usernames, we replace its display name (or account name) with symbols '{@}'.\nFor example, a tweet\n\n\nis transformed into the following text.\n\n\nA simple function to format tweet follows below.",
"### Data Splits\n\n\n\nFor the temporal-shift setting, model should be trained on 'train\\_2020' with 'validation\\_2020' and evaluate on 'test\\_2021'.\nIn general, model would be trained on 'train\\_all', the most representative training set with 'validation\\_2021' and evaluate on 'test\\_2021'.\n\n\nIMPORTANT NOTE: To get a result that is comparable with the results of the COLING 2022 Tweet Topic paper, please use 'train\\_coling2022' and 'test\\_coling2022' for temporal-shift, and 'train\\_coling2022\\_random' and 'test\\_coling2022\\_random' fir random split (the coling2022 split does not have validation set).",
"### Models\n\n\n\nModel fine-tuning script can be found here.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of 'train' looks as follows.",
"### Labels\n\n\n\nAnnotation instructions can be found here.\n\n\nThe label2id dictionary can be found here."
] | [
64,
118,
103,
181,
22,
18,
23
] | [
"passage: TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #multilinguality-monolingual #size_categories-1k<10K #language-English #license-other #arxiv-2209.09824 #region-us \n### Dataset Summary\n\n\nThis is the official repository of TweetTopic (\"Twitter Topic Classification\n, COLING main conference 2022\"), a topic classification dataset on Twitter with 19 labels.\nEach instance of TweetTopic comes with a timestamp which distributes from September 2019 to August 2021.\nSee cardiffnlp/tweet\\_topic\\_single for single label version of TweetTopic.\nThe tweet collection used in TweetTopic is same as what used in TweetNER7.\nThe dataset is integrated in TweetNLP too.### Preprocessing\n\n\nWe pre-process tweets before the annotation to normalize some artifacts, converting URLs into a special token '{{URL}}' and non-verified usernames into '{{USERNAME}}'.\nFor verified usernames, we replace its display name (or account name) with symbols '{@}'.\nFor example, a tweet\n\n\nis transformed into the following text.\n\n\nA simple function to format tweet follows below.### Data Splits\n\n\n\nFor the temporal-shift setting, model should be trained on 'train\\_2020' with 'validation\\_2020' and evaluate on 'test\\_2021'.\nIn general, model would be trained on 'train\\_all', the most representative training set with 'validation\\_2021' and evaluate on 'test\\_2021'.\n\n\nIMPORTANT NOTE: To get a result that is comparable with the results of the COLING 2022 Tweet Topic paper, please use 'train\\_coling2022' and 'test\\_coling2022' for temporal-shift, and 'train\\_coling2022\\_random' and 'test\\_coling2022\\_random' fir random split (the coling2022 split does not have validation set).### Models\n\n\n\nModel fine-tuning script can be found here.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nAn example of 'train' looks as follows."
] |
2e11493c1b92c66b3d718b39d13d21c0bcbab1ba | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: bhadresh-savani/roberta-base-emotion
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@gmoney](https://huggingface.co/gmoney) for evaluating this model. | autoevaluate/autoeval-staging-eval-emotion-default-139135-14996090 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-01T14:39:13+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "bhadresh-savani/roberta-base-emotion", "metrics": ["roc_auc", "mae"], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}} | 2022-09-01T14:39:48+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Multi-class Text Classification
* Model: bhadresh-savani/roberta-base-emotion
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @gmoney for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: bhadresh-savani/roberta-base-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @gmoney for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: bhadresh-savani/roberta-base-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @gmoney for evaluating this model."
] | [
13,
91,
16
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: bhadresh-savani/roberta-base-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @gmoney for evaluating this model."
] |
aff1661b05d3101c728c5383a9c84111d2e1349f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Multi-class Text Classification
* Model: ericntay/bert-finetuned-emotion
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@gmoney](https://huggingface.co/gmoney) for evaluating this model. | autoevaluate/autoeval-staging-eval-emotion-default-139135-14996091 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-01T14:39:20+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["emotion"], "eval_info": {"task": "multi_class_classification", "model": "ericntay/bert-finetuned-emotion", "metrics": ["roc_auc", "mae"], "dataset_name": "emotion", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "label"}}} | 2022-09-01T14:39:53+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Multi-class Text Classification
* Model: ericntay/bert-finetuned-emotion
* Dataset: emotion
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @gmoney for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: ericntay/bert-finetuned-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @gmoney for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: ericntay/bert-finetuned-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @gmoney for evaluating this model."
] | [
13,
90,
16
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Multi-class Text Classification\n* Model: ericntay/bert-finetuned-emotion\n* Dataset: emotion\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @gmoney for evaluating this model."
] |
1e636ac88c46ec15dafa23d63d5d28ce8f03df9a |
Read this [BLOG](https://neuralmagic.com/blog/classifying-finance-tweets-in-real-time-with-sparse-transformers/) to see how I fine-tuned a sparse transformer on this dataset.
### Dataset Description
The Twitter Financial News dataset is an English-language dataset containing an annotated corpus of finance-related tweets. It is used to classify finance-related tweets by their sentiment.
The dataset holds 11,932 documents annotated with 3 labels:
```python
sentiments = {
"LABEL_0": "Bearish",
"LABEL_1": "Bullish",
"LABEL_2": "Neutral"
}
```
The data was collected using the Twitter API. The current dataset supports the multi-class classification task.
### Task: Sentiment Analysis
# Data Splits
There are 2 splits: train and validation. Below are the statistics:
| Dataset Split | Number of Instances in Split |
| ------------- | ------------------------------------------- |
| Train | 9,938 |
| Validation | 2,486 |
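
A minimal loading sketch, assuming the `datasets` library is installed, that the columns are named `text` and `label`, and that the integer label ids follow the `LABEL_0`/`LABEL_1`/`LABEL_2` order above:

```python
from datasets import load_dataset

# Assumption: integer class ids follow the LABEL_0/1/2 mapping given above.
id2sentiment = {0: "Bearish", 1: "Bullish", 2: "Neutral"}

dataset = load_dataset("zeroshot/twitter-financial-news-sentiment")
example = dataset["train"][0]
print(example["text"], "->", id2sentiment[example["label"]])
```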
# Licensing Information
The Twitter Financial Dataset (sentiment) version 1.0.0 is released under the MIT License. | zeroshot/twitter-financial-news-sentiment | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:en",
"license:mit",
"twitter",
"finance",
"markets",
"stocks",
"wallstreet",
"quant",
"hedgefunds",
"region:us"
] | 2022-09-01T20:21:56+00:00 | {"annotations_creators": ["other"], "language_creators": ["other"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "twitter financial news", "tags": ["twitter", "finance", "markets", "stocks", "wallstreet", "quant", "hedgefunds", "markets"]} | 2022-12-12T14:32:59+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-other #language_creators-other #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-mit #twitter #finance #markets #stocks #wallstreet #quant #hedgefunds #region-us
| Read this BLOG to see how I fine-tuned a sparse transformer on this dataset.
### Dataset Description
The Twitter Financial News dataset is an English-language dataset containing an annotated corpus of finance-related tweets. It is used to classify finance-related tweets by their sentiment.
The dataset holds 11,932 documents annotated with 3 labels:
The data was collected using the Twitter API. The current dataset supports the multi-class classification task.
### Task: Sentiment Analysis
Data Splits
===========
There are 2 splits: train and validation. Below are the statistics:
Licensing Information
=====================
The Twitter Financial Dataset (sentiment) version 1.0.0 is released under the MIT License.
| [
"### Dataset Description\n\n\nThe Twitter Financial News dataset is an English-language dataset containing an annotated corpus of finance-related tweets. This dataset is used to classify finance-related tweets for their sentiment.\n\n\n1. The dataset holds 11,932 documents annotated with 3 labels:\n\n\nThe data was collected using the Twitter API. The current dataset supports the multi-class classification task.",
"### Task: Sentiment Analysis\n\n\nData Splits\n===========\n\n\nThere are 2 splits: train and validation. Below are the statistics:\n\n\n\nLicensing Information\n=====================\n\n\nThe Twitter Financial Dataset (sentiment) version 1.0.0 is released under the MIT License."
] | [
"TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-other #language_creators-other #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-mit #twitter #finance #markets #stocks #wallstreet #quant #hedgefunds #region-us \n",
"### Dataset Description\n\n\nThe Twitter Financial News dataset is an English-language dataset containing an annotated corpus of finance-related tweets. This dataset is used to classify finance-related tweets for their sentiment.\n\n\n1. The dataset holds 11,932 documents annotated with 3 labels:\n\n\nThe data was collected using the Twitter API. The current dataset supports the multi-class classification task.",
"### Task: Sentiment Analysis\n\n\nData Splits\n===========\n\n\nThere are 2 splits: train and validation. Below are the statistics:\n\n\n\nLicensing Information\n=====================\n\n\nThe Twitter Financial Dataset (sentiment) version 1.0.0 is released under the MIT License."
] | [
106,
92,
61
] | [
"passage: TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-other #language_creators-other #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-English #license-mit #twitter #finance #markets #stocks #wallstreet #quant #hedgefunds #region-us \n### Dataset Description\n\n\nThe Twitter Financial News dataset is an English-language dataset containing an annotated corpus of finance-related tweets. This dataset is used to classify finance-related tweets for their sentiment.\n\n\n1. The dataset holds 11,932 documents annotated with 3 labels:\n\n\nThe data was collected using the Twitter API. The current dataset supports the multi-class classification task.### Task: Sentiment Analysis\n\n\nData Splits\n===========\n\n\nThere are 2 splits: train and validation. Below are the statistics:\n\n\n\nLicensing Information\n=====================\n\n\nThe Twitter Financial Dataset (sentiment) version 1.0.0 is released under the MIT License."
] |
87b7a0d1c402dbb481db649569c556d9aa27ac05 |
# Dataset Card for "cardiffnlp/tweet_topic_single"
## Dataset Description
- **Paper:** [https://arxiv.org/abs/2209.09824](https://arxiv.org/abs/2209.09824)
- **Dataset:** Tweet Topic Dataset
- **Domain:** Twitter
- **Number of Classes:** 6
### Dataset Summary
This is the official repository of TweetTopic (["Twitter Topic Classification", COLING main conference 2022](https://arxiv.org/abs/2209.09824)), a topic classification dataset on Twitter with 6 labels.
Each instance of TweetTopic comes with a timestamp that ranges from September 2019 to August 2021.
See [cardiffnlp/tweet_topic_multi](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi) for the multi-label version of TweetTopic.
The tweet collection used in TweetTopic is the same as that used in [TweetNER7](https://huggingface.co/datasets/tner/tweetner7).
The dataset is integrated in [TweetNLP](https://tweetnlp.org/) too.
### Preprocessing
We pre-process tweets before the annotation to normalize some artifacts, converting URLs into a special token `{{URL}}` and non-verified usernames into `{{USERNAME}}`.
For verified usernames, we replace the display name (or account name) with the symbols `{@}`.
For example, a tweet
```
Get the all-analog Classic Vinyl Edition
of "Takin' Off" Album from @herbiehancock
via @bluenoterecords link below:
http://bluenote.lnk.to/AlbumOfTheWeek
```
is transformed into the following text.
```
Get the all-analog Classic Vinyl Edition
of "Takin' Off" Album from {@herbiehancock@}
via {@bluenoterecords@} link below: {{URL}}
```
A simple function to format a tweet follows below.
```python
import re
from urlextract import URLExtract
extractor = URLExtract()
def format_tweet(tweet):
# mask web urls
urls = extractor.find_urls(tweet)
for url in urls:
tweet = tweet.replace(url, "{{URL}}")
# format twitter account
tweet = re.sub(r"\b(\s*)(@[\S]+)\b", r'\1{\2@}', tweet)
return tweet
target = """Get the all-analog Classic Vinyl Edition of "Takin' Off" Album from @herbiehancock via @bluenoterecords link below: http://bluenote.lnk.to/AlbumOfTheWeek"""
target_format = format_tweet(target)
print(target_format)
# Output: 'Get the all-analog Classic Vinyl Edition of "Takin\' Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below: {{URL}}'
```
### Data Splits
| split | number of texts | description |
|:------------------------|-----:|------:|
| test_2020 | 376 | test dataset from September 2019 to August 2020 |
| test_2021 | 1693 | test dataset from September 2020 to August 2021 |
| train_2020 | 2858 | training dataset from September 2019 to August 2020 |
| train_2021 | 1516 | training dataset from September 2020 to August 2021 |
| train_all | 4374 | combined training dataset of `train_2020` and `train_2021` |
| validation_2020 | 352 | validation dataset from September 2019 to August 2020 |
| validation_2021 | 189 | validation dataset from September 2020 to August 2021 |
| train_random | 2830 | randomly sampled training dataset with the same size as `train_2020` from `train_all` |
| validation_random | 354 | randomly sampled training dataset with the same size as `validation_2020` from `validation_all` |
| test_coling2022_random | 3399 | random split used in the COLING 2022 paper |
| train_coling2022_random | 3598 | random split used in the COLING 2022 paper |
| test_coling2022 | 3399 | temporal split used in the COLING 2022 paper |
| train_coling2022 | 3598 | temporal split used in the COLING 2022 paper |
For the temporal-shift setting, the model should be trained on `train_2020` with `validation_2020` and evaluated on `test_2021`.
In general, the model would be trained on `train_all`, the most representative training set, with `validation_2021` and evaluated on `test_2021`.
**IMPORTANT NOTE:** To get a result that is comparable with the results of the COLING 2022 Tweet Topic paper, please use `train_coling2022` and `test_coling2022` for the temporal-shift setting, and `train_coling2022_random` and `test_coling2022_random` for the random split (the coling2022 splits do not have a validation set).
### Models
| model | training data | F1 | F1 (macro) | Accuracy |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------|---------:|-------------:|-----------:|
| [cardiffnlp/roberta-large-tweet-topic-single-all](https://huggingface.co/cardiffnlp/roberta-large-tweet-topic-single-all) | all (2020 + 2021) | 0.896043 | 0.800061 | 0.896043 |
| [cardiffnlp/roberta-base-tweet-topic-single-all](https://huggingface.co/cardiffnlp/roberta-base-tweet-topic-single-all) | all (2020 + 2021) | 0.887773 | 0.79793 | 0.887773 |
| [cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-single-all](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-single-all) | all (2020 + 2021) | 0.892499 | 0.774494 | 0.892499 |
| [cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-single-all](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-single-all) | all (2020 + 2021) | 0.890136 | 0.776025 | 0.890136 |
| [cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-single-all](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-single-all) | all (2020 + 2021) | 0.894861 | 0.800952 | 0.894861 |
| [cardiffnlp/roberta-large-tweet-topic-single-2020](https://huggingface.co/cardiffnlp/roberta-large-tweet-topic-single-2020) | 2020 only | 0.878913 | 0.70565 | 0.878913 |
| [cardiffnlp/roberta-base-tweet-topic-single-2020](https://huggingface.co/cardiffnlp/roberta-base-tweet-topic-single-2020) | 2020 only | 0.868281 | 0.729667 | 0.868281 |
| [cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-single-2020](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m-tweet-topic-single-2020) | 2020 only | 0.882457 | 0.740187 | 0.882457 |
| [cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-single-2020](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2020-tweet-topic-single-2020) | 2020 only | 0.87596 | 0.746275 | 0.87596 |
| [cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-single-2020](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-single-2020) | 2020 only | 0.877732 | 0.746119 | 0.877732 |
Model fine-tuning script can be found [here](https://huggingface.co/datasets/cardiffnlp/tweet_topic_single/blob/main/lm_finetuning.py).
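
A minimal inference sketch with one of the checkpoints above, assuming the `transformers` library is installed (the checkpoint is downloaded on first use):

```python
from transformers import pipeline

# One of the fine-tuned checkpoints from the table above.
classifier = pipeline(
    "text-classification",
    model="cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-single-all",
)

# Tweets should be normalized with the `format_tweet` helper shown earlier.
print(classifier("Game day for {{USERNAME}} U18s against {{USERNAME}} U18s."))
```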
## Dataset Structure
### Data Instances
An example of `train` looks as follows.
```python
{
"text": "Game day for {{USERNAME}} U18\u2019s against {{USERNAME}} U18\u2019s. Even though it\u2019s a \u2018home\u2019 game for the people that have settled in Mid Wales it\u2019s still a 4 hour round trip for us up to Colwyn Bay. Still enjoy it though!",
"date": "2019-09-08",
"label": 4,
"id": "1170606779568463874",
"label_name": "sports_&_gaming"
}
```
### Label ID
The label2id dictionary can be found [here](https://huggingface.co/datasets/tner/tweet_topic_single/raw/main/dataset/label.single.json).
```python
{
"arts_&_culture": 0,
"business_&_entrepreneurs": 1,
"pop_culture": 2,
"daily_life": 3,
"sports_&_gaming": 4,
"science_&_technology": 5
}
```
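
For convenience, the inverse mapping can be derived directly from this dictionary; a trivial sketch:

```python
label2id = {
    "arts_&_culture": 0,
    "business_&_entrepreneurs": 1,
    "pop_culture": 2,
    "daily_life": 3,
    "sports_&_gaming": 4,
    "science_&_technology": 5,
}

# Invert the mapping to decode integer labels back to names.
id2label = {v: k for k, v in label2id.items()}
print(id2label[4])  # 'sports_&_gaming', matching the example instance above
```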
### Citation Information
```
@inproceedings{dimosthenis-etal-2022-twitter,
title = "{T}witter {T}opic {C}lassification",
author = "Antypas, Dimosthenis and
Ushio, Asahi and
Camacho-Collados, Jose and
Neves, Leonardo and
Silva, Vitor and
Barbieri, Francesco",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics"
}
``` | cardiffnlp/tweet_topic_single | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"multilinguality:monolingual",
"size_categories:1k<10K",
"language:en",
"license:other",
"arxiv:2209.09824",
"region:us"
] | 2022-09-01T23:20:17+00:00 | {"language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "size_categories": ["1k<10K"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "pretty_name": "TweetTopicSingle"} | 2022-11-27T11:25:34+00:00 | [
"2209.09824"
] | [
"en"
] | TAGS
#task_categories-text-classification #task_ids-sentiment-classification #multilinguality-monolingual #size_categories-1k<10K #language-English #license-other #arxiv-2209.09824 #region-us
| Dataset Card for "cardiffnlp/tweet\_topic\_single"
==================================================
Dataset Description
-------------------
* Paper: URL
* Dataset: Tweet Topic Dataset
* Domain: Twitter
* Number of Classes: 6
### Dataset Summary
This is the official repository of TweetTopic ("Twitter Topic Classification", COLING main conference 2022), a topic classification dataset on Twitter with 6 labels.
Each instance of TweetTopic comes with a timestamp that ranges from September 2019 to August 2021.
See cardiffnlp/tweet\_topic\_multi for the multi-label version of TweetTopic.
The tweet collection used in TweetTopic is the same as that used in TweetNER7.
The dataset is integrated in TweetNLP too.
### Preprocessing
We pre-process tweets before the annotation to normalize some artifacts, converting URLs into a special token '{{URL}}' and non-verified usernames into '{{USERNAME}}'.
For verified usernames, we replace the display name (or account name) with the symbols '{@}'.
For example, a tweet
is transformed into the following text.
A simple function to format a tweet follows below.
### Data Splits
For the temporal-shift setting, the model should be trained on 'train\_2020' with 'validation\_2020' and evaluated on 'test\_2021'.
In general, the model would be trained on 'train\_all', the most representative training set, with 'validation\_2021' and evaluated on 'test\_2021'.
IMPORTANT NOTE: To get a result that is comparable with the results of the COLING 2022 Tweet Topic paper, please use 'train\_coling2022' and 'test\_coling2022' for the temporal-shift setting, and 'train\_coling2022\_random' and 'test\_coling2022\_random' for the random split (the coling2022 splits do not have a validation set).
### Models
Model fine-tuning script can be found here.
Dataset Structure
-----------------
### Data Instances
An example of 'train' looks as follows.
### Label ID
The label2id dictionary can be found at here.
| [
"### Dataset Summary\n\n\nThis is the official repository of TweetTopic (\"Twitter Topic Classification\n, COLING main conference 2022\"), a topic classification dataset on Twitter with 6 labels.\nEach instance of TweetTopic comes with a timestamp which distributes from September 2019 to August 2021.\nSee cardiffnlp/tweet\\_topic\\_multi for multi label version of TweetTopic.\nThe tweet collection used in TweetTopic is same as what used in TweetNER7.\nThe dataset is integrated in TweetNLP too.",
"### Preprocessing\n\n\nWe pre-process tweets before the annotation to normalize some artifacts, converting URLs into a special token '{{URL}}' and non-verified usernames into '{{USERNAME}}'.\nFor verified usernames, we replace its display name (or account name) with symbols '{@}'.\nFor example, a tweet\n\n\nis transformed into the following text.\n\n\nA simple function to format tweet follows below.",
"### Data Splits\n\n\n\nFor the temporal-shift setting, model should be trained on 'train\\_2020' with 'validation\\_2020' and evaluate on 'test\\_2021'.\nIn general, model would be trained on 'train\\_all', the most representative training set with 'validation\\_2021' and evaluate on 'test\\_2021'.\n\n\nIMPORTANT NOTE: To get a result that is comparable with the results of the COLING 2022 Tweet Topic paper, please use 'train\\_coling2022' and 'test\\_coling2022' for temporal-shift, and 'train\\_coling2022\\_random' and 'test\\_coling2022\\_random' fir random split (the coling2022 split does not have validation set).",
"### Models\n\n\n\nModel fine-tuning script can be found here.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of 'train' looks as follows.",
"### Label ID\n\n\nThe label2id dictionary can be found at here."
] | [
"TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #multilinguality-monolingual #size_categories-1k<10K #language-English #license-other #arxiv-2209.09824 #region-us \n",
"### Dataset Summary\n\n\nThis is the official repository of TweetTopic (\"Twitter Topic Classification\n, COLING main conference 2022\"), a topic classification dataset on Twitter with 6 labels.\nEach instance of TweetTopic comes with a timestamp which distributes from September 2019 to August 2021.\nSee cardiffnlp/tweet\\_topic\\_multi for multi label version of TweetTopic.\nThe tweet collection used in TweetTopic is same as what used in TweetNER7.\nThe dataset is integrated in TweetNLP too.",
"### Preprocessing\n\n\nWe pre-process tweets before the annotation to normalize some artifacts, converting URLs into a special token '{{URL}}' and non-verified usernames into '{{USERNAME}}'.\nFor verified usernames, we replace its display name (or account name) with symbols '{@}'.\nFor example, a tweet\n\n\nis transformed into the following text.\n\n\nA simple function to format tweet follows below.",
"### Data Splits\n\n\n\nFor the temporal-shift setting, model should be trained on 'train\\_2020' with 'validation\\_2020' and evaluate on 'test\\_2021'.\nIn general, model would be trained on 'train\\_all', the most representative training set with 'validation\\_2021' and evaluate on 'test\\_2021'.\n\n\nIMPORTANT NOTE: To get a result that is comparable with the results of the COLING 2022 Tweet Topic paper, please use 'train\\_coling2022' and 'test\\_coling2022' for temporal-shift, and 'train\\_coling2022\\_random' and 'test\\_coling2022\\_random' fir random split (the coling2022 split does not have validation set).",
"### Models\n\n\n\nModel fine-tuning script can be found here.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nAn example of 'train' looks as follows.",
"### Label ID\n\n\nThe label2id dictionary can be found at here."
] | [
64,
117,
103,
181,
22,
18,
17
] | [
"passage: TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #multilinguality-monolingual #size_categories-1k<10K #language-English #license-other #arxiv-2209.09824 #region-us \n### Dataset Summary\n\n\nThis is the official repository of TweetTopic (\"Twitter Topic Classification\n, COLING main conference 2022\"), a topic classification dataset on Twitter with 6 labels.\nEach instance of TweetTopic comes with a timestamp which distributes from September 2019 to August 2021.\nSee cardiffnlp/tweet\\_topic\\_multi for multi label version of TweetTopic.\nThe tweet collection used in TweetTopic is same as what used in TweetNER7.\nThe dataset is integrated in TweetNLP too.### Preprocessing\n\n\nWe pre-process tweets before the annotation to normalize some artifacts, converting URLs into a special token '{{URL}}' and non-verified usernames into '{{USERNAME}}'.\nFor verified usernames, we replace its display name (or account name) with symbols '{@}'.\nFor example, a tweet\n\n\nis transformed into the following text.\n\n\nA simple function to format tweet follows below.### Data Splits\n\n\n\nFor the temporal-shift setting, model should be trained on 'train\\_2020' with 'validation\\_2020' and evaluate on 'test\\_2021'.\nIn general, model would be trained on 'train\\_all', the most representative training set with 'validation\\_2021' and evaluate on 'test\\_2021'.\n\n\nIMPORTANT NOTE: To get a result that is comparable with the results of the COLING 2022 Tweet Topic paper, please use 'train\\_coling2022' and 'test\\_coling2022' for temporal-shift, and 'train\\_coling2022\\_random' and 'test\\_coling2022\\_random' fir random split (the coling2022 split does not have validation set).### Models\n\n\n\nModel fine-tuning script can be found here.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nAn example of 'train' looks as follows."
] |
8b820b74765bc3a114dd3d1cbb344ed857bef73b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: sshleifer/distilbart-xsum-9-6
* Dataset: xsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
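The evaluated setup can also be reproduced locally; a minimal sketch (assuming the `transformers` and `datasets` libraries are installed):
```python
from datasets import load_dataset
from transformers import pipeline

# Load the evaluated model and summarize one test document from xsum
summarizer = pipeline("summarization", model="sshleifer/distilbart-xsum-9-6")
sample = load_dataset("xsum", split="test")[0]
print(summarizer(sample["document"], truncation=True)[0]["summary_text"])
```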
## Contributions
Thanks to [@Rohil](https://huggingface.co/Rohil) for evaluating this model. | autoevaluate/autoeval-staging-eval-xsum-default-21f5cd-15036097 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-02T08:24:44+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "sshleifer/distilbart-xsum-9-6", "metrics": ["accuracy"], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}} | 2022-09-02T08:46:38+00:00 | [] | [] |
7779c1f5ce465390fae18cef176c52cd371e8618 | # Unnormalized AMI
```python
from datasets import load_dataset

# "ihm" refers to the individual headset microphone recordings of the AMI corpus
ami = load_dataset("speech-seq2seq/ami", "ihm")
```
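To sanity-check the loaded data, inspect a single example first; a minimal sketch (assuming a `train` split exists; no column names are assumed since they are not documented here):
```python
sample = ami["train"][0]
print(sample.keys())  # list the available columns before accessing any of them
```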
## TODO(PVP) - explain exactly what normalization was accepted and what wasn't | speech-seq2seq/ami | [
"region:us"
] | 2022-09-02T09:47:53+00:00 | {} | 2022-09-06T22:03:11+00:00 | [] | [] |
ccc566cd8230464f03b0d045958aef0d4b98398d |
# Dataset Card for AIDS
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://wiki.nci.nih.gov/display/NCIDTPdata/AIDS+Antiviral+Screen+Data)**
- **Paper:** (see citation)
- **Leaderboard:** [Papers with code leaderboard](https://paperswithcode.com/sota/graph-classification-on-aids)
### Dataset Summary
The `AIDS` dataset contains chemical compounds checked for evidence of anti-HIV activity.
### Supported Tasks and Leaderboards
`AIDS` should be used for molecular classification, a binary classification task. The score used is accuracy, computed with cross-validation.
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
import torch
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/AIDS")
# For the train set (replace by valid or test as needed)
dataset_pg_list = [
    Data(x=torch.tensor(g["node_feat"]), edge_index=torch.tensor(g["edge_index"]),
         edge_attr=torch.tensor(g["edge_attr"]), y=torch.tensor(g["y"]))
    for g in dataset_hf["train"]
]
loader_pg = DataLoader(dataset_pg_list, batch_size=32)  # batch size is arbitrary
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| scale | medium |
| #graphs | 1999 |
| average #nodes | 15.5875 |
| average #edges | 32.39 |
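These statistics can be re-derived from the loaded data; a minimal sketch (assuming the `train` split used in the snippet above):
```python
from datasets import load_dataset

ds = load_dataset("graphs-datasets/AIDS")["train"]
num_nodes = [g["num_nodes"] for g in ds]
num_edges = [len(g["edge_index"][0]) for g in ds]  # edge_index is 2 x #edges
print(sum(num_nodes) / len(num_nodes), sum(num_edges) / len(num_edges))
```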
### Data Fields
Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): nodes
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features
- `y` (list: 1 x #labels): the label(s) to predict (here a single binary label, 0 or 1)
- `num_nodes` (int): number of nodes of the graph
### Data Splits
This data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset.
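Since no canonical split is provided, cross-validation folds have to be created by the user; a minimal sketch with scikit-learn (fold count and seed are arbitrary choices):
```python
from sklearn.model_selection import KFold

# dataset_pg_list is the list of PyG Data objects built in the snippet above
kfold = KFold(n_splits=10, shuffle=True, random_state=0)
for train_idx, test_idx in kfold.split(dataset_pg_list):
    train_graphs = [dataset_pg_list[i] for i in train_idx]
    test_graphs = [dataset_pg_list[i] for i in test_idx]
    # ... train and evaluate a classifier on this fold ...
```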
## Additional Information
### Licensing Information
The dataset has been released under an unknown license.
### Citation Information
```
@inproceedings{Morris+2020,
title={TUDataset: A collection of benchmark datasets for learning with graphs},
author={Christopher Morris and Nils M. Kriege and Franka Bause and Kristian Kersting and Petra Mutzel and Marion Neumann},
booktitle={ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020)},
archivePrefix={arXiv},
eprint={2007.08663},
url={www.graphlearning.io},
year={2020}
}
```
```
@InProceedings{10.1007/978-3-540-89689-0_33,
author="Riesen, Kaspar
and Bunke, Horst",
editor="da Vitoria Lobo, Niels
and Kasparis, Takis
and Roli, Fabio
and Kwok, James T.
and Georgiopoulos, Michael
and Anagnostopoulos, Georgios C.
and Loog, Marco",
title="IAM Graph Database Repository for Graph Based Pattern Recognition and Machine Learning",
booktitle="Structural, Syntactic, and Statistical Pattern Recognition",
year="2008",
publisher="Springer Berlin Heidelberg",
address="Berlin, Heidelberg",
pages="287--297",
abstract="In recent years the use of graph based representation has gained popularity in pattern recognition and machine learning. As a matter of fact, object representation by means of graphs has a number of advantages over feature vectors. Therefore, various algorithms for graph based machine learning have been proposed in the literature. However, in contrast with the emerging interest in graph based representation, a lack of standardized graph data sets for benchmarking can be observed. Common practice is that researchers use their own data sets, and this behavior cumbers the objective evaluation of the proposed methods. In order to make the different approaches in graph based machine learning better comparable, the present paper aims at introducing a repository of graph data sets and corresponding benchmarks, covering a wide spectrum of different applications.",
isbn="978-3-540-89689-0"
}
``` | graphs-datasets/AIDS | [
"task_categories:graph-ml",
"arxiv:2007.08663",
"region:us"
] | 2022-09-02T09:51:25+00:00 | {"task_categories": ["graph-ml"], "licence": "unknown"} | 2023-02-07T16:38:52+00:00 | [
"2007.08663"
] | [] |
d1caecd9c7c2f81ee392349d0f0fdf5512dd1b26 |
# Dataset Card for alchemy
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](https://alchemy.tencent.com/)**
- **Paper:** (see citation)
- **Leaderboard:** [Leaderboard](https://alchemy.tencent.com/)
### Dataset Summary
The `alchemy` dataset lists 12 quantum mechanical properties of 130,000+ organic molecules comprising up to 12 heavy atoms (C, N, O, S, F and Cl), sampled from the GDBMedChem database.
### Supported Tasks and Leaderboards
`alchemy` should be used for organic quantum molecular property prediction, a regression task on 12 properties. The score used is the mean absolute error (MAE).
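Since each graph carries 12 regression targets, the MAE is typically computed per property and then averaged; a minimal sketch (`preds` and `targets` are hypothetical tensors of shape #graphs x 12):
```python
import torch

preds = torch.randn(8, 12)    # hypothetical model predictions
targets = torch.randn(8, 12)  # hypothetical ground-truth properties
mae_per_property = (preds - targets).abs().mean(dim=0)  # one MAE per property
mae_overall = mae_per_property.mean()
print(mae_per_property, mae_overall)
```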
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
import torch
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/alchemy")
# For the train set (replace by valid or test as needed)
dataset_pg_list = [
    Data(x=torch.tensor(g["node_feat"]), edge_index=torch.tensor(g["edge_index"]),
         edge_attr=torch.tensor(g["edge_attr"]), y=torch.tensor(g["y"]))
    for g in dataset_hf["train"]
]
loader_pg = DataLoader(dataset_pg_list, batch_size=32)  # batch size is arbitrary
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| scale | big |
| #graphs | 202578 |
| average #nodes | 10.101387606810183 |
| average #edges | 20.877326870011206 |
### Data Fields
Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): nodes
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features
- `y` (list: 1 x #labels): the regression targets to predict (here the 12 quantum mechanical properties)
- `num_nodes` (int): number of nodes of the graph
### Data Splits
This data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset.
## Additional Information
### Licensing Information
The dataset has been released under the MIT license.
### Citation Information
```
@inproceedings{Morris+2020,
title={TUDataset: A collection of benchmark datasets for learning with graphs},
author={Christopher Morris and Nils M. Kriege and Franka Bause and Kristian Kersting and Petra Mutzel and Marion Neumann},
booktitle={ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020)},
archivePrefix={arXiv},
eprint={2007.08663},
url={www.graphlearning.io},
year={2020}
}
```
```
@article{DBLP:journals/corr/abs-1906-09427,
author = {Guangyong Chen and
Pengfei Chen and
Chang{-}Yu Hsieh and
Chee{-}Kong Lee and
Benben Liao and
Renjie Liao and
Weiwen Liu and
Jiezhong Qiu and
Qiming Sun and
Jie Tang and
Richard S. Zemel and
Shengyu Zhang},
title = {Alchemy: {A} Quantum Chemistry Dataset for Benchmarking {AI} Models},
journal = {CoRR},
volume = {abs/1906.09427},
year = {2019},
url = {http://arxiv.org/abs/1906.09427},
eprinttype = {arXiv},
eprint = {1906.09427},
timestamp = {Mon, 11 Nov 2019 12:55:11 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1906-09427.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | graphs-datasets/alchemy | [
"task_categories:graph-ml",
"arxiv:2007.08663",
"arxiv:1906.09427",
"region:us"
] | 2022-09-02T10:08:39+00:00 | {"task_categories": ["graph-ml"], "licence": "mit"} | 2023-02-07T16:38:45+00:00 | [
"2007.08663",
"1906.09427"
] | [] |
4e808c91a6645b849e607e953196ea97f08d111e |
# Dataset Card for aspirin
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](http://www.sgdml.org/#datasets)**
- **Paper:** (see citation)
### Dataset Summary
The `aspirin` dataset is a molecular dynamics (MD) dataset. The total energy and force labels were computed using the PBE+vdW-TS electronic structure method. All geometries are in Angstrom; energies and forces are given in kcal/mol and kcal/mol/A, respectively.
### Supported Tasks and Leaderboards
`aspirin` should be used for organic molecular property prediction, a regression task on 1 property. The score used is the mean absolute error (in meV) for energy prediction.
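Note that the labels are stored in kcal/mol while the benchmark metric is reported in meV; a minimal conversion sketch (using 1 kcal/mol ≈ 43.364 meV):
```python
KCAL_PER_MOL_TO_MEV = 43.364  # 1 kcal/mol ≈ 0.043364 eV

def mae_in_mev(mae_kcal_per_mol: float) -> float:
    """Convert an energy MAE from kcal/mol to meV."""
    return mae_kcal_per_mol * KCAL_PER_MOL_TO_MEV

print(mae_in_mev(0.1))  # 0.1 kcal/mol ≈ 4.34 meV
```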
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
import torch
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/MD17-aspirin")
# For the full set
dataset_pg_list = [
    Data(x=torch.tensor(g["node_feat"]), edge_index=torch.tensor(g["edge_index"]),
         edge_attr=torch.tensor(g["edge_attr"]), y=torch.tensor(g["y"]))
    for g in dataset_hf["full"]
]
loader_pg = DataLoader(dataset_pg_list, batch_size=32)  # batch size is arbitrary
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| scale | big |
| #graphs | 111762 |
| average #nodes | 21.0 |
| average #edges | 303.0447106824262 |
### Data Fields
Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): nodes
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features
- `y` (list: #labels): the target value(s) to predict (here the total energy)
- `num_nodes` (int): number of nodes of the graph
### Data Splits
This data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset.
## Additional Information
### Licensing Information
The dataset has been released under an unknown license.
### Citation Information
```
@inproceedings{Morris+2020,
title={TUDataset: A collection of benchmark datasets for learning with graphs},
author={Christopher Morris and Nils M. Kriege and Franka Bause and Kristian Kersting and Petra Mutzel and Marion Neumann},
booktitle={ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020)},
archivePrefix={arXiv},
eprint={2007.08663},
url={www.graphlearning.io},
year={2020}
}
```
```
@article{Chmiela_2017,
doi = {10.1126/sciadv.1603015},
url = {https://doi.org/10.1126%2Fsciadv.1603015},
year = 2017,
month = {may},
publisher = {American Association for the Advancement of Science ({AAAS})},
volume = {3},
number = {5},
author = {Stefan Chmiela and Alexandre Tkatchenko and Huziel E. Sauceda and Igor Poltavsky and Kristof T. Schütt and Klaus-Robert Müller},
title = {Machine learning of accurate energy-conserving molecular force fields},
journal = {Science Advances}
}
``` | graphs-datasets/MD17-aspirin | [
"task_categories:graph-ml",
"arxiv:2007.08663",
"region:us"
] | 2022-09-02T10:24:39+00:00 | {"task_categories": ["graph-ml"], "licence": "unknown"} | 2023-02-07T16:38:29+00:00 | [
"2007.08663"
] | [] |
2c4c5d74bb0492becb3a3aa6a7f4f0a5493c1220 |
# Dataset Card for benzene
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](http://www.sgdml.org/#datasets)**
- **Paper:** (see citation)
### Dataset Summary
The `benzene` dataset is a molecular dynamics (MD) dataset. The total energy and force labels were computed using the PBE+vdW-TS electronic structure method. All geometries are in Angstrom; energies and forces are given in kcal/mol and kcal/mol/A, respectively.
### Supported Tasks and Leaderboards
`benzene` should be used for organic molecular property prediction, a regression task on 1 property. The score used is the mean absolute error (in meV) for energy prediction.
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
import torch
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/MD17-benzene")
# For the train set (replace by valid or test as needed)
dataset_pg_list = [
    Data(x=torch.tensor(g["node_feat"]), edge_index=torch.tensor(g["edge_index"]),
         edge_attr=torch.tensor(g["edge_attr"]), y=torch.tensor(g["y"]))
    for g in dataset_hf["train"]
]
loader_pg = DataLoader(dataset_pg_list, batch_size=32)  # batch size is arbitrary
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| scale | big |
| #graphs | 527983 |
| average #nodes | 12.0 |
| average #edges | 129.8848866632322 |
### Data Fields
Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): nodes
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features
- `y` (list: #labels): the target value(s) to predict (here the total energy)
- `num_nodes` (int): number of nodes of the graph
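To see these fields concretely, one can print the dimensions of a single record; a minimal sketch (assuming the `train` split used in the snippet above):
```python
from datasets import load_dataset

graph = load_dataset("graphs-datasets/MD17-benzene")["train"][0]
print(len(graph["node_feat"]), graph["num_nodes"])  # one feature row per node
print(len(graph["edge_index"][0]))                  # number of edges
print(graph["y"])                                   # energy label(s)
```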
### Data Splits
This data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset.
## Additional Information
### Licensing Information
The dataset has been released under an unknown license.
### Citation Information
```
@inproceedings{Morris+2020,
title={TUDataset: A collection of benchmark datasets for learning with graphs},
author={Christopher Morris and Nils M. Kriege and Franka Bause and Kristian Kersting and Petra Mutzel and Marion Neumann},
booktitle={ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020)},
archivePrefix={arXiv},
eprint={2007.08663},
url={www.graphlearning.io},
year={2020}
}
```
```
@article{Chmiela_2017,
doi = {10.1126/sciadv.1603015},
url = {https://doi.org/10.1126%2Fsciadv.1603015},
year = 2017,
month = {may},
publisher = {American Association for the Advancement of Science ({AAAS})},
volume = {3},
number = {5},
author = {Stefan Chmiela and Alexandre Tkatchenko and Huziel E. Sauceda and Igor Poltavsky and Kristof T. Schütt and Klaus-Robert Müller},
title = {Machine learning of accurate energy-conserving molecular force fields},
journal = {Science Advances}
}
``` | graphs-datasets/MD17-benzene | [
"task_categories:graph-ml",
"arxiv:2007.08663",
"region:us"
] | 2022-09-02T10:28:47+00:00 | {"task_categories": ["graph-ml"], "licence": "unknown"} | 2023-02-07T16:38:21+00:00 | [
"2007.08663"
] | [] |
9435372f87fea2f32c41e31237400884a38c7830 |
# Dataset Card for ethanol
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](http://www.sgdml.org/#datasets)**
- **Paper:** (see citation)
### Dataset Summary
The `ethanol` dataset is a molecular dynamics (MD) dataset. The total energy and force labels were computed using the PBE+vdW-TS electronic structure method. All geometries are in Angstrom; energies and forces are given in kcal/mol and kcal/mol/A, respectively.
### Supported Tasks and Leaderboards
`ethanol` should be used for organic molecular property prediction, a regression task on 1 property. The score used is the mean absolute error (in meV) for energy prediction.
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
import torch
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/MD17-ethanol")
# For the train set (replace by valid or test as needed)
dataset_pg_list = [
    Data(x=torch.tensor(g["node_feat"]), edge_index=torch.tensor(g["edge_index"]),
         edge_attr=torch.tensor(g["edge_attr"]), y=torch.tensor(g["y"]))
    for g in dataset_hf["train"]
]
loader_pg = DataLoader(dataset_pg_list, batch_size=32)  # batch size is arbitrary
```
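Iterating over the resulting loader yields mini-batches in which the individual molecules are collated into one large disconnected graph; a minimal sketch using the objects built above:
```python
# PyG collates the graphs of a mini-batch into a single Batch object
for batch in loader_pg:
    print(batch.num_graphs, batch.x.shape, batch.edge_index.shape)
    break
```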
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| scale | big |
| #graphs | 455092 |
| average #nodes | 9.0 |
| average #edges | 72.0 |
### Data Fields
Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): nodes
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features
- `y` (list: #labels): the target value(s) to predict (here the total energy)
- `num_nodes` (int): number of nodes of the graph
### Data Splits
This data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset.
## Additional Information
### Licensing Information
The dataset has been released under an unknown license.
### Citation Information
Please cite both papers when using these datasets in publications.
```
@inproceedings{Morris+2020,
title={TUDataset: A collection of benchmark datasets for learning with graphs},
author={Christopher Morris and Nils M. Kriege and Franka Bause and Kristian Kersting and Petra Mutzel and Marion Neumann},
booktitle={ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020)},
archivePrefix={arXiv},
eprint={2007.08663},
url={www.graphlearning.io},
year={2020}
}
```
```
@article{Chmiela_2017,
doi = {10.1126/sciadv.1603015},
url = {https://doi.org/10.1126%2Fsciadv.1603015},
year = 2017,
month = {may},
publisher = {American Association for the Advancement of Science ({AAAS})},
volume = {3},
number = {5},
author = {Stefan Chmiela and Alexandre Tkatchenko and Huziel E. Sauceda and Igor Poltavsky and Kristof T. Schütt and Klaus-Robert Müller},
title = {Machine learning of accurate energy-conserving molecular force fields},
journal = {Science Advances}
}
``` | graphs-datasets/MD17-ethanol | [
"task_categories:graph-ml",
"arxiv:2007.08663",
"region:us"
] | 2022-09-02T10:35:08+00:00 | {"task_categories": ["graph-ml"], "licence": "unknown"} | 2023-02-07T16:35:52+00:00 | [
"2007.08663"
] | [] |
8825653ea5739fd0e81f07ac8b5e7eb943f3a2b2 |
# Dataset Card for malonaldehyde
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](http://www.sgdml.org/#datasets)**
- **Paper:** (see citation)
### Dataset Summary
The `malonaldehyde` dataset is a molecular dynamics (MD) dataset. The total energy and force labels for each dataset were computed using the PBE+vdW-TS electronic structure method. All geometries are in Angstrom; energies and forces are given in kcal/mol and kcal/mol/Å, respectively.
### Supported Tasks and Leaderboards
`malonaldehyde` should be used for organic molecular property prediction, a regression task on 1 property. The score used is the mean absolute error (in meV) for energy prediction.
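A minimal sketch of this metric, assuming the MAE is taken over predicted total energies and converted from the dataset's kcal/mol units (1 kcal/mol ≈ 43.364 meV):

```python
import numpy as np

KCAL_PER_MOL_IN_MEV = 43.364  # 1 kcal/mol ≈ 43.364 meV

def energy_mae_mev(y_true_kcal_mol, y_pred_kcal_mol):
    """Mean absolute error on energies, reported in meV."""
    err = np.abs(np.asarray(y_true_kcal_mol) - np.asarray(y_pred_kcal_mol))
    return float(err.mean()) * KCAL_PER_MOL_IN_MEV
```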
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

# Load the graphs from the Hugging Face Hub
dataset_hf = load_dataset("graphs-datasets/MD17-malonaldehyde")
# For the train set (replace by valid or test as needed); each row is a
# dict of graph fields, which is unpacked into a PyG Data object
dataset_pg_list = [Data(**graph) for graph in dataset_hf["train"]]
dataset_pg = DataLoader(dataset_pg_list)
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| scale | big |
| #graphs | 893237 |
| average #nodes | 9.0 |
| average #edges | 71.99990148202383 |
### Data Fields
Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): nodes
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features
- `y` (list: #labels): contains the label(s) to predict
- `num_nodes` (int): number of nodes of the graph
### Data Splits
This data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset.
## Additional Information
### Licensing Information
The dataset has been released under an unknown license.
### Citation Information
```
@inproceedings{Morris+2020,
title={TUDataset: A collection of benchmark datasets for learning with graphs},
author={Christopher Morris and Nils M. Kriege and Franka Bause and Kristian Kersting and Petra Mutzel and Marion Neumann},
booktitle={ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020)},
archivePrefix={arXiv},
eprint={2007.08663},
url={www.graphlearning.io},
year={2020}
}
```
```
@article{Chmiela_2017,
doi = {10.1126/sciadv.1603015},
url = {https://doi.org/10.1126%2Fsciadv.1603015},
year = 2017,
month = {may},
publisher = {American Association for the Advancement of Science ({AAAS})},
volume = {3},
number = {5},
author = {Stefan Chmiela and Alexandre Tkatchenko and Huziel E. Sauceda and Igor Poltavsky and Kristof T. Schütt and Klaus-Robert Müller},
title = {Machine learning of accurate energy-conserving molecular force fields},
journal = {Science Advances}
}
``` | graphs-datasets/MD17-malonaldehyde | [
"task_categories:graph-ml",
"arxiv:2007.08663",
"region:us"
] | 2022-09-02T10:39:54+00:00 | {"task_categories": ["graph-ml"], "licence": "unknown"} | 2023-02-07T16:37:48+00:00 | [
"2007.08663"
] | []
bd27d0058bea2ad52470d9072a3b5da6b97c1ac3 |
# Dataset Card for VaccinChatNL
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
<!-- - [Curation Rationale](#curation-rationale) -->
<!-- - [Source Data](#source-data) -->
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
<!-- - [Social Impact of Dataset](#social-impact-of-dataset) -->
- [Discussion of Biases](#discussion-of-biases)
<!-- - [Other Known Limitations](#other-known-limitations) -->
- [Additional Information](#additional-information)
<!-- - [Dataset Curators](#dataset-curators) -->
<!-- - [Licensing Information](#licensing-information) -->
- [Citation Information](#citation-information)
<!-- - [Contributions](#contributions) -->
## Dataset Description
<!-- - **Homepage:**
- **Repository:**
- **Paper:** [To be added]
- **Leaderboard:** -->
- **Point of Contact:** [Jeska Buhmann](mailto:[email protected])
### Dataset Summary
VaccinChatNL is a Flemish Dutch FAQ dataset on the topic of COVID-19 vaccinations in Flanders. It consists of 12,883 user questions divided over 181 answer labels, thus providing large groups of semantically equivalent paraphrases (a many-to-one mapping of user questions to answer labels). VaccinChatNL is the first Dutch many-to-one FAQ dataset of this size.
### Supported Tasks and Leaderboards
- 'text-classification': the dataset can be used to train a classification model for Dutch frequently asked questions on the topic of COVID-19 vaccination in Flanders.
### Languages
Dutch (Flemish): the BCP-47 code for Dutch as generally spoken in Flanders (Belgium) is nl-BE.
## Dataset Structure
### Data Instances
For each instance, there is a string for the user question and a string for the label of the annotated answer. See the [CLiPS / VaccinChatNL dataset viewer](https://huggingface.co/datasets/clips/VaccinChatNL/viewer/clips--VaccinChatNL/train).
```
{"sentence1": "Waar kan ik de bijsluiters van de vaccins vinden?", "label": "faq_ask_bijsluiter"}
```
### Data Fields
- `sentence1`: a string containing the user question
- `label`: a string containing the name of the intent (the answer class)
### Data Splits
The VaccinChatNL dataset has 3 splits: _train_, _valid_, and _test_. Below are the statistics for the dataset.
| Dataset Split | Number of Labeled User Questions in Split |
| ------------- | ------------------------------------------ |
| Train | 10,542 |
| Validation | 1,171 |
| Test | 1,170 |
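A minimal loading sketch, using the `clips/VaccinChatNL` repo id from the dataset viewer link above:

```python
from datasets import load_dataset

ds = load_dataset("clips/VaccinChatNL")
print({name: split.num_rows for name, split in ds.items()})  # split sizes
print(ds["train"][0])  # {'sentence1': '...', 'label': 'faq_ask_...'}
```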
## Dataset Creation
<!-- ### Curation Rationale
[More Information Needed] -->
<!-- ### Source Data
[Perhaps a link to vaccinchat.be and some of the website that were used for information] -->
<!-- #### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed] -->
### Annotations
#### Annotation process
Annotation was an iterative semi-automatic process. Starting from a very limited dataset with approximately 50 question-answer pairs (_sentence1-label_ pairs), a text classification model was trained and implemented in a publicly available chatbot. When the chatbot was used, the predicted labels for the new questions were checked and corrected if necessary. In addition, new answers were added to the dataset. After each round of corrections, the model was retrained on the updated dataset. This iterative approach led to the final dataset containing 12,883 user questions divided over 181 answer labels.
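As an illustration of the kind of classifier retrained at each round, here is a minimal sketch (a TF-IDF plus logistic-regression stand-in, not the authors' actual model):

```python
from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Train an illustrative intent classifier on the question/label pairs
train = load_dataset("clips/VaccinChatNL", split="train")
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(train["sentence1"], train["label"])
print(clf.predict(["Waar kan ik de bijsluiters van de vaccins vinden?"]))
```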
#### Who are the annotators?
The VaccinChatNL data were annotated by members and students of [CLiPS](https://www.uantwerpen.be/en/research-groups/clips/). All annotators have a background in Computational Linguistics.
### Personal and Sensitive Information
The data are anonymized in the sense that a user question can never be traced back to a specific individual.
## Considerations for Using the Data
<!-- ### Social Impact of Dataset
[More Information Needed] -->
### Discussion of Biases
This dataset contains real user questions, including a rather large portion (7%) of out-of-domain questions or remarks (_label: nlu_fallback_). This class of user questions consists of incomprehensible questions, but also jokes and insulting remarks.
<!-- ### Other Known Limitations
[Perhaps some information of % of exact overlap between train and test set] -->
## Additional Information
<!-- ### Dataset Curators
[More Information Needed] -->
<!-- ### Licensing Information
[More Information Needed] -->
### Citation Information
```
@inproceedings{buhmann-etal-2022-domain,
title = "Domain- and Task-Adaptation for {V}accin{C}hat{NL}, a {D}utch {COVID}-19 {FAQ} Answering Corpus and Classification Model",
author = "Buhmann, Jeska and De Bruyn, Maxime and Lotfi, Ehsan and Daelemans, Walter",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics",
url = "https://aclanthology.org/2022.coling-1.312",
pages = "3539--3549"
}
```
<!-- ### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. -->
| clips/VaccinChatNL | [
"task_categories:text-classification",
"task_ids:intent-classification",
"annotations_creators:expert-generated",
"language_creators:other",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:nl",
"license:cc-by-4.0",
"covid-19",
"FAQ",
"question-answer pairs",
"region:us"
] | 2022-09-02T10:52:00+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["other"], "language": ["nl"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["intent-classification"], "pretty_name": "VaccinChatNL", "tags": ["covid-19", "FAQ", "question-answer pairs"]} | 2023-03-21T15:22:36+00:00 | [] | [
"nl"
]
797ddc673b956eeaa235a6a372e2a29f105e20ba |
# Dataset Card for naphthalene
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](http://www.sgdml.org/#datasets)**
- **Paper:** (see citation)
### Dataset Summary
The `naphthalene` dataset is a molecular dynamics (MD) dataset. The total energy and force labels for each dataset were computed using the PBE+vdW-TS electronic structure method. All geometries are in Angstrom; energies and forces are given in kcal/mol and kcal/mol/Å, respectively.
### Supported Tasks and Leaderboards
`naphthalene` should be used for organic molecular property prediction, a regression task on 1 property. The score used is the mean absolute error (in meV) for energy prediction.
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

# Load the graphs from the Hugging Face Hub
dataset_hf = load_dataset("graphs-datasets/MD17-naphthalene")
# For the train set (replace by valid or test as needed); each row is a
# dict of graph fields, which is unpacked into a PyG Data object
dataset_pg_list = [Data(**graph) for graph in dataset_hf["train"]]
dataset_pg = DataLoader(dataset_pg_list)
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| scale | big |
| #graphs | 226255 |
| average #nodes | 18.0 |
| average #edges | 254.73246234354005 |
### Data Fields
Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): nodes
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features
- `y` (list: #labels): contains the label(s) to predict
- `num_nodes` (int): number of nodes of the graph
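A short sketch (assuming the schema above and the repo id from the loading snippet) of inspecting one graph's fields:

```python
from datasets import load_dataset

graphs = load_dataset("graphs-datasets/MD17-naphthalene", split="train")
graph = graphs[0]
print(len(graph["node_feat"]))      # number of nodes (18 for naphthalene)
print(len(graph["edge_index"][0]))  # number of edges
print(graph["y"])                   # energy label(s), in kcal/mol
```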
### Data Splits
This data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset.
## Additional Information
### Licensing Information
The dataset has been released under an unknown license.
### Citation Information
```
@inproceedings{Morris+2020,
title={TUDataset: A collection of benchmark datasets for learning with graphs},
author={Christopher Morris and Nils M. Kriege and Franka Bause and Kristian Kersting and Petra Mutzel and Marion Neumann},
booktitle={ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020)},
archivePrefix={arXiv},
eprint={2007.08663},
url={www.graphlearning.io},
year={2020}
}
```
```
@article{Chmiela_2017,
doi = {10.1126/sciadv.1603015},
url = {https://doi.org/10.1126%2Fsciadv.1603015},
year = 2017,
month = {may},
publisher = {American Association for the Advancement of Science ({AAAS})},
volume = {3},
number = {5},
author = {Stefan Chmiela and Alexandre Tkatchenko and Huziel E. Sauceda and Igor Poltavsky and Kristof T. Schütt and Klaus-Robert Müller},
title = {Machine learning of accurate energy-conserving molecular force fields},
journal = {Science Advances}
}
``` | graphs-datasets/MD17-naphthalene | [
"task_categories:graph-ml",
"arxiv:2007.08663",
"region:us"
] | 2022-09-02T10:54:00+00:00 | {"task_categories": ["graph-ml"], "licence": "unknown"} | 2023-02-07T16:38:13+00:00 | [
"2007.08663"
] | []
6311b15ea2069f1726abc865e486c3f7e7977f39 |
# Dataset Card for salicylic_acid
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](http://www.sgdml.org/#datasets)**
- **Paper:** (see citation)
### Dataset Summary
The `salicylic_acid` dataset is a molecular dynamics (MD) dataset. The total energy and force labels for each dataset were computed using the PBE+vdW-TS electronic structure method. All geometries are in Angstrom; energies and forces are given in kcal/mol and kcal/mol/Å, respectively.
### Supported Tasks and Leaderboards
`salicylic_acid` should be used for organic molecular property prediction, a regression task on 1 property. The score used is the mean absolute error (in meV) for energy prediction.
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

# Load the graphs from the Hugging Face Hub
dataset_hf = load_dataset("graphs-datasets/MD17-salicylic_acid")
# For the train set (replace by valid or test as needed); each row is a
# dict of graph fields, which is unpacked into a PyG Data object
dataset_pg_list = [Data(**graph) for graph in dataset_hf["train"]]
dataset_pg = DataLoader(dataset_pg_list)
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| scale | big |
| #graphs | 220231 |
| average #nodes | 16.0 |
| average #edges | 208.2681717461586 |
### Data Fields
Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): nodes
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features
- `y` (list: #labels): contains the label(s) to predict
- `num_nodes` (int): number of nodes of the graph
### Data Splits
This data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset.
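A minimal sketch of that cross-validation setup, reusing `dataset_pg_list` from the PyGeometric snippet above (scikit-learn's `KFold` is an illustrative choice, not mandated by the dataset):

```python
import numpy as np
from sklearn.model_selection import KFold

kf = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(kf.split(np.arange(len(dataset_pg_list)))):
    train_graphs = [dataset_pg_list[i] for i in train_idx]
    val_graphs = [dataset_pg_list[i] for i in val_idx]
    # train on train_graphs and evaluate on val_graphs for this fold
```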
## Additional Information
### Licensing Information
The dataset has been released under an unknown license.
### Citation Information
```
@inproceedings{Morris+2020,
title={TUDataset: A collection of benchmark datasets for learning with graphs},
author={Christopher Morris and Nils M. Kriege and Franka Bause and Kristian Kersting and Petra Mutzel and Marion Neumann},
booktitle={ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020)},
archivePrefix={arXiv},
eprint={2007.08663},
url={www.graphlearning.io},
year={2020}
}
```
```
@article{Chmiela_2017,
doi = {10.1126/sciadv.1603015},
url = {https://doi.org/10.1126%2Fsciadv.1603015},
year = 2017,
month = {may},
publisher = {American Association for the Advancement of Science ({AAAS})},
volume = {3},
number = {5},
author = {Stefan Chmiela and Alexandre Tkatchenko and Huziel E. Sauceda and Igor Poltavsky and Kristof T. Schütt and Klaus-Robert Müller},
title = {Machine learning of accurate energy-conserving molecular force fields},
journal = {Science Advances}
}
``` | graphs-datasets/MD17-salicylic_acid | [
"task_categories:graph-ml",
"arxiv:2007.08663",
"region:us"
] | 2022-09-02T11:07:48+00:00 | {"task_categories": ["graph-ml"], "licence": "unknown"} | 2023-02-07T16:37:57+00:00 | [
"2007.08663"
] | []
02aabb462c01b362f4deee43ff294cf171bb7daf |
# Dataset Card for toluene
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](http://www.sgdml.org/#datasets)**
- **Paper:** (see citation)
### Dataset Summary
The `toluene` dataset is a molecular dynamics (MD) dataset. The total energy and force labels for each dataset were computed using the PBE+vdW-TS electronic structure method. All geometries are in Angstrom; energies and forces are given in kcal/mol and kcal/mol/Å, respectively.
### Supported Tasks and Leaderboards
`toluene` should be used for organic molecular property prediction, a regression task on 1 property. The score used is the mean absolute error (in meV) for energy prediction.
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

# Load the graphs from the Hugging Face Hub
dataset_hf = load_dataset("graphs-datasets/MD17-toluene")
# For the train set (replace by valid or test as needed); each row is a
# dict of graph fields, which is unpacked into a PyG Data object
dataset_pg_list = [Data(**graph) for graph in dataset_hf["train"]]
dataset_pg = DataLoader(dataset_pg_list)
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| scale | big |
| #graphs | 342790 |
| average #nodes | 15.0 |
| average #edges | 192.30698588936116 |
### Data Fields
Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): nodes
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features
- `y` (list: #labels): contains the label(s) to predict
- `num_nodes` (int): number of nodes of the graph
### Data Splits
This data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset.
## Additional Information
### Licensing Information
The dataset has been released under an unknown license.
### Citation Information
```
@inproceedings{Morris+2020,
title={TUDataset: A collection of benchmark datasets for learning with graphs},
author={Christopher Morris and Nils M. Kriege and Franka Bause and Kristian Kersting and Petra Mutzel and Marion Neumann},
booktitle={ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020)},
archivePrefix={arXiv},
eprint={2007.08663},
url={www.graphlearning.io},
year={2020}
}
```
```
@article{Chmiela_2017,
doi = {10.1126/sciadv.1603015},
url = {https://doi.org/10.1126%2Fsciadv.1603015},
year = 2017,
month = {may},
publisher = {American Association for the Advancement of Science ({AAAS})},
volume = {3},
number = {5},
author = {Stefan Chmiela and Alexandre Tkatchenko and Huziel E. Sauceda and Igor Poltavsky and Kristof T. Schütt and Klaus-Robert Müller},
title = {Machine learning of accurate energy-conserving molecular force fields},
journal = {Science Advances}
}
``` | graphs-datasets/MD17-toluene | [
"task_categories:graph-ml",
"arxiv:2007.08663",
"region:us"
] | 2022-09-02T11:12:43+00:00 | {"task_categories": ["graph-ml"], "licence": "unknown"} | 2023-02-07T16:38:05+00:00 | [
"2007.08663"
] | []
d4da9a780efd59e60bc2887bb69e2953cfb9b4db |
# Dataset Card for uracil
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Dataset Structure](#dataset-structure)
- [Data Properties](#data-properties)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **[Homepage](http://www.sgdml.org/#datasets)**
- **Paper:** (see citation)
### Dataset Summary
The `uracil` dataset is a molecular dynamics (MD) dataset. The total energy and force labels for each dataset were computed using the PBE+vdW-TS electronic structure method. All geometries are in Angstrom; energies and forces are given in kcal/mol and kcal/mol/A, respectively.
### Supported Tasks and Leaderboards
`uracil` should be used for organic molecular property prediction, a regression task on 1 property. The score used is the mean absolute error (in meV) for energy prediction.
## External Use
### PyGeometric
To load in PyGeometric, do the following:
```python
from datasets import load_dataset
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader

dataset_hf = load_dataset("graphs-datasets/MD17-uracil")

# Unpack each graph dict into a PyG Data object; this dataset ships a single
# "train" split (see Data Splits below). Convert the list fields to tensors
# as needed before training.
dataset_pg_list = [Data(**graph) for graph in dataset_hf["train"]]
dataset_pg = DataLoader(dataset_pg_list)
```
## Dataset Structure
### Data Properties
| property | value |
|---|---|
| scale | big |
| #graphs | 133769 |
| average #nodes | 12.0 |
| average #edges | 128.88676085818943 |
### Data Fields
Each row of a given file is a graph, with:
- `node_feat` (list: #nodes x #node-features): nodes
- `edge_index` (list: 2 x #edges): pairs of nodes constituting edges
- `edge_attr` (list: #edges x #edge-features): for the aforementioned edges, contains their features
- `y` (list: #labels): contains the label value(s) to predict
- `num_nodes` (int): number of nodes of the graph
### Data Splits
This data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset.
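Since no official split is provided, a minimal cross-validation sketch is shown below; the 5-fold setup and fixed seed are illustrative choices, not prescribed by the dataset.

```python
import numpy as np
from datasets import load_dataset
from sklearn.model_selection import KFold

dataset = load_dataset("graphs-datasets/MD17-uracil", split="train")

# 5 folds and the seed are illustrative assumptions
kf = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(kf.split(np.arange(len(dataset)))):
    train_ds = dataset.select(train_idx)
    val_ds = dataset.select(val_idx)
    # ... train on train_ds, report energy MAE on val_ds ...
```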
## Additional Information
### Licensing Information
The dataset has been released under an unknown license.
### Citation Information
```
@inproceedings{Morris+2020,
title={TUDataset: A collection of benchmark datasets for learning with graphs},
author={Christopher Morris and Nils M. Kriege and Franka Bause and Kristian Kersting and Petra Mutzel and Marion Neumann},
booktitle={ICML 2020 Workshop on Graph Representation Learning and Beyond (GRL+ 2020)},
archivePrefix={arXiv},
eprint={2007.08663},
url={www.graphlearning.io},
year={2020}
}
```
```
@article{Chmiela_2017,
doi = {10.1126/sciadv.1603015},
url = {https://doi.org/10.1126%2Fsciadv.1603015},
year = 2017,
month = {may},
publisher = {American Association for the Advancement of Science ({AAAS})},
volume = {3},
number = {5},
author = {Stefan Chmiela and Alexandre Tkatchenko and Huziel E. Sauceda and Igor Poltavsky and Kristof T. Schütt and Klaus-Robert Müller},
title = {Machine learning of accurate energy-conserving molecular force fields},
journal = {Science Advances}
}
``` | graphs-datasets/MD17-uracil | [
"task_categories:graph-ml",
"arxiv:2007.08663",
"region:us"
] | 2022-09-02T11:14:39+00:00 | {"task_categories": ["graph-ml"], "licence": "unknown"} | 2023-02-07T16:37:39+00:00 | [
"2007.08663"
] | [] | TAGS
#task_categories-graph-ml #arxiv-2007.08663 #region-us
| Dataset Card for uracil
=======================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
* External Use
+ PyGeometric
* Dataset Structure
+ Data Properties
+ Data Fields
+ Data Splits
* Additional Information
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage
* Paper: (see citation)
### Dataset Summary
The 'uracil' dataset is a molecular dynamics (MD) dataset. The total energy and force labels for each dataset were computed using the PBE+vdW-TS electronic structure method. All geometries are in Angstrom; energies and forces are given in kcal/mol and kcal/mol/A, respectively.
### Supported Tasks and Leaderboards
'uracil' should be used for organic molecular property prediction, a regression task on 1 property. The score used is the mean absolute error (in meV) for energy prediction.
External Use
------------
### PyGeometric
To load in PyGeometric, do the following:
Dataset Structure
-----------------
### Data Properties
### Data Fields
Each row of a given file is a graph, with:
* 'node\_feat' (list: #nodes x #node-features): nodes
* 'edge\_index' (list: 2 x #edges): pairs of nodes constituting edges
* 'edge\_attr' (list: #edges x #edge-features): for the aforementioned edges, contains their features
* 'y' (list: #labels): contains the label value(s) to predict
* 'num\_nodes' (int): number of nodes of the graph
### Data Splits
This data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset.
Additional Information
----------------------
### Licensing Information
The dataset has been released under an unknown license.
| [
"### Dataset Summary\n\n\nThe 'uracil' dataset is a molecular dynamics (MD) dataset. The total energy and force labels for each dataset were computed using the PBE+vdW-TS electronic structure method. All geometries are in Angstrom, energies and forces are given in kcal/mol and kcal/mol/A respectively.",
"### Supported Tasks and Leaderboards\n\n\n'uracil' should be used for organic molecular property prediction, a regression task on 1 property. The score used is Mean absolute errors (in meV) for energy prediction.\n\n\nExternal Use\n------------",
"### PyGeometric\n\n\nTo load in PyGeometric, do the following:\n\n\nDataset Structure\n-----------------",
"### Data Properties",
"### Data Fields\n\n\nEach row of a given file is a graph, with:\n\n\n* 'node\\_feat' (list: #nodes x #node-features): nodes\n* 'edge\\_index' (list: 2 x #edges): pairs of nodes constituting edges\n* 'edge\\_attr' (list: #edges x #edge-features): for the aforementioned edges, contains their features\n* 'y' (list: #labels): contains the number of labels available to predict\n* 'num\\_nodes' (int): number of nodes of the graph",
"### Data Splits\n\n\nThis data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset.\n\n\nAdditional Information\n----------------------",
"### Licensing Information\n\n\nThe dataset has been released under license unknown."
] | [
"TAGS\n#task_categories-graph-ml #arxiv-2007.08663 #region-us \n",
"### Dataset Summary\n\n\nThe 'uracil' dataset is a molecular dynamics (MD) dataset. The total energy and force labels for each dataset were computed using the PBE+vdW-TS electronic structure method. All geometries are in Angstrom, energies and forces are given in kcal/mol and kcal/mol/A respectively.",
"### Supported Tasks and Leaderboards\n\n\n'uracil' should be used for organic molecular property prediction, a regression task on 1 property. The score used is Mean absolute errors (in meV) for energy prediction.\n\n\nExternal Use\n------------",
"### PyGeometric\n\n\nTo load in PyGeometric, do the following:\n\n\nDataset Structure\n-----------------",
"### Data Properties",
"### Data Fields\n\n\nEach row of a given file is a graph, with:\n\n\n* 'node\\_feat' (list: #nodes x #node-features): nodes\n* 'edge\\_index' (list: 2 x #edges): pairs of nodes constituting edges\n* 'edge\\_attr' (list: #edges x #edge-features): for the aforementioned edges, contains their features\n* 'y' (list: #labels): contains the number of labels available to predict\n* 'num\\_nodes' (int): number of nodes of the graph",
"### Data Splits\n\n\nThis data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset.\n\n\nAdditional Information\n----------------------",
"### Licensing Information\n\n\nThe dataset has been released under license unknown."
] | [
25,
79,
56,
25,
4,
146,
41,
18
] | [
"passage: TAGS\n#task_categories-graph-ml #arxiv-2007.08663 #region-us \n### Dataset Summary\n\n\nThe 'uracil' dataset is a molecular dynamics (MD) dataset. The total energy and force labels for each dataset were computed using the PBE+vdW-TS electronic structure method. All geometries are in Angstrom, energies and forces are given in kcal/mol and kcal/mol/A respectively.### Supported Tasks and Leaderboards\n\n\n'uracil' should be used for organic molecular property prediction, a regression task on 1 property. The score used is Mean absolute errors (in meV) for energy prediction.\n\n\nExternal Use\n------------### PyGeometric\n\n\nTo load in PyGeometric, do the following:\n\n\nDataset Structure\n-----------------### Data Properties### Data Fields\n\n\nEach row of a given file is a graph, with:\n\n\n* 'node\\_feat' (list: #nodes x #node-features): nodes\n* 'edge\\_index' (list: 2 x #edges): pairs of nodes constituting edges\n* 'edge\\_attr' (list: #edges x #edge-features): for the aforementioned edges, contains their features\n* 'y' (list: #labels): contains the number of labels available to predict\n* 'num\\_nodes' (int): number of nodes of the graph### Data Splits\n\n\nThis data is not split, and should be used with cross validation. It comes from the PyGeometric version of the dataset.\n\n\nAdditional Information\n----------------------### Licensing Information\n\n\nThe dataset has been released under license unknown."
] |
b838e714070f32045d057422f620a88bd9689c43 |
This repo contains converted ECMWF ERA5 reanalysis files for both hourly atmospheric and land variables from Jan 2014 to October 2022. The data has been converted from the downloaded NetCDF files into Zarr using Xarray. Each file is 1 day of reanalysis, and so has 24 timesteps at a 0.25 degree grid resolution. All variables in the reanalysis are included here. | openclimatefix/era5-reanalysis | [
"license:mit",
"region:us"
] | 2022-09-02T11:37:58+00:00 | {"license": "mit"} | 2022-12-01T15:18:54+00:00 | [] | [] | TAGS
#license-mit #region-us
|
This repo contains converted ECMWF ERA5 reanalysis files for both hourly atmospheric and land variables from Jan 2014 to October 2022. The data has been converted from the downloaded NetCDF files into Zarr using Xarray. Each file is 1 day of reanalysis, and so has 24 timesteps at a 0.25 degree grid resolution. All variables in the reanalysis are included here. | [] | [
"TAGS\n#license-mit #region-us \n"
] | [
11
] | [
"passage: TAGS\n#license-mit #region-us \n"
] |
029592ccdb7eae9bd59cb40f0c0b2c665148b2b2 |
# transformers metrics
This dataset contains metrics about the huggingface/transformers package.
Number of repositories in the dataset: 27067
Number of packages in the dataset: 823
## Package dependents
This contains the data available in the [used-by](https://github.com/huggingface/transformers/network/dependents)
tab on GitHub.
### Package & Repository star count
This section shows the package and repository star count, individually.
Package | Repository
:-------------------------:|:-------------------------:
 | 
There are 65 packages that have more than 1000 stars.
There are 140 repositories that have more than 1000 stars.
The top 10 in each category are the following:
*Package*
[hankcs/HanLP](https://github.com/hankcs/HanLP): 26958
[fastai/fastai](https://github.com/fastai/fastai): 22774
[slundberg/shap](https://github.com/slundberg/shap): 17482
[fastai/fastbook](https://github.com/fastai/fastbook): 16052
[jina-ai/jina](https://github.com/jina-ai/jina): 16052
[huggingface/datasets](https://github.com/huggingface/datasets): 14101
[microsoft/recommenders](https://github.com/microsoft/recommenders): 14017
[borisdayma/dalle-mini](https://github.com/borisdayma/dalle-mini): 12872
[flairNLP/flair](https://github.com/flairNLP/flair): 12033
[allenai/allennlp](https://github.com/allenai/allennlp): 11198
*Repository*
[huggingface/transformers](https://github.com/huggingface/transformers): 70487
[hankcs/HanLP](https://github.com/hankcs/HanLP): 26959
[ageron/handson-ml2](https://github.com/ageron/handson-ml2): 22886
[ray-project/ray](https://github.com/ray-project/ray): 22047
[jina-ai/jina](https://github.com/jina-ai/jina): 16052
[RasaHQ/rasa](https://github.com/RasaHQ/rasa): 14844
[microsoft/recommenders](https://github.com/microsoft/recommenders): 14017
[deeplearning4j/deeplearning4j](https://github.com/deeplearning4j/deeplearning4j): 12617
[flairNLP/flair](https://github.com/flairNLP/flair): 12034
[allenai/allennlp](https://github.com/allenai/allennlp): 11198
### Package & Repository fork count
This section shows the package and repository fork count, individually.
Package | Repository
:-------------------------:|:-------------------------:
 | 
There are 55 packages that have more than 200 forks.
There are 128 repositories that have more than 200 forks.
The top 10 in each category are the following:
*Package*
[hankcs/HanLP](https://github.com/hankcs/HanLP): 7388
[fastai/fastai](https://github.com/fastai/fastai): 7297
[fastai/fastbook](https://github.com/fastai/fastbook): 6033
[slundberg/shap](https://github.com/slundberg/shap): 2646
[microsoft/recommenders](https://github.com/microsoft/recommenders): 2473
[allenai/allennlp](https://github.com/allenai/allennlp): 2218
[jina-ai/clip-as-service](https://github.com/jina-ai/clip-as-service): 1972
[jina-ai/jina](https://github.com/jina-ai/jina): 1967
[flairNLP/flair](https://github.com/flairNLP/flair): 1934
[huggingface/datasets](https://github.com/huggingface/datasets): 1841
*Repository*
[huggingface/transformers](https://github.com/huggingface/transformers): 16159
[ageron/handson-ml2](https://github.com/ageron/handson-ml2): 11053
[hankcs/HanLP](https://github.com/hankcs/HanLP): 7389
[aws/amazon-sagemaker-examples](https://github.com/aws/amazon-sagemaker-examples): 5493
[deeplearning4j/deeplearning4j](https://github.com/deeplearning4j/deeplearning4j): 4933
[RasaHQ/rasa](https://github.com/RasaHQ/rasa): 4106
[ray-project/ray](https://github.com/ray-project/ray): 3876
[apache/beam](https://github.com/apache/beam): 3648
[plotly/dash-sample-apps](https://github.com/plotly/dash-sample-apps): 2795
[microsoft/recommenders](https://github.com/microsoft/recommenders): 2473
| open-source-metrics/transformers-dependents | [
"license:apache-2.0",
"github-stars",
"region:us"
] | 2022-09-02T12:05:00+00:00 | {"license": "apache-2.0", "pretty_name": "transformers metrics", "tags": ["github-stars"]} | 2024-02-17T02:33:56+00:00 | [] | [] | TAGS
#license-apache-2.0 #github-stars #region-us
| transformers metrics
====================
This dataset contains metrics about the huggingface/transformers package.
Number of repositories in the dataset: 27067
Number of packages in the dataset: 823
Package dependents
------------------
This contains the data available in the used-by
tab on GitHub.
### Package & Repository star count
This section shows the package and repository star count, individually.
There are 65 packages that have more than 1000 stars.
There are 140 repositories that have more than 1000 stars.
The top 10 in each category are the following:
*Package*
hankcs/HanLP: 26958
fastai/fastai: 22774
slundberg/shap: 17482
fastai/fastbook: 16052
jina-ai/jina: 16052
huggingface/datasets: 14101
microsoft/recommenders: 14017
borisdayma/dalle-mini: 12872
flairNLP/flair: 12033
allenai/allennlp: 11198
*Repository*
huggingface/transformers: 70487
hankcs/HanLP: 26959
ageron/handson-ml2: 22886
ray-project/ray: 22047
jina-ai/jina: 16052
RasaHQ/rasa: 14844
microsoft/recommenders: 14017
deeplearning4j/deeplearning4j: 12617
flairNLP/flair: 12034
allenai/allennlp: 11198
### Package & Repository fork count
This section shows the package and repository fork count, individually.
There are 55 packages that have more than 200 forks.
There are 128 repositories that have more than 200 forks.
The top 10 in each category are the following:
*Package*
hankcs/HanLP: 7388
fastai/fastai: 7297
fastai/fastbook: 6033
slundberg/shap: 2646
microsoft/recommenders: 2473
allenai/allennlp: 2218
jina-ai/clip-as-service: 1972
jina-ai/jina: 1967
flairNLP/flair: 1934
huggingface/datasets: 1841
*Repository*
huggingface/transformers: 16159
ageron/handson-ml2: 11053
hankcs/HanLP: 7389
aws/amazon-sagemaker-examples: 5493
deeplearning4j/deeplearning4j: 4933
RasaHQ/rasa: 4106
ray-project/ray: 3876
apache/beam: 3648
plotly/dash-sample-apps: 2795
microsoft/recommenders: 2473
| [
"### Package & Repository star count\n\n\nThis section shows the package and repository star count, individually.\n\n\n\nThere are 65 packages that have more than 1000 stars.\n\n\nThere are 140 repositories that have more than 1000 stars.\n\n\nThe top 10 in each category are the following:\n\n\n*Package*\n\n\nhankcs/HanLP: 26958\n\n\nfastai/fastai: 22774\n\n\nslundberg/shap: 17482\n\n\nfastai/fastbook: 16052\n\n\njina-ai/jina: 16052\n\n\nhuggingface/datasets: 14101\n\n\nmicrosoft/recommenders: 14017\n\n\nborisdayma/dalle-mini: 12872\n\n\nflairNLP/flair: 12033\n\n\nallenai/allennlp: 11198\n\n\n*Repository*\n\n\nhuggingface/transformers: 70487\n\n\nhankcs/HanLP: 26959\n\n\nageron/handson-ml2: 22886\n\n\nray-project/ray: 22047\n\n\njina-ai/jina: 16052\n\n\nRasaHQ/rasa: 14844\n\n\nmicrosoft/recommenders: 14017\n\n\ndeeplearning4j/deeplearning4j: 12617\n\n\nflairNLP/flair: 12034\n\n\nallenai/allennlp: 11198",
"### Package & Repository fork count\n\n\nThis section shows the package and repository fork count, individually.\n\n\n\nThere are 55 packages that have more than 200 forks.\n\n\nThere are 128 repositories that have more than 200 forks.\n\n\nThe top 10 in each category are the following:\n\n\n*Package*\n\n\nhankcs/HanLP: 7388\n\n\nfastai/fastai: 7297\n\n\nfastai/fastbook: 6033\n\n\nslundberg/shap: 2646\n\n\nmicrosoft/recommenders: 2473\n\n\nallenai/allennlp: 2218\n\n\njina-ai/clip-as-service: 1972\n\n\njina-ai/jina: 1967\n\n\nflairNLP/flair: 1934\n\n\nhuggingface/datasets: 1841\n\n\n*Repository*\n\n\nhuggingface/transformers: 16159\n\n\nageron/handson-ml2: 11053\n\n\nhankcs/HanLP: 7389\n\n\naws/amazon-sagemaker-examples: 5493\n\n\ndeeplearning4j/deeplearning4j: 4933\n\n\nRasaHQ/rasa: 4106\n\n\nray-project/ray: 3876\n\n\napache/beam: 3648\n\n\nplotly/dash-sample-apps: 2795\n\n\nmicrosoft/recommenders: 2473"
] | [
"TAGS\n#license-apache-2.0 #github-stars #region-us \n",
"### Package & Repository star count\n\n\nThis section shows the package and repository star count, individually.\n\n\n\nThere are 65 packages that have more than 1000 stars.\n\n\nThere are 140 repositories that have more than 1000 stars.\n\n\nThe top 10 in each category are the following:\n\n\n*Package*\n\n\nhankcs/HanLP: 26958\n\n\nfastai/fastai: 22774\n\n\nslundberg/shap: 17482\n\n\nfastai/fastbook: 16052\n\n\njina-ai/jina: 16052\n\n\nhuggingface/datasets: 14101\n\n\nmicrosoft/recommenders: 14017\n\n\nborisdayma/dalle-mini: 12872\n\n\nflairNLP/flair: 12033\n\n\nallenai/allennlp: 11198\n\n\n*Repository*\n\n\nhuggingface/transformers: 70487\n\n\nhankcs/HanLP: 26959\n\n\nageron/handson-ml2: 22886\n\n\nray-project/ray: 22047\n\n\njina-ai/jina: 16052\n\n\nRasaHQ/rasa: 14844\n\n\nmicrosoft/recommenders: 14017\n\n\ndeeplearning4j/deeplearning4j: 12617\n\n\nflairNLP/flair: 12034\n\n\nallenai/allennlp: 11198",
"### Package & Repository fork count\n\n\nThis section shows the package and repository fork count, individually.\n\n\n\nThere are 55 packages that have more than 200 forks.\n\n\nThere are 128 repositories that have more than 200 forks.\n\n\nThe top 10 in each category are the following:\n\n\n*Package*\n\n\nhankcs/HanLP: 7388\n\n\nfastai/fastai: 7297\n\n\nfastai/fastbook: 6033\n\n\nslundberg/shap: 2646\n\n\nmicrosoft/recommenders: 2473\n\n\nallenai/allennlp: 2218\n\n\njina-ai/clip-as-service: 1972\n\n\njina-ai/jina: 1967\n\n\nflairNLP/flair: 1934\n\n\nhuggingface/datasets: 1841\n\n\n*Repository*\n\n\nhuggingface/transformers: 16159\n\n\nageron/handson-ml2: 11053\n\n\nhankcs/HanLP: 7389\n\n\naws/amazon-sagemaker-examples: 5493\n\n\ndeeplearning4j/deeplearning4j: 4933\n\n\nRasaHQ/rasa: 4106\n\n\nray-project/ray: 3876\n\n\napache/beam: 3648\n\n\nplotly/dash-sample-apps: 2795\n\n\nmicrosoft/recommenders: 2473"
] | [
20,
261,
267
] | [
"passage: TAGS\n#license-apache-2.0 #github-stars #region-us \n### Package & Repository star count\n\n\nThis section shows the package and repository star count, individually.\n\n\n\nThere are 65 packages that have more than 1000 stars.\n\n\nThere are 140 repositories that have more than 1000 stars.\n\n\nThe top 10 in each category are the following:\n\n\n*Package*\n\n\nhankcs/HanLP: 26958\n\n\nfastai/fastai: 22774\n\n\nslundberg/shap: 17482\n\n\nfastai/fastbook: 16052\n\n\njina-ai/jina: 16052\n\n\nhuggingface/datasets: 14101\n\n\nmicrosoft/recommenders: 14017\n\n\nborisdayma/dalle-mini: 12872\n\n\nflairNLP/flair: 12033\n\n\nallenai/allennlp: 11198\n\n\n*Repository*\n\n\nhuggingface/transformers: 70487\n\n\nhankcs/HanLP: 26959\n\n\nageron/handson-ml2: 22886\n\n\nray-project/ray: 22047\n\n\njina-ai/jina: 16052\n\n\nRasaHQ/rasa: 14844\n\n\nmicrosoft/recommenders: 14017\n\n\ndeeplearning4j/deeplearning4j: 12617\n\n\nflairNLP/flair: 12034\n\n\nallenai/allennlp: 11198"
] |
499bfa2c7cd0923311f8f2c4b24c5ffe462db922 | # AutoTrain Dataset for project: dog-classifiers
## Dataset Description
This dataset has been automatically processed by AutoTrain for project dog-classifiers.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<474x592 RGB PIL image>",
"target": 1
},
{
"image": "<474x296 RGB PIL image>",
"target": 1
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(num_classes=5, names=['akita inu', 'corgi', 'leonberger', 'samoyed', 'shiba inu'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 598 |
| valid | 150 |
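For illustration, a minimal loading sketch; the split names come from the table above, and access may require authentication depending on the repo's visibility.

```python
from datasets import load_dataset

ds = load_dataset("julien-c/autotrain-data-dog-classifiers")
example = ds["train"][0]
example["image"]   # a PIL image
example["target"]  # index into ['akita inu', 'corgi', 'leonberger', 'samoyed', 'shiba inu']
```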
| julien-c/autotrain-data-dog-classifiers | [
"task_categories:image-classification",
"region:us"
] | 2022-09-02T14:21:11+00:00 | {"task_categories": ["image-classification"]} | 2022-09-02T15:13:38+00:00 | [] | [] | TAGS
#task_categories-image-classification #region-us
| AutoTrain Dataset for project: dog-classifiers
==============================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project dog-classifiers.
### Languages
The BCP-47 code for the dataset's language is unk.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#task_categories-image-classification #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
17,
27,
17,
23,
27
] | [
"passage: TAGS\n#task_categories-image-classification #region-us \n### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nA sample from this dataset looks as follows:### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
61c35ebc14a9aec260ece1cb8061d3997663ea37 |
# SST-2 Spanish
## A Spanish translation (using [EasyNMT](https://github.com/UKPLab/EasyNMT)) of the [SST-2 Dataset](https://huggingface.co/datasets/sst2)
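For reference, a minimal sketch of the kind of EasyNMT call used for such a translation; the choice of the `opus-mt` model is an assumption, since the card does not name the underlying translation model.

```python
from easynmt import EasyNMT

model = EasyNMT("opus-mt")  # model choice is an assumption
print(model.translate("this movie was great", target_lang="es"))
```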
#### For more information check the official [Model Card](https://huggingface.co/datasets/sst2) | mrm8488/sst2-es-mt | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:sst2",
"language:es",
"license:unknown",
"region:us"
] | 2022-09-02T19:28:50+00:00 | {"language": ["es"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["sst2"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "pretty_name": "Stanford Sentiment Treebank v2"} | 2022-09-03T15:41:42+00:00 | [] | [
"es"
] | TAGS
#task_categories-text-classification #task_ids-sentiment-classification #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-sst2 #language-Spanish #license-unknown #region-us
|
# SST-2 Spanish
## A Spanish translation (using EasyNMT) of the SST-2 Dataset
#### For more information check the official Model Card | [
"# STT-2 Spanish",
"## A Spanish translation (using EasyNMT) of the SST-2 Dataset",
"#### For more information check the official Model Card"
] | [
"TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-sst2 #language-Spanish #license-unknown #region-us \n",
"# STT-2 Spanish",
"## A Spanish translation (using EasyNMT) of the SST-2 Dataset",
"#### For more information check the official Model Card"
] | [
70,
5,
18,
10
] | [
"passage: TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-sst2 #language-Spanish #license-unknown #region-us \n# STT-2 Spanish## A Spanish translation (using EasyNMT) of the SST-2 Dataset#### For more information check the official Model Card"
] |
f881ecdb455e1ef7b7e70164df594a98ddf3424e | # GoEmotions Spanish
## A Spanish translation (using [EasyNMT](https://github.com/UKPLab/EasyNMT)) of the [GoEmotions](https://huggingface.co/datasets/go_emotions) dataset.
#### For more information check the official [Model Card](https://huggingface.co/datasets/go_emotions) | mrm8488/go_emotions-es-mt | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:multi-label-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"size_categories:10K<n<100K",
"source_datasets:go_emotions",
"language:es",
"license:apache-2.0",
"emotion",
"region:us"
] | 2022-09-02T19:59:52+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["found"], "language": ["es"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M", "10K<n<100K"], "source_datasets": ["go_emotions"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification", "multi-label-classification"], "pretty_name": "GoEmotions", "tags": ["emotion"]} | 2022-10-20T18:23:36+00:00 | [] | [
"es"
] | TAGS
#task_categories-text-classification #task_ids-multi-class-classification #task_ids-multi-label-classification #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #size_categories-10K<n<100K #source_datasets-go_emotions #language-Spanish #license-apache-2.0 #emotion #region-us
| # GoEmotions Spanish
## A Spanish translation (using EasyNMT) of the GoEmotions dataset.
#### For more information check the official Model Card | [
"# GoEmotions Spanish",
"## A Spanish translation (using EasyNMT) of the GoEmotions dataset.",
"#### For more information check the official Model Card"
] | [
"TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #task_ids-multi-label-classification #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #size_categories-10K<n<100K #source_datasets-go_emotions #language-Spanish #license-apache-2.0 #emotion #region-us \n",
"# GoEmotions Spanish",
"## A Spanish translation (using EasyNMT) of the GoEmotions dataset.",
"#### For more information check the official Model Card"
] | [
121,
6,
20,
10
] | [
"passage: TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #task_ids-multi-label-classification #annotations_creators-crowdsourced #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #size_categories-10K<n<100K #source_datasets-go_emotions #language-Spanish #license-apache-2.0 #emotion #region-us \n# GoEmotions Spanish## A Spanish translation (using EasyNMT) of the GoEmotions dataset.#### For more information check the official Model Card"
] |
21747468e4ffa56f4d4352d1cac863e46ca6b68f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: pszemraj/led-large-book-summary
* Dataset: billsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@pszemraj](https://huggingface.co/pszemraj) for evaluating this model. | autoevaluate/autoeval-staging-eval-billsum-default-6d3727-15406134 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-02T22:05:06+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["billsum"], "eval_info": {"task": "summarization", "model": "pszemraj/led-large-book-summary", "metrics": [], "dataset_name": "billsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "text", "target": "summary"}}} | 2022-09-03T14:34:02+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: pszemraj/led-large-book-summary
* Dataset: billsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @pszemraj for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/led-large-book-summary\n* Dataset: billsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/led-large-book-summary\n* Dataset: billsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @pszemraj for evaluating this model."
] | [
13,
88,
16
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: pszemraj/led-large-book-summary\n* Dataset: billsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @pszemraj for evaluating this model."
] |
0bb175d32c10b0d335b2b6c845f63669f7f7cc41 |
### dataset description
We downloaded the Open Reaction Database (ORD) dataset from [here](https://github.com/open-reaction-database/ord-data). As a preprocessing step, we removed duplicate entries and canonicalized them using RDKit.
We used the following function to canonicalize the data and removed some SMILES that cannot be read by RDKit.
```python
from rdkit import Chem

def canonicalize(mol):
    # Round-trip through RDKit to obtain the canonical SMILES string
    mol = Chem.MolToSmiles(Chem.MolFromSmiles(mol), True)
    return mol
```
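As a quick check of the behavior (standard RDKit output), the Kekulé SMILES for benzene canonicalizes to its aromatic form:

```python
# Continuing from the canonicalize() definition above
print(canonicalize("C1=CC=CC=C1"))  # -> "c1ccccc1"
```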
We randomly split the preprocessed data into train, validation and test. The ratio is 8:1:1. | sagawa/ord-uniq-canonicalized | [
"task_categories:text2text-generation",
"task_categories:translation",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"license:apache-2.0",
"ord",
"chemical",
"reaction",
"region:us"
] | 2022-09-03T03:28:23+00:00 | {"annotations_creators": [], "language_creators": [], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text2text-generation", "translation"], "task_ids": [], "pretty_name": "canonicalized ORD", "tags": ["ord", "chemical", "reaction"]} | 2022-09-04T01:41:10+00:00 | [] | [] | TAGS
#task_categories-text2text-generation #task_categories-translation #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #license-apache-2.0 #ord #chemical #reaction #region-us
|
### dataset description
We downloaded the Open Reaction Database (ORD) dataset from here. As a preprocessing step, we removed duplicate entries and canonicalized them using RDKit.
We used the following function to canonicalize the data and removed some SMILES that cannot be read by RDKit.
We randomly split the preprocessed data into train, validation and test. The ratio is 8:1:1. | [
"### dataset description\nWe downloaded open-reaction-database(ORD) dataset from here. As a preprocess, we removed overlapping data and canonicalized them using RDKit.\nWe used the following function to canonicalize the data and removed some SMILES that cannot be read by RDKit.\n \n\n\nWe randomly split the preprocessed data into train, validation and test. The ratio is 8:1:1."
] | [
"TAGS\n#task_categories-text2text-generation #task_categories-translation #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #license-apache-2.0 #ord #chemical #reaction #region-us \n",
"### dataset description\nWe downloaded open-reaction-database(ORD) dataset from here. As a preprocess, we removed overlapping data and canonicalized them using RDKit.\nWe used the following function to canonicalize the data and removed some SMILES that cannot be read by RDKit.\n \n\n\nWe randomly split the preprocessed data into train, validation and test. The ratio is 8:1:1."
] | [
72,
94
] | [
"passage: TAGS\n#task_categories-text2text-generation #task_categories-translation #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #license-apache-2.0 #ord #chemical #reaction #region-us \n### dataset description\nWe downloaded open-reaction-database(ORD) dataset from here. As a preprocess, we removed overlapping data and canonicalized them using RDKit.\nWe used the following function to canonicalize the data and removed some SMILES that cannot be read by RDKit.\n \n\n\nWe randomly split the preprocessed data into train, validation and test. The ratio is 8:1:1."
] |
f83219601635a0a80fc99c13a9ca37f99ef34f0a |
### dataset description
We downloaded the PubChem-10m dataset from [here](https://deepchemdata.s3-us-west-1.amazonaws.com/datasets/pubchem_10m.txt.zip) and canonicalized it.
We used the following function to canonicalize the data and removed some SMILES that cannot be read by RDKit.
```python
from rdkit import Chem

def canonicalize(mol):
    # Round-trip through RDKit to obtain the canonical SMILES string
    mol = Chem.MolToSmiles(Chem.MolFromSmiles(mol), True)
    return mol
```
We randomly split the preprocessed data into train and validation. The ratio is 9 : 1. | sagawa/pubchem-10m-canonicalized | [
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"license:apache-2.0",
"PubChem",
"chemical",
"SMILES",
"region:us"
] | 2022-09-03T04:35:49+00:00 | {"annotations_creators": [], "language_creators": ["expert-generated"], "language": [], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": [], "task_ids": [], "pretty_name": "canonicalized PubChem-10m", "tags": ["PubChem", "chemical", "SMILES"]} | 2022-09-04T01:18:37+00:00 | [] | [] | TAGS
#language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #license-apache-2.0 #PubChem #chemical #SMILES #region-us
|
### dataset description
We downloaded the PubChem-10m dataset from here and canonicalized it.
We used the following function to canonicalize the data and removed some SMILES that cannot be read by RDKit.
We randomly split the preprocessed data into train and validation. The ratio is 9 : 1. | [
"### dataset description\nWe downloaded PubChem-10m dataset from here and canonicalized it.\nWe used the following function to canonicalize the data and removed some SMILES that cannot be read by RDKit.\n \n\n\nWe randomly split the preprocessed data into train and validation. The ratio is 9 : 1."
] | [
"TAGS\n#language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #license-apache-2.0 #PubChem #chemical #SMILES #region-us \n",
"### dataset description\nWe downloaded PubChem-10m dataset from here and canonicalized it.\nWe used the following function to canonicalize the data and removed some SMILES that cannot be read by RDKit.\n \n\n\nWe randomly split the preprocessed data into train and validation. The ratio is 9 : 1."
] | [
64,
70
] | [
"passage: TAGS\n#language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #license-apache-2.0 #PubChem #chemical #SMILES #region-us \n### dataset description\nWe downloaded PubChem-10m dataset from here and canonicalized it.\nWe used the following function to canonicalize the data and removed some SMILES that cannot be read by RDKit.\n \n\n\nWe randomly split the preprocessed data into train and validation. The ratio is 9 : 1."
] |
5497e797c551617bc1d94a859e4f3429f3d0b32d |
### dataset description
We downloaded the ZINC dataset from [here](https://zinc15.docking.org/) and canonicalized it.
We used the following function to canonicalize the data and removed some SMILES that cannot be read by RDKit.
```python
from rdkit import Chem

def canonicalize(mol):
    # Round-trip through RDKit to obtain the canonical SMILES string
    mol = Chem.MolToSmiles(Chem.MolFromSmiles(mol), True)
    return mol
```
We randomly split the preprocessed data into train and validation. The ratio is 9 : 1. | sagawa/ZINC-canonicalized | [
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10M<n<100M",
"source_datasets:original",
"license:apache-2.0",
"ZINC",
"chemical",
"SMILES",
"region:us"
] | 2022-09-03T05:01:18+00:00 | {"annotations_creators": [], "language_creators": ["expert-generated"], "language": [], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["10M<n<100M"], "source_datasets": ["original"], "task_categories": [], "task_ids": [], "pretty_name": "canonicalized ZINC", "tags": ["ZINC", "chemical", "SMILES"]} | 2022-09-04T01:21:08+00:00 | [] | [] | TAGS
#language_creators-expert-generated #multilinguality-monolingual #size_categories-10M<n<100M #source_datasets-original #license-apache-2.0 #ZINC #chemical #SMILES #region-us
|
### dataset description
We downloaded the ZINC dataset from here and canonicalized it.
We used the following function to canonicalize the data and removed some SMILES that cannot be read by RDKit.
We randomly split the preprocessed data into train and validation. The ratio is 9 : 1. | [
"### dataset description\nWe downloaded ZINC dataset from here and canonicalized it.\nWe used the following function to canonicalize the data and removed some SMILES that cannot be read by RDKit.\n \n\n\nWe randomly split the preprocessed data into train and validation. The ratio is 9 : 1."
] | [
"TAGS\n#language_creators-expert-generated #multilinguality-monolingual #size_categories-10M<n<100M #source_datasets-original #license-apache-2.0 #ZINC #chemical #SMILES #region-us \n",
"### dataset description\nWe downloaded ZINC dataset from here and canonicalized it.\nWe used the following function to canonicalize the data and removed some SMILES that cannot be read by RDKit.\n \n\n\nWe randomly split the preprocessed data into train and validation. The ratio is 9 : 1."
] | [
63,
68
] | [
"passage: TAGS\n#language_creators-expert-generated #multilinguality-monolingual #size_categories-10M<n<100M #source_datasets-original #license-apache-2.0 #ZINC #chemical #SMILES #region-us \n### dataset description\nWe downloaded ZINC dataset from here and canonicalized it.\nWe used the following function to canonicalize the data and removed some SMILES that cannot be read by RDKit.\n \n\n\nWe randomly split the preprocessed data into train and validation. The ratio is 9 : 1."
] |
0b533459841603d5e5c20c41291bc8c981c49546 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: navsad/navid_test_bert
* Dataset: glue
* Config: cola
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@yooo](https://huggingface.co/yooo) for evaluating this model. | autoevaluate/autoeval-staging-eval-glue-cola-42256f-15426136 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-03T12:50:29+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "navsad/navid_test_bert", "metrics": [], "dataset_name": "glue", "dataset_config": "cola", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}} | 2022-09-03T12:50:56+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Binary Text Classification
* Model: navsad/navid_test_bert
* Dataset: glue
* Config: cola
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @yooo for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: navsad/navid_test_bert\n* Dataset: glue\n* Config: cola\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @yooo for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: navsad/navid_test_bert\n* Dataset: glue\n* Config: cola\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @yooo for evaluating this model."
] | [
13,
87,
15
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: navsad/navid_test_bert\n* Dataset: glue\n* Config: cola\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @yooo for evaluating this model."
] |
7e22c8f616d706bebd86162860feabcf1c6affc4 |
# Dataset Card for Yandex_Jobs
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
This is a dataset of more than 600 IT vacancies in Russian, parsed from the Telegram channel https://t.me/ya_jobs. All the texts are fully structured, with no missing values.
### Supported Tasks and Leaderboards
`text-generation` with the 'Raw text' column.
`summarization` for generating the header from the rest of the vacancy info.
`multiple-choice` for the hashtags (choosing multiple from all hashtags available in the dataset).
### Languages
The text in the dataset is only in Russian. The associated BCP-47 code is `ru`.
## Dataset Structure
### Data Instances
The data is parsed from vacancies of the Russian IT company [Yandex](https://ya.ru/).
An example from the set looks as follows:
```
{'Header': 'Разработчик интерфейсов в группу разработки спецпроектов',
'Emoji': '🎳',
'Description': 'Конструктор лендингов — это инструмент Яндекса, который позволяет пользователям создавать лендинги и турбо-лендинги для Яндекс.Директа. Турбо — режим ускоренной загрузки страниц для показа на мобильных. У нас современный стек, смелые планы и высокая динамика.\nМы ищем опытного и открытого новому фронтенд-разработчика.',
'Requirements': '• отлично знаете JavaScript
• разрабатывали на Node.js, применяли фреймворк Express
• умеете создавать веб-приложения на React + Redux
• знаете HTML и CSS, особенности их отображения в браузерах',
'Tasks': '• разрабатывать интерфейсы',
'Pluses': '• писали интеграционные, модульные, функциональные или браузерные тесты
• умеете разворачивать и администрировать веб-сервисы: собирать Docker-образы, настраивать мониторинги, выкладывать в облачные системы, отлаживать в продакшене
• работали с реляционными БД PostgreSQL',
'Hashtags': '#фронтенд #турбо #JS',
'Link': 'https://ya.cc/t/t7E3UsmVSKs6L',
'Raw text': 'Разработчик интерфейсов в группу разработки спецпроектов🎳
Конструктор лендингов — это инструмент Яндекса, который позволяет пользователям создавать лендинги и турбо-лендинги для Яндекс.Директа. Турбо — режим ускоренной загрузки страниц для показа на мобильных. У нас современный стек, смелые планы и высокая динамика.
Мы ищем опытного и открытого новому фронтенд-разработчика.
Мы ждем, что вы:
• отлично знаете JavaScript
• разрабатывали на Node.js, применяли фреймворк Express
• умеете создавать веб-приложения на React + Redux
• знаете HTML и CSS, особенности их отображения в браузерах
Что нужно делать:
• разрабатывать интерфейсы
Будет плюсом, если вы:
• писали интеграционные, модульные, функциональные или браузерные тесты
• умеете разворачивать и администрировать веб-сервисы: собирать Docker-образы, настраивать мониторинги, выкладывать в облачные системы, отлаживать в продакшене
• работали с реляционными БД PostgreSQL
https://ya.cc/t/t7E3UsmVSKs6L
#фронтенд #турбо #JS'
}
```
### Data Fields
- `Header`: A string with a position title (str)
- `Emoji`: Emoji used at the end of the position title (usually associated with the position) (str)
- `Description`: Short description of the vacancy (str)
- `Requirements`: A couple of required technologies/programming languages/experience (str)
- `Tasks`: Examples of the tasks of the job position (str)
- `Pluses`: A couple of great points for the applicant to have (technologies/experience/etc.) (str)
- `Hashtags`: A list of hashtags associated with the job (usually programming languages) (str)
- `Link`: A link to a job description (there may be more information, but it is not checked) (str)
- `Raw text`: Raw text with all the formatting from the channel. Built from the other fields. (str)
### Data Splits
There are not enough examples yet to split into train/test/validation, in my opinion.
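A minimal loading sketch is shown below; the single default split is assumed to be named `train`, as is standard for unsplit Hub datasets.

```python
from datasets import load_dataset

ds = load_dataset("Kirili4ik/yandex_jobs", split="train")  # split name assumed
print(ds[0]["Header"])    # position title, e.g. a summarization target
print(ds[0]["Raw text"])  # full formatted posting, e.g. for language modeling
```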
## Dataset Creation
The data was downloaded and parsed from the Telegram channel https://t.me/ya_jobs on 03.09.2022. All unparsed examples and those missing any field were deleted (from 1600 vacancies down to 600 with no missing fields such as emojis or links).
## Considerations for Using the Data
These vacancies come from a single IT company (Yandex). This means they can be quite specific and probably cannot be generalized to all vacancies, or even all IT vacancies.
## Contributions
- **Point of Contact and Author:** Kirill Gelvan (Telegram: @kirili4ik) | Kirili4ik/yandex_jobs | [
"task_categories:text-generation",
"task_categories:summarization",
"task_categories:multiple-choice",
"task_ids:language-modeling",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:n<1K",
"source_datasets:original",
"language:ru",
"license:unknown",
"vacancies",
"jobs",
"ru",
"yandex",
"region:us"
] | 2022-09-03T16:22:02+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["ru"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["n<1K"], "source_datasets": ["original"], "task_categories": ["text-generation", "summarization", "multiple-choice"], "task_ids": ["language-modeling"], "paperswithcode_id": "climate-fever", "pretty_name": "yandex_jobs", "tags": ["vacancies", "jobs", "ru", "yandex"]} | 2022-09-03T16:55:00+00:00 | [] | [
"ru"
] | TAGS
#task_categories-text-generation #task_categories-summarization #task_categories-multiple-choice #task_ids-language-modeling #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-Russian #license-unknown #vacancies #jobs #ru #yandex #region-us
|
# Dataset Card for Yandex_Jobs
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Considerations for Using the Data
- Contributions
## Dataset Description
### Dataset Summary
This is a dataset of more than 600 IT vacancies in Russian, parsed from the Telegram channel https://t.me/ya_jobs. All the texts are fully structured, with no missing values.
### Supported Tasks and Leaderboards
'text-generation' with the 'Raw text' column.
'summarization' for generating the header from the rest of the vacancy info.
'multiple-choice' for the hashtags (choosing multiple from all hashtags available in the dataset).
### Languages
The text in the dataset is only in Russian. The associated BCP-47 code is 'ru'.
## Dataset Structure
### Data Instances
The data is parsed from vacancies of the Russian IT company Yandex.
An example from the set looks as follows:
### Data Fields
- 'Header': A string with a position title (str)
- 'Emoji': Emoji used at the end of the position title (usually associated with the position) (str)
- 'Description': Short description of the vacancy (str)
- 'Requirements': A couple of required technologies/programming languages/experience (str)
- 'Tasks': Examples of the tasks of the job position (str)
- 'Pluses': A couple of great points for the applicant to have (technologies/experience/etc.) (str)
- 'Hashtags': A list of hashtags associated with the job (usually programming languages) (str)
- 'Link': A link to a job description (there may be more information, but it is not checked) (str)
- 'Raw text': Raw text with all the formatting from the channel. Built from the other fields. (str)
### Data Splits
There are not enough examples yet to split into train/test/validation, in my opinion.
## Dataset Creation
The data was downloaded and parsed from the Telegram channel https://t.me/ya_jobs on 03.09.2022. All unparsed examples and those missing any field were deleted (from 1600 vacancies down to 600 with no missing fields such as emojis or links).
## Considerations for Using the Data
These vacancies come from a single IT company (Yandex). This means they can be quite specific and probably cannot be generalized to all vacancies, or even all IT vacancies.
## Contributions
- Point of Contact and Author: Kirill Gelvan | [
"# Dataset Card for Yandex_Jobs",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n- Considerations for Using the Data\n- Contributions",
"## Dataset Description",
"### Dataset Summary\n\nThis is a dataset of more than 600 IT vacancies in Russian from parsing telegram channel https://t.me/ya_jobs. All the texts are perfectly structured, no missing values.",
"### Supported Tasks and Leaderboards\n\n'text-generation' with the 'Raw text column'. \n\n'summarization' as for getting from all the info the header. \n\n'multiple-choice' as for the hashtags (to choose multiple from all available in the dataset)",
"### Languages\n\nThe text in the dataset is in only in Russian. The associated BCP-47 code is 'ru'.",
"## Dataset Structure",
"### Data Instances\n\nThe data is parsed from a vacancy of Russian IT company Yandex.\n\nAn example from the set looks as follows:",
"### Data Fields\n\n- 'Header': A string with a position title (str)\n- 'Emoji': Emoji that is used at the end of the title position (usually asosiated with the position) (str)\n- 'Description': Short description of the vacancy (str)\n- 'Requirements': A couple of required technologies/programming languages/experience (str)\n- 'Tasks': Examples of the tasks of the job position (str)\n- 'Pluses': A couple of great points for the applicant to have (technologies/experience/etc)\n- 'Hashtags': A list of hashtags assosiated with the job (usually programming languages) (str)\n- 'Link': A link to a job description (there may be more information, but it is not checked) (str)\n- 'Raw text': Raw text with all the formatiing from the channel. Created with other fields. (str)",
"### Data Splits\n\nThere is not enough examples yet to split it to train/test/val in my opinion.",
"## Dataset Creation\n\nIt downloaded and parsed from telegram channel https://t.me/ya_jobs 03.09.2022. All the unparsed examples and the ones missing any field are deleted (from 1600 vacancies to only 600 without any missing fields like emojis or links)",
"## Considerations for Using the Data\n\nThese vacancies are for only one IT company (yandex). This means they can be pretty specific and probably can not be generalized as any vacancies or even any IT vacancies.",
"## Contributions\n\n- Point of Contact and Author: Kirill Gelvan"
] | [
"TAGS\n#task_categories-text-generation #task_categories-summarization #task_categories-multiple-choice #task_ids-language-modeling #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-Russian #license-unknown #vacancies #jobs #ru #yandex #region-us \n",
"# Dataset Card for Yandex_Jobs",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n- Considerations for Using the Data\n- Contributions",
"## Dataset Description",
"### Dataset Summary\n\nThis is a dataset of more than 600 IT vacancies in Russian from parsing telegram channel https://t.me/ya_jobs. All the texts are perfectly structured, no missing values.",
"### Supported Tasks and Leaderboards\n\n'text-generation' with the 'Raw text column'. \n\n'summarization' as for getting from all the info the header. \n\n'multiple-choice' as for the hashtags (to choose multiple from all available in the dataset)",
"### Languages\n\nThe text in the dataset is in only in Russian. The associated BCP-47 code is 'ru'.",
"## Dataset Structure",
"### Data Instances\n\nThe data is parsed from a vacancy of Russian IT company Yandex.\n\nAn example from the set looks as follows:",
"### Data Fields\n\n- 'Header': A string with a position title (str)\n- 'Emoji': Emoji that is used at the end of the title position (usually asosiated with the position) (str)\n- 'Description': Short description of the vacancy (str)\n- 'Requirements': A couple of required technologies/programming languages/experience (str)\n- 'Tasks': Examples of the tasks of the job position (str)\n- 'Pluses': A couple of great points for the applicant to have (technologies/experience/etc)\n- 'Hashtags': A list of hashtags assosiated with the job (usually programming languages) (str)\n- 'Link': A link to a job description (there may be more information, but it is not checked) (str)\n- 'Raw text': Raw text with all the formatiing from the channel. Created with other fields. (str)",
"### Data Splits\n\nThere is not enough examples yet to split it to train/test/val in my opinion.",
"## Dataset Creation\n\nIt downloaded and parsed from telegram channel https://t.me/ya_jobs 03.09.2022. All the unparsed examples and the ones missing any field are deleted (from 1600 vacancies to only 600 without any missing fields like emojis or links)",
"## Considerations for Using the Data\n\nThese vacancies are for only one IT company (yandex). This means they can be pretty specific and probably can not be generalized as any vacancies or even any IT vacancies.",
"## Contributions\n\n- Point of Contact and Author: Kirill Gelvan"
] | [
120,
10,
62,
4,
50,
67,
27,
6,
32,
225,
25,
64,
46,
15
] | [
"passage: TAGS\n#task_categories-text-generation #task_categories-summarization #task_categories-multiple-choice #task_ids-language-modeling #annotations_creators-expert-generated #language_creators-found #multilinguality-monolingual #size_categories-n<1K #source_datasets-original #language-Russian #license-unknown #vacancies #jobs #ru #yandex #region-us \n# Dataset Card for Yandex_Jobs## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n- Considerations for Using the Data\n- Contributions## Dataset Description### Dataset Summary\n\nThis is a dataset of more than 600 IT vacancies in Russian from parsing telegram channel https://t.me/ya_jobs. All the texts are perfectly structured, no missing values.### Supported Tasks and Leaderboards\n\n'text-generation' with the 'Raw text column'. \n\n'summarization' as for getting from all the info the header. \n\n'multiple-choice' as for the hashtags (to choose multiple from all available in the dataset)### Languages\n\nThe text in the dataset is in only in Russian. The associated BCP-47 code is 'ru'.## Dataset Structure### Data Instances\n\nThe data is parsed from a vacancy of Russian IT company Yandex.\n\nAn example from the set looks as follows:"
] |
ee8774c4c8a9c7812856f14bdefecab8fe1576d3 |
### Abstract
Social tagging of movies reveals a wide range of heterogeneous information about movies, like the genre, plot structure, soundtracks, metadata, visual and emotional experiences. Such information can be valuable in building automatic systems to create tags for movies. Automatic tagging systems can help recommendation engines to improve the retrieval of similar movies as well as help viewers to know what to expect from a movie in advance. In this paper, we set out to the task of collecting a corpus of movie plot synopses and tags. We describe a methodology that enabled us to build a fine-grained set of around 70 tags exposing heterogeneous characteristics of movie plots and the multi-label associations of these tags with some 14K movie plot synopses. We investigate how these tags correlate with movies and the flow of emotions throughout different types of movies. Finally, we use this corpus to explore the feasibility of inferring tags from plot synopses. We expect the corpus will be useful in other tasks where analysis of narratives is relevant.
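As a rough illustration of the tag-inference task described in the abstract (this is not the authors' model), a minimal multi-label baseline over (synopsis, tags) pairs might look like the sketch below; the two toy examples are invented.

```python
# Minimal multi-label baseline sketch: TF-IDF features with a
# one-vs-rest logistic regression, i.e. one binary classifier per tag.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

synopses = [
    "A detective hunts a serial killer through a rainy city.",
    "Two friends go on a comedic road trip across the country.",
]
tags = [["murder", "neo noir"], ["comedy", "cult"]]

mlb = MultiLabelBinarizer()
y = mlb.fit_transform(tags)              # binary indicator matrix, one column per tag
X = TfidfVectorizer().fit_transform(synopses)

clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, y)
print(mlb.inverse_transform(clf.predict(X)))
```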
### Content
This dataset was first published in LREC 2018 at Miyazaki, Japan.
Please find the paper here: *MPST: A Corpus of Movie Plot Synopses with Tags* (LREC 2018).
Later, this dataset was enriched with user reviews in *Multi-view Story Characterization from Movie Plot Synopses and Reviews*, published at EMNLP 2020.
### Keywords
Tag generation for movies, Movie plot analysis, Multi-label dataset, Narrative texts
More information is available here: http://ritual.uh.edu/mpst-2018/
Please cite the following papers if you use this dataset:
```
@InProceedings{KAR18.332,
author = {Sudipta Kar and Suraj Maharjan and A. Pastor López-Monroy and Thamar Solorio},
title = {{MPST}: A Corpus of Movie Plot Synopses with Tags},
booktitle = {Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)},
year = {2018},
month = {May},
date = {7-12},
location = {Miyazaki, Japan},
editor = {Nicoletta Calzolari (Conference chair) and Khalid Choukri and Christopher Cieri and Thierry Declerck and Sara Goggi and Koiti Hasida and Hitoshi Isahara and Bente Maegaard and Joseph Mariani and Hélène Mazo and Asuncion Moreno and Jan Odijk and Stelios Piperidis and Takenobu Tokunaga},
publisher = {European Language Resources Association (ELRA)},
address = {Paris, France},
isbn = {979-10-95546-00-9},
language = {english}
}
```
```
@inproceedings{kar-etal-2020-multi,
title = "Multi-view Story Characterization from Movie Plot Synopses and Reviews",
author = "Kar, Sudipta and
Aguilar, Gustavo and
Lapata, Mirella and
Solorio, Thamar",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.emnlp-main.454",
doi = "10.18653/v1/2020.emnlp-main.454",
pages = "5629--5646",
abstract = "This paper considers the problem of characterizing stories by inferring properties such as theme and style using written synopses and reviews of movies. We experiment with a multi-label dataset of movie synopses and a tagset representing various attributes of stories (e.g., genre, type of events). Our proposed multi-view model encodes the synopses and reviews using hierarchical attention and shows improvement over methods that only use synopses. Finally, we demonstrate how we can take advantage of such a model to extract a complementary set of story-attributes from reviews without direct supervision. We have made our dataset and source code publicly available at https://ritual.uh.edu/multiview-tag-2020.",
}
```
| cryptexcode/MPST | [
"license:cc-by-4.0",
"region:us"
] | 2022-09-03T17:44:29+00:00 | {"license": "cc-by-4.0"} | 2022-09-03T19:43:00+00:00 | [] | [] | TAGS
#license-cc-by-4.0 #region-us
|
### Abstract
Social tagging of movies reveals a wide range of heterogeneous information about movies, like the genre, plot structure, soundtracks, metadata, visual and emotional experiences. Such information can be valuable in building automatic systems to create tags for movies. Automatic tagging systems can help recommendation engines to improve the retrieval of similar movies as well as help viewers to know what to expect from a movie in advance. In this paper, we set out to the task of collecting a corpus of movie plot synopses and tags. We describe a methodology that enabled us to build a fine-grained set of around 70 tags exposing heterogeneous characteristics of movie plots and the multi-label associations of these tags with some 14K movie plot synopses. We investigate how these tags correlate with movies and the flow of emotions throughout different types of movies. Finally, we use this corpus to explore the feasibility of inferring tags from plot synopses. We expect the corpus will be useful in other tasks where analysis of narratives is relevant.
### Content
This dataset was first published in LREC 2018 at Miyazaki, Japan.
Please find the paper here:
!MPST: A Corpus of Movie Plot Synopses with Tags
Later, this dataset was enriched with user reviews. The paper is available here:
!Multi-view Story Characterization from Movie Plot Synopses and Reviews
This dataset was published in EMNLP 2020.
### Keywords
Tag generation for movies, Movie plot analysis, Multi-label dataset, Narrative texts
More information is available here
URL
Please cite the following papers if you use this dataset:
| [
"### Abstract \nSocial tagging of movies reveals a wide range of heterogeneous information about movies, like the genre, plot structure, soundtracks, metadata, visual and emotional experiences. Such information can be valuable in building automatic systems to create tags for movies. Automatic tagging systems can help recommendation engines to improve the retrieval of similar movies as well as help viewers to know what to expect from a movie in advance. In this paper, we set out to the task of collecting a corpus of movie plot synopses and tags. We describe a methodology that enabled us to build a fine-grained set of around 70 tags exposing heterogeneous characteristics of movie plots and the multi-label associations of these tags with some 14K movie plot synopses. We investigate how these tags correlate with movies and the flow of emotions throughout different types of movies. Finally, we use this corpus to explore the feasibility of inferring tags from plot synopses. We expect the corpus will be useful in other tasks where analysis of narratives is relevant.",
"### Content\nThis dataset was first published in LREC 2018 at Miyazaki, Japan.\nPlease find the paper here:\n!MPST: A Corpus of Movie Plot Synopses with Tags\n\n\n\nLater, this dataset was enriched with user reviews. The paper is available here:\n!Multi-view Story Characterization from Movie Plot Synopses and Reviews\nThis dataset was published in EMNLP 2020.",
"### Keywords\nTag generation for movies, Movie plot analysis, Multi-label dataset, Narrative texts\n\nMore information is available here\nURL\n\nPlease cite the following papers if you use this dataset:"
] | [
"TAGS\n#license-cc-by-4.0 #region-us \n",
"### Abstract \nSocial tagging of movies reveals a wide range of heterogeneous information about movies, like the genre, plot structure, soundtracks, metadata, visual and emotional experiences. Such information can be valuable in building automatic systems to create tags for movies. Automatic tagging systems can help recommendation engines to improve the retrieval of similar movies as well as help viewers to know what to expect from a movie in advance. In this paper, we set out to the task of collecting a corpus of movie plot synopses and tags. We describe a methodology that enabled us to build a fine-grained set of around 70 tags exposing heterogeneous characteristics of movie plots and the multi-label associations of these tags with some 14K movie plot synopses. We investigate how these tags correlate with movies and the flow of emotions throughout different types of movies. Finally, we use this corpus to explore the feasibility of inferring tags from plot synopses. We expect the corpus will be useful in other tasks where analysis of narratives is relevant.",
"### Content\nThis dataset was first published in LREC 2018 at Miyazaki, Japan.\nPlease find the paper here:\n!MPST: A Corpus of Movie Plot Synopses with Tags\n\n\n\nLater, this dataset was enriched with user reviews. The paper is available here:\n!Multi-view Story Characterization from Movie Plot Synopses and Reviews\nThis dataset was published in EMNLP 2020.",
"### Keywords\nTag generation for movies, Movie plot analysis, Multi-label dataset, Narrative texts\n\nMore information is available here\nURL\n\nPlease cite the following papers if you use this dataset:"
] | [
15,
229,
88,
43
] | [
"passage: TAGS\n#license-cc-by-4.0 #region-us \n### Abstract \nSocial tagging of movies reveals a wide range of heterogeneous information about movies, like the genre, plot structure, soundtracks, metadata, visual and emotional experiences. Such information can be valuable in building automatic systems to create tags for movies. Automatic tagging systems can help recommendation engines to improve the retrieval of similar movies as well as help viewers to know what to expect from a movie in advance. In this paper, we set out to the task of collecting a corpus of movie plot synopses and tags. We describe a methodology that enabled us to build a fine-grained set of around 70 tags exposing heterogeneous characteristics of movie plots and the multi-label associations of these tags with some 14K movie plot synopses. We investigate how these tags correlate with movies and the flow of emotions throughout different types of movies. Finally, we use this corpus to explore the feasibility of inferring tags from plot synopses. We expect the corpus will be useful in other tasks where analysis of narratives is relevant.### Content\nThis dataset was first published in LREC 2018 at Miyazaki, Japan.\nPlease find the paper here:\n!MPST: A Corpus of Movie Plot Synopses with Tags\n\n\n\nLater, this dataset was enriched with user reviews. The paper is available here:\n!Multi-view Story Characterization from Movie Plot Synopses and Reviews\nThis dataset was published in EMNLP 2020.### Keywords\nTag generation for movies, Movie plot analysis, Multi-label dataset, Narrative texts\n\nMore information is available here\nURL\n\nPlease cite the following papers if you use this dataset:"
] |
cc63b218e0ec1fd354b4c094d3dc7be65e1a858a | # Dataset Card for "twowaydata"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | marcus2000/twowaydata | [
"region:us"
] | 2022-09-03T21:01:38+00:00 | {"dataset_info": {"features": [{"name": "0", "dtype": "string"}, {"name": "1", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 26052853, "num_examples": 33014}, {"name": "validation", "num_bytes": 3144818, "num_examples": 4000}, {"name": "test", "num_bytes": 3374221, "num_examples": 4254}], "download_size": 14113023, "dataset_size": 32571892}} | 2023-02-23T19:13:45+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "twowaydata"
More Information needed | [
"# Dataset Card for \"twowaydata\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"twowaydata\"\n\nMore Information needed"
] | [
6,
13
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"twowaydata\"\n\nMore Information needed"
] |
0adcd5a08b689305c0dae8cf2c75c0bce419072a | # Dataset Card for LibriVox Indonesia 1.0
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia
- **Repository:** https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia
- **Point of Contact:** [Cahya Wirawan](mailto:[email protected])
### Dataset Summary
The LibriVox Indonesia dataset consists of MP3 audio and a corresponding text file we generated from the public
domain audiobooks [LibriVox](https://librivox.org/). We collected only languages in Indonesia for this dataset.
The original LibriVox audiobooks or sound files' duration varies from a few minutes to a few hours. Each audio
file in the speech dataset now lasts from a few seconds to a maximum of 20 seconds.
We converted the audiobooks to speech datasets using the forced alignment software we developed. It supports
multilingual alignment, including low-resource languages such as Acehnese, Balinese, or Minangkabau, and it can be
used for other languages without additional work to train the model.
The dataset currently consists of 8 hours of audio in 7 languages from Indonesia. We will add more languages or audio
files as we collect them.
### Languages
```
Acehnese, Balinese, Bugisnese, Indonesian, Minangkabau, Javanese, Sundanese
```
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`. Additional fields include
`reader` and `language`.
```python
{
'path': 'librivox-indonesia/sundanese/universal-declaration-of-human-rights/human_rights_un_sun_brc_0000.mp3',
'language': 'sun',
'reader': '3174',
'sentence': 'pernyataan umum ngeunaan hak hak asasi manusa sakabeh manusa',
'audio': {
'path': 'librivox-indonesia/sundanese/universal-declaration-of-human-rights/human_rights_un_sun_brc_0000.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 44100
},
}
```
### Data Fields
`path` (`string`): The path to the audio file
`language` (`string`): The language of the audio file
`reader` (`string`): The reader Id in LibriVox
`sentence` (`string`): The sentence the user read from the book.
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
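To make this access pattern concrete, here is a minimal loading sketch using the `datasets` library (audio decoding needs the audio extras, e.g. `pip install datasets[audio]`). It assumes the default configuration covers all languages — pass a language-specific config if needed — and the 16 kHz resampling is shown as an optional, commonly useful step, not something the dataset requires.

```python
# Minimal sketch: load the corpus and decode a single audio sample.
from datasets import load_dataset, Audio

ds = load_dataset("indonesian-nlp/librivox-indonesia", split="train")

# Index the sample first, then the "audio" column, so only this one
# file gets decoded: ds[0]["audio"], not ds["audio"][0].
sample = ds[0]["audio"]
print(sample["sampling_rate"], sample["array"].shape)

# Optional: decode at 16 kHz on the fly (e.g. for wav2vec2-style models).
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
print(ds[0]["audio"]["sampling_rate"])  # -> 16000
```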
### Data Splits
The speech material has only train split.
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
```
| indonesian-nlp/librivox-indonesia | [
"task_categories:automatic-speech-recognition",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:librivox",
"language:ace",
"language:ban",
"language:bug",
"language:ind",
"language:min",
"language:jav",
"language:sun",
"license:cc",
"region:us"
] | 2022-09-03T23:13:16+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["ace", "ban", "bug", "ind", "min", "jav", "sun"], "license": "cc", "multilinguality": ["multilingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["librivox"], "task_categories": ["automatic-speech-recognition"], "pretty_name": "LibriVox Indonesia 1.0"} | 2024-02-01T20:55:53+00:00 | [] | [
"ace",
"ban",
"bug",
"ind",
"min",
"jav",
"sun"
] | TAGS
#task_categories-automatic-speech-recognition #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-multilingual #size_categories-1K<n<10K #source_datasets-librivox #language-Achinese #language-Balinese #language-Buginese #language-Indonesian #language-Minangkabau #language-Javanese #language-Sundanese #license-cc #region-us
| # Dataset Card for LibriVox Indonesia 1.0
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Repository: URL
- Point of Contact: Cahya Wirawan
### Dataset Summary
The LibriVox Indonesia dataset consists of MP3 audio and a corresponding text file we generated from the public
domain audiobooks LibriVox. We collected only languages in Indonesia for this dataset.
The original LibriVox audiobooks or sound files' duration varies from a few minutes to a few hours. Each audio
file in the speech dataset now lasts from a few seconds to a maximum of 20 seconds.
We converted the audiobooks to speech datasets using the forced alignment software we developed. It supports
multilingual, including low-resource languages, such as Acehnese, Balinese, or Minangkabau. We can also use it
for other languages without additional work to train the model.
The dataset currently consists of 8 hours in 7 languages from Indonesia. We will add more languages or audio files
as we collect them.
### Languages
## Dataset Structure
### Data Instances
A typical data point comprises the 'path' to the audio file and its 'sentence'. Additional fields include
'reader' and 'language'.
### Data Fields
'path' ('string'): The path to the audio file
'language' ('string'): The language of the audio file
'reader' ('string'): The reader Id in LibriVox
'sentence' ('string'): The sentence the user read from the book.
'audio' ('dict'): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0]["audio"]' the audio file is automatically decoded and resampled to 'dataset.features["audio"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '"audio"' column, *i.e.* 'dataset[0]["audio"]' should always be preferred over 'dataset["audio"][0]'.
### Data Splits
The speech material has only train split.
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
Public Domain, CC-0
| [
"# Dataset Card for LibriVox Indonesia 1.0",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n- Homepage: URL\n- Repository: URL\n- Point of Contact: Cahya Wirawan",
"### Dataset Summary\nThe LibriVox Indonesia dataset consists of MP3 audio and a corresponding text file we generated from the public \ndomain audiobooks LibriVox. We collected only languages in Indonesia for this dataset. \nThe original LibriVox audiobooks or sound files' duration varies from a few minutes to a few hours. Each audio \nfile in the speech dataset now lasts from a few seconds to a maximum of 20 seconds. \n\nWe converted the audiobooks to speech datasets using the forced alignment software we developed. It supports \nmultilingual, including low-resource languages, such as Acehnese, Balinese, or Minangkabau. We can also use it \nfor other languages without additional work to train the model.\n\nThe dataset currently consists of 8 hours in 7 languages from Indonesia. We will add more languages or audio files\nas we collect them.",
"### Languages",
"## Dataset Structure",
"### Data Instances\nA typical data point comprises the 'path' to the audio file and its 'sentence'. Additional fields include \n'reader' and 'language'.",
"### Data Fields\n'path' ('string'): The path to the audio file\n\n'language' ('string'): The language of the audio file\n\n'reader' ('string'): The reader Id in LibriVox\n\n'sentence' ('string'): The sentence the user read from the book.\n\n'audio' ('dict'): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.",
"### Data Splits\nThe speech material has only train split.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\nPublic Domain, CC-0"
] | [
"TAGS\n#task_categories-automatic-speech-recognition #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-multilingual #size_categories-1K<n<10K #source_datasets-librivox #language-Achinese #language-Balinese #language-Buginese #language-Indonesian #language-Minangkabau #language-Javanese #language-Sundanese #license-cc #region-us \n",
"# Dataset Card for LibriVox Indonesia 1.0",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n- Homepage: URL\n- Repository: URL\n- Point of Contact: Cahya Wirawan",
"### Dataset Summary\nThe LibriVox Indonesia dataset consists of MP3 audio and a corresponding text file we generated from the public \ndomain audiobooks LibriVox. We collected only languages in Indonesia for this dataset. \nThe original LibriVox audiobooks or sound files' duration varies from a few minutes to a few hours. Each audio \nfile in the speech dataset now lasts from a few seconds to a maximum of 20 seconds. \n\nWe converted the audiobooks to speech datasets using the forced alignment software we developed. It supports \nmultilingual, including low-resource languages, such as Acehnese, Balinese, or Minangkabau. We can also use it \nfor other languages without additional work to train the model.\n\nThe dataset currently consists of 8 hours in 7 languages from Indonesia. We will add more languages or audio files\nas we collect them.",
"### Languages",
"## Dataset Structure",
"### Data Instances\nA typical data point comprises the 'path' to the audio file and its 'sentence'. Additional fields include \n'reader' and 'language'.",
"### Data Fields\n'path' ('string'): The path to the audio file\n\n'language' ('string'): The language of the audio file\n\n'reader' ('string'): The reader Id in LibriVox\n\n'sentence' ('string'): The sentence the user read from the book.\n\n'audio' ('dict'): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.",
"### Data Splits\nThe speech material has only train split.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\nPublic Domain, CC-0"
] | [
121,
10,
120,
23,
193,
4,
6,
42,
245,
13,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
11
] | [
"passage: TAGS\n#task_categories-automatic-speech-recognition #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-multilingual #size_categories-1K<n<10K #source_datasets-librivox #language-Achinese #language-Balinese #language-Buginese #language-Indonesian #language-Minangkabau #language-Javanese #language-Sundanese #license-cc #region-us \n# Dataset Card for LibriVox Indonesia 1.0## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n- Homepage: URL\n- Repository: URL\n- Point of Contact: Cahya Wirawan### Dataset Summary\nThe LibriVox Indonesia dataset consists of MP3 audio and a corresponding text file we generated from the public \ndomain audiobooks LibriVox. We collected only languages in Indonesia for this dataset. \nThe original LibriVox audiobooks or sound files' duration varies from a few minutes to a few hours. Each audio \nfile in the speech dataset now lasts from a few seconds to a maximum of 20 seconds. \n\nWe converted the audiobooks to speech datasets using the forced alignment software we developed. It supports \nmultilingual, including low-resource languages, such as Acehnese, Balinese, or Minangkabau. We can also use it \nfor other languages without additional work to train the model.\n\nThe dataset currently consists of 8 hours in 7 languages from Indonesia. We will add more languages or audio files\nas we collect them.### Languages## Dataset Structure"
] |
01747f9e3b36fb579319d40898936edcd1a2a6af | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model. | autoevaluate/autoeval-staging-eval-cnn_dailymail-3.0.0-76e071-15436137 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-03T23:20:39+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum", "metrics": ["mae", "mse", "rouge", "squad"], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "train", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-09-04T19:49:44+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: train
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @samuelallen123 for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @samuelallen123 for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @samuelallen123 for evaluating this model."
] | [
13,
104,
18
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @samuelallen123 for evaluating this model."
] |
c5eeea30aae0f63dcdad307f32e4009865949f14 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model. | autoevaluate/autoeval-staging-eval-cnn_dailymail-3.0.0-fd18e2-15446138 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-03T23:20:42+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum", "metrics": ["mae", "mse", "rouge", "squad"], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "train", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-09-04T19:46:25+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: train
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @samuelallen123 for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @samuelallen123 for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @samuelallen123 for evaluating this model."
] | [
13,
104,
18
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @samuelallen123 for evaluating this model."
] |
31825c0782fc7a127974c4b9bbdbc9a94a76fbdc | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model. | autoevaluate/autoeval-staging-eval-cnn_dailymail-3.0.0-8aef96-15456139 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-03T23:49:49+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum", "metrics": ["mae", "mse", "rouge", "squad"], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "train", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-09-04T20:11:30+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: train
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @samuelallen123 for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @samuelallen123 for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @samuelallen123 for evaluating this model."
] | [
13,
104,
18
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @samuelallen123 for evaluating this model."
] |
0d2ac8812872b678eb58191d0bf31a5d291c3759 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model. | autoevaluate/autoeval-staging-eval-cnn_dailymail-3.0.0-25032a-15466140 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-03T23:49:53+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum", "metrics": ["mae", "mse", "rouge", "squad"], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "train", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-09-04T20:07:41+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: train
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @samuelallen123 for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @samuelallen123 for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @samuelallen123 for evaluating this model."
] | [
13,
104,
18
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @samuelallen123 for evaluating this model."
] |
72c2361371b0b7483028f438a82af75b3554d689 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
* Dataset: samsum
* Config: samsum
* Split: train
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model. | autoevaluate/autoeval-staging-eval-samsum-samsum-096051-15476141 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-04T00:36:56+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "train", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-09-04T01:25:02+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
* Dataset: samsum
* Config: samsum
* Split: train
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @samuelallen123 for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum\n* Dataset: samsum\n* Config: samsum\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @samuelallen123 for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum\n* Dataset: samsum\n* Config: samsum\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @samuelallen123 for evaluating this model."
] | [
13,
101,
18
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum\n* Dataset: samsum\n* Config: samsum\n* Split: train\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @samuelallen123 for evaluating this model."
] |
ad3dd0050b0c4d75e84eeaad39020c9499a4c0ce | This is a resume sentence classification dataset constructed based on resume text.(https://www.kaggle.com/datasets/oo7kartik/resume-text-batch)
The dataset have five category.(experience education knowledge project others ) And three element label(header content meta).
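To illustrate, the two label sets can be crossed into a single tag set for sentence classification, giving 5 × 3 = 15 combined labels. The sketch below only builds the label mapping, since the card does not document the file schema.

```python
# Minimal sketch: cross the five categories with the three element labels
# to get 15 combined sentence labels such as "experience-header".
from itertools import product

categories = ["experience", "education", "knowledge", "project", "others"]
elements = ["header", "content", "meta"]

combined = [f"{c}-{e}" for c, e in product(categories, elements)]
label2id = {label: i for i, label in enumerate(combined)}

print(len(combined))                   # -> 15
print(label2id["experience-header"])   # -> 0
```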
Because the dataset comes from a published paper, please cite the BibTeX below if you want to use it in a paper or other work.
@article{甘程光2021英文履歴書データ抽出システムへの,
title={英文履歴書データ抽出システムへの BERT 適用性の検討},
author={甘程光 and 高橋良英 and others},
journal={2021 年度 情報処理学会関西支部 支部大会 講演論文集},
volume={2021},
year={2021}
} | ganchengguang/resume-5label-classification | [
"region:us"
] | 2022-09-04T01:37:54+00:00 | {} | 2022-09-04T01:53:22+00:00 | [] | [] | TAGS
#region-us
| This is a resume sentence classification dataset constructed based on resume text.(URL)
The dataset have five category.(experience education knowledge project others ) And three element label(header content meta).
Because the dataset is a published paper, if you want to use this dataset in a paper or work, please cite BibTex.
@article{甘程光2021英文履歴書データ抽出システムへの,
title={英文履歴書データ抽出システムへの BERT 適用性の検討},
author={甘程光 and 高橋良英 and others},
journal={2021 年度 情報処理学会関西支部 支部大会 講演論文集},
volume={2021},
year={2021}
} | [] | [
"TAGS\n#region-us \n"
] | [
6
] | [
"passage: TAGS\n#region-us \n"
] |
f119500feb836ba3656b0fb9aa6b5291f53c92e9 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
* Dataset: xsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model. | autoevaluate/autoeval-staging-eval-xsum-default-a80438-15496142 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-04T01:39:11+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["xsum"], "eval_info": {"task": "summarization", "model": "SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum", "metrics": [], "dataset_name": "xsum", "dataset_config": "default", "dataset_split": "test", "col_mapping": {"text": "document", "target": "summary"}}} | 2022-09-04T02:28:51+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
* Dataset: xsum
* Config: default
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @samuelallen123 for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum\n* Dataset: xsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @samuelallen123 for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum\n* Dataset: xsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @samuelallen123 for evaluating this model."
] | [
13,
100,
18
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum\n* Dataset: xsum\n* Config: default\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @samuelallen123 for evaluating this model."
] |
f7f6abf17cdb0a878c12cc9bca448a2cb710357f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model. | autoevaluate/autoeval-staging-eval-cnn_dailymail-3.0.0-01441a-15506143 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-04T01:39:17+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum", "metrics": [], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-09-04T02:30:03+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @samuelallen123 for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @samuelallen123 for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @samuelallen123 for evaluating this model."
] | [
13,
104,
18
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @samuelallen123 for evaluating this model."
] |
d8da37c6401feb23c939245046f08ea4b1ad4f94 |
# Dataset Card for lener_br_text_to_lm
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The LeNER-Br language modeling dataset is a collection of legal texts
in Portuguese from the LeNER-Br dataset (https://cic.unb.br/~teodecampos/LeNER-Br/).
The legal texts were obtained from the original token classification Hugging Face
LeNER-Br dataset (https://huggingface.co/datasets/lener_br) and processed to create
a DatasetDict with train and validation splits (20%).
The LeNER-Br language modeling dataset allows the fine-tuning of language models
such as BERTimbau base and large.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
```
DatasetDict({
train: Dataset({
features: ['text'],
num_rows: 8316
})
test: Dataset({
features: ['text'],
num_rows: 2079
})
})
```
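As a quick-start sketch, the splits above can be loaded and tokenized for masked-language modeling. The `neuralmind/bert-base-portuguese-cased` checkpoint below is the usual BERTimbau base model; treating it as the intended fine-tuning target is an assumption.

```python
# Sketch: load the dataset and tokenize its 'text' column for MLM fine-tuning.
from datasets import load_dataset
from transformers import AutoTokenizer

ds = load_dataset("Luciano/lener_br_text_to_lm")

# BERTimbau base checkpoint (assumed fine-tuning target).
tokenizer = AutoTokenizer.from_pretrained("neuralmind/bert-base-portuguese-cased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = ds.map(tokenize, batched=True, remove_columns=["text"])
print(tokenized)
```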
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset. | Luciano/lener_br_text_to_lm | [
"task_categories:fill-mask",
"task_categories:text-generation",
"task_ids:masked-language-modeling",
"task_ids:language-modeling",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:pt",
"region:us"
] | 2022-09-04T09:36:21+00:00 | {"annotations_creators": [], "language_creators": [], "language": ["pt"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": [], "task_categories": ["fill-mask", "text-generation"], "task_ids": ["masked-language-modeling", "language-modeling"], "pretty_name": "The LeNER-Br language modeling dataset is a collection of legal texts in Portuguese from the LeNER-Br dataset (https://cic.unb.br/~teodecampos/LeNER-Br/).\n\nThe legal texts were obtained from the original token classification Hugging Face LeNER-Br dataset (https://huggingface.co/datasets/lener_br) and processed to create a DatasetDict with train and validation dataset (20%).\n\nThe LeNER-Br language modeling dataset allows the finetuning of language models as BERTimbau base and large.", "tags": []} | 2022-09-04T10:32:31+00:00 | [] | [
"pt"
] | TAGS
#task_categories-fill-mask #task_categories-text-generation #task_ids-masked-language-modeling #task_ids-language-modeling #multilinguality-monolingual #size_categories-10K<n<100K #language-Portuguese #region-us
|
# Dataset Card for lener_br_text_to_lm
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
The LeNER-Br language modeling dataset is a collection of legal texts
in Portuguese from the LeNER-Br dataset (URL
The legal texts were obtained from the original token classification Hugging Face
LeNER-Br dataset (URL and processed to create
a DatasetDict with train and validation splits (20%).
The LeNER-Br language modeling dataset allows the fine-tuning of language models
such as BERTimbau base and large.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @github-username for adding this dataset. | [
"# Dataset Card for lener_br_text_to_lm",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nThe LeNER-Br language modeling dataset is a collection of legal texts\n in Portuguese from the LeNER-Br dataset (URL\n\n\n The legal texts were obtained from the original token classification Hugging Face\n LeNER-Br dataset (URL and processed to create\n a DatasetDict with train and validation dataset (20%).\n\n\n The LeNER-Br language modeling dataset allows the finetuning of language models\n as BERTimbau base and large.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] | [
"TAGS\n#task_categories-fill-mask #task_categories-text-generation #task_ids-masked-language-modeling #task_ids-language-modeling #multilinguality-monolingual #size_categories-10K<n<100K #language-Portuguese #region-us \n",
"# Dataset Card for lener_br_text_to_lm",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nThe LeNER-Br language modeling dataset is a collection of legal texts\n in Portuguese from the LeNER-Br dataset (URL\n\n\n The legal texts were obtained from the original token classification Hugging Face\n LeNER-Br dataset (URL and processed to create\n a DatasetDict with train and validation dataset (20%).\n\n\n The LeNER-Br language modeling dataset allows the finetuning of language models\n as BERTimbau base and large.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] | [
77,
15,
125,
24,
112,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
19
] | [
"passage: TAGS\n#task_categories-fill-mask #task_categories-text-generation #task_ids-masked-language-modeling #task_ids-language-modeling #multilinguality-monolingual #size_categories-10K<n<100K #language-Portuguese #region-us \n# Dataset Card for lener_br_text_to_lm## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:### Dataset Summary\n\nThe LeNER-Br language modeling dataset is a collection of legal texts\n in Portuguese from the LeNER-Br dataset (URL\n\n\n The legal texts were obtained from the original token classification Hugging Face\n LeNER-Br dataset (URL and processed to create\n a DatasetDict with train and validation dataset (20%).\n\n\n The LeNER-Br language modeling dataset allows the finetuning of language models\n as BERTimbau base and large.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information"
] |
2636f596c4acb3c8832f51a7048f02b117226453 | # Dataset Card for MetaQA Agents' Predictions
## Dataset Description
- **Repository:** [MetaQA's Repository](https://github.com/UKPLab/MetaQA)
- **Paper:** [MetaQA: Combining Expert Agents for Multi-Skill Question Answering](https://arxiv.org/abs/2112.01922)
- **Point of Contact:** [Haritz Puerto](mailto:[email protected])
## Dataset Summary
This dataset contains the answer predictions of the QA agents for the [QA datasets](https://huggingface.co/datasets/haritzpuerto/MetaQA_Datasets) used in the [MetaQA paper](https://arxiv.org/abs/2112.01922). In particular, it contains the following QA agents' predictions:
### Span-Extraction Agents
- Agent: Span-BERT Large (Joshi et al.,2020) trained on SQuAD. Predictions for:
- SQuAD
- NewsQA
- HotpotQA
- SearchQA
- Natural Questions
- TriviaQA-web
- QAMR
- DuoRC
- DROP
- Agent: Span-BERT Large (Joshi et al.,2020) trained on NewsQA. Predictions for:
- SQuAD
- NewsQA
- HotpotQA
- SearchQA
- Natural Questions
- TriviaQA-web
- QAMR
- DuoRC
- DROP
- Agent: Span-BERT Large (Joshi et al.,2020) trained on HotpotQA. Predictions for:
- SQuAD
- NewsQA
- HotpotQA
- SearchQA
- Natural Questions
- TriviaQA-web
- QAMR
- DuoRC
- DROP
- Agent: Span-BERT Large (Joshi et al.,2020) trained on SearchQA. Predictions for:
- SQuAD
- NewsQA
- HotpotQA
- SearchQA
- Natural Questions
- TriviaQA-web
- QAMR
- DuoRC
- DROP
- Agent: Span-BERT Large (Joshi et al.,2020) trained on Natural Questions. Predictions for:
- SQuAD
- NewsQA
- HotpotQA
- SearchQA
- Natural Questions
- TriviaQA-web
- QAMR
- DuoRC
- DROP
- Agent: Span-BERT Large (Joshi et al.,2020) trained on TriviaQA-web. Predictions for:
- SQuAD
- NewsQA
- HotpotQA
- SearchQA
- Natural Questions
- TriviaQA-web
- QAMR
- DuoRC
- DROP
- Agent: Span-BERT Large (Joshi et al.,2020) trained on QAMR. Predictions for:
- SQuAD
- NewsQA
- HotpotQA
- SearchQA
- Natural Questions
- TriviaQA-web
- QAMR
- DuoRC
- DROP
- Agent: Span-BERT Large (Joshi et al.,2020) trained on DuoRC. Predictions for:
- SQuAD
- NewsQA
- HotpotQA
- SearchQA
- Natural Questions
- TriviaQA-web
- QAMR
- DuoRC
- DROP
- Agent: Span-BERT Large (Joshi et al.,2020) trained on DROP. Predictions for:
- SQuAD
- NewsQA
- HotpotQA
- SearchQA
- Natural Questions
- TriviaQA-web
- QAMR
- DuoRC
- DROP
### Multiple-Choice Agents
- Agent: RoBERTa Large (Liu et al., 2019) trained on RACE. Predictions for:
- RACE
- Commonsense QA
- BoolQ
- HellaSWAG
- Social IQA
- Agent: RoBERTa Large (Liu et al., 2019) trained on HellaSWAG. Predictions for:
- RACE
- Commonsense QA
- BoolQ
- HellaSWAG
- Social IQA
- Agent: AlBERT xxlarge-v2 (Lan et al., 2020) trained on Commonsense QA. Predictions for:
- RACE
- Commonsense QA
- BoolQ
- HellaSWAG
- Social IQA
- Agent: BERT Large-wwm (Devlin et al., 2019) trained on BoolQ. Predictions for:
- BoolQ
### Abstractive Agents
- Agent: TASE (Segal et al., 2020) trained on DROP. Predictions for:
- DROP
- Agent: BART Large with Adapters (Pfeiffer et al., 2020) trained on NarrativeQA. Predictions for:
- NarrativeQA
### Multimodal Agents
- Agent: Hybrider (Chen et al., 2020) trained on HybridQA. Predictions for:
- HybridQA
### Languages
All the QA datasets are in English, and thus the agents' predictions are also in English.
## Dataset Structure
Each agent has a folder. Inside, there is a folder for each dataset containing the following files:
- predict_nbest_predictions.json
- predict_predictions.json / predictions.json
- predict_results.json (for span-extraction agents)
### Structure of predict_nbest_predictions.json
```
{id: [{"start_logit": ...,
"end_logit": ...,
"text": ...,
"probability": ... }]}
```
### Structure of predict_predictions.json
```
{id: answer_text}
```
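A small sketch of consuming these files is given below; the folder names are illustrative placeholders, so substitute the agent/dataset folders you actually need.

```python
# Sketch: read one agent's prediction files and pick its top-ranked answer.
import json

# Illustrative paths -- replace with the actual agent/dataset folder names.
with open("spanbert_squad/squad/predict_nbest_predictions.json") as f:
    nbest = json.load(f)   # {id: [{"text": ..., "probability": ...}, ...]}

with open("spanbert_squad/squad/predict_predictions.json") as f:
    best = json.load(f)    # {id: answer_text}

qid = next(iter(nbest))
top = max(nbest[qid], key=lambda cand: cand["probability"])
print(qid, "->", top["text"], f"(p={top['probability']:.3f})")
print("single-best file agrees:", best[qid] == top["text"])
```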
### Data Splits
All the QA datasets have 3 splits: train, validation, and test. The splits (Question-Context pairs) are provided in https://huggingface.co/datasets/haritzpuerto/MetaQA_Datasets
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop new multi-agent models and analyze the predictions of QA models.
### Discussion of Biases
The QA models used to create these predictions may not be perfect; they may generate false data and contain biases. The release of these predictions may help to identify such flaws in the models.
## Additional Information
### License
This version of the MetaQA agents' predictions dataset is released under the [Apache-2.0 License](http://www.apache.org/licenses/LICENSE-2.0).
### Citation
```
@article{Puerto2021MetaQACE,
title={MetaQA: Combining Expert Agents for Multi-Skill Question Answering},
author={Haritz Puerto and G{\"o}zde G{\"u}l {\c{S}}ahin and Iryna Gurevych},
journal={ArXiv},
year={2021},
volume={abs/2112.01922}
}
``` | haritzpuerto/MetaQA_Agents_Predictions | [
"task_categories:question-answering",
"multilinguality:monolingual",
"source_datasets:mrqa",
"source_datasets:duorc",
"source_datasets:qamr",
"source_datasets:boolq",
"source_datasets:commonsense_qa",
"source_datasets:hellaswag",
"source_datasets:social_i_qa",
"source_datasets:narrativeqa",
"language:en",
"license:apache-2.0",
"multi-agent question answering",
"multi-agent QA",
"predictions",
"arxiv:2112.01922",
"region:us"
] | 2022-09-04T14:50:38+00:00 | {"annotations_creators": [], "language_creators": [], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": [], "source_datasets": ["mrqa", "duorc", "qamr", "boolq", "commonsense_qa", "hellaswag", "social_i_qa", "narrativeqa"], "task_categories": ["question-answering"], "task_ids": [], "paperswithcode_id": "metaqa-combining-expert-agents-for-multi", "pretty_name": "MetaQA Agents' Predictions", "tags": ["multi-agent question answering", "multi-agent QA", "predictions"]} | 2022-09-04T19:16:51+00:00 | [
"2112.01922"
] | [
"en"
] | TAGS
#task_categories-question-answering #multilinguality-monolingual #source_datasets-mrqa #source_datasets-duorc #source_datasets-qamr #source_datasets-boolq #source_datasets-commonsense_qa #source_datasets-hellaswag #source_datasets-social_i_qa #source_datasets-narrativeqa #language-English #license-apache-2.0 #multi-agent question answering #multi-agent QA #predictions #arxiv-2112.01922 #region-us
| # Dataset Card for MetaQA Agents' Predictions
## Dataset Description
- Repository: MetaQA's Repository
- Paper: MetaQA: Combining Expert Agents for Multi-Skill Question Answering
- Point of Contact: Haritz Puerto
## Dataset Summary
This dataset contains the answer predictions of the QA agents for the QA datasets used in the MetaQA paper. In particular, it contains the following QA agents' predictions:
### Span-Extraction Agents
- Agent: Span-BERT Large (Joshi et al.,2020) trained on SQuAD. Predictions for:
- SQuAD
- NewsQA
- HotpotQA
- SearchQA
- Natural Questions
- TriviaQA-web
- QAMR
- DuoRC
- DROP
- Agent: Span-BERT Large (Joshi et al.,2020) trained on NewsQA. Predictions for:
- SQuAD
- NewsQA
- HotpotQA
- SearchQA
- Natural Questions
- TriviaQA-web
- QAMR
- DuoRC
- DROP
- Agent: Span-BERT Large (Joshi et al.,2020) trained on HotpotQA. Predictions for:
- SQuAD
- NewsQA
- HotpotQA
- SearchQA
- Natural Questions
- TriviaQA-web
- QAMR
- DuoRC
- DROP
- Agent: Span-BERT Large (Joshi et al.,2020) trained on SearchQA. Predictions for:
- SQuAD
- NewsQA
- HotpotQA
- SearchQA
- Natural Questions
- TriviaQA-web
- QAMR
- DuoRC
- DROP
- Agent: Span-BERT Large (Joshi et al.,2020) trained on Natural Questions. Predictions for:
- SQuAD
- NewsQA
- HotpotQA
- SearchQA
- Natural Questions
- TriviaQA-web
- QAMR
- DuoRC
- DROP
- Agent: Span-BERT Large (Joshi et al.,2020) trained on TriviaQA-web. Predictions for:
- SQuAD
- NewsQA
- HotpotQA
- SearchQA
- Natural Questions
- TriviaQA-web
- QAMR
- DuoRC
- DROP
- Agent: Span-BERT Large (Joshi et al.,2020) trained on QAMR. Predictions for:
- SQuAD
- NewsQA
- HotpotQA
- SearchQA
- Natural Questions
- TriviaQA-web
- QAMR
- DuoRC
- DROP
- Agent: Span-BERT Large (Joshi et al.,2020) trained on DuoRC. Predictions for:
- SQuAD
- NewsQA
- HotpotQA
- SearchQA
- Natural Questions
- TriviaQA-web
- QAMR
- DuoRC
- DROP
- Agent: Span-BERT Large (Joshi et al.,2020) trained on DROP. Predictions for:
- SQuAD
- NewsQA
- HotpotQA
- SearchQA
- Natural Questions
- TriviaQA-web
- QAMR
- DuoRC
- DROP
### Multiple-Choice Agents
- Agent: RoBERTa Large (Liu et al., 2019) trained on RACE. Predictions for:
- RACE
- Commonsense QA
- BoolQ
- HellaSWAG
- Social IQA
- Agent: RoBERTa Large (Liu et al., 2019) trained on HellaSWAG. Predictions for:
- RACE
- Commonsense QA
- BoolQ
- HellaSWAG
- Social IQA
- Agent: AlBERT xxlarge-v2 (Lan et al., 2020) trained on Commonsense QA. Predictions for:
- RACE
- Commonsense QA
- BoolQ
- HellaSWAG
- Social IQA
- Agent: BERT Large-wwm (Devlin et al., 2019) trained on BoolQ. Predictions for:
- BoolQ
### Abstractive Agents
- Agent: TASE (Segal et al., 2020) trained on DROP. Predictions for:
- DROP
- Agent: BART Large with Adapters (Pfeiffer et al., 2020) trained on NarrativeQA. Predictions for:
- NarrativeQA
### Multimodal Agents
- Agent: Hybrider (Chen et al., 2020) trained on HybridQA. Predictions for:
- HybridQA
### Languages
All the QA datasets are in English, and thus the agents' predictions are also in English.
## Dataset Structure
Each agent has a folder. Inside, there is a folder for each dataset containing the following files:
- predict_nbest_predictions.json
- predict_predictions.json / URL
- predict_results.json (for span-extraction agents)
### Structure of predict_nbest_predictions.json
### Structure of predict_predictions.json
### Data Splits
All the QA datasets have 3 splits: train, validation, and test. The splits (Question-Context pairs) are provided in URL
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop new multi-agent models and analyze the predictions of QA models.
### Discussion of Biases
The QA models used to create these predictions may not be perfect; they may generate false data and contain biases. The release of these predictions may help to identify such flaws in the models.
## Additional Information
### License
This version of the MetaQA agents' predictions dataset is released under the Apache-2.0 License.
| [
"# Dataset Card for MetaQA Agents' Predictions",
"## Dataset Description\n- Repository: MetaQA's Repository\n- Paper: MetaQA: Combining Expert Agents for Multi-Skill Question Answering\n- Point of Contact: Haritz Puerto",
"## Dataset Summary\nThis dataset contains the answer predictions of the QA agents for the QA datasets used in MetaQA paper. In particular, it contains the following QA agents' predictions:",
"### Span-Extraction Agents\n- Agent: Span-BERT Large (Joshi et al.,2020) trained on SQuAD. Predictions for:\n - SQuAD\n - NewsQA\n - HotpotQA\n - SearchQA\n - Natural Questions\n - TriviaQA-web\n - QAMR\n - DuoRC\n - DROP\n- Agent: Span-BERT Large (Joshi et al.,2020) trained on NewsQA. Predictions for:\n - SQuAD\n - NewsQA\n - HotpotQA\n - SearchQA\n - Natural Questions\n - TriviaQA-web\n - QAMR\n - DuoRC\n - DROP\n- Agent: Span-BERT Large (Joshi et al.,2020) trained on HotpotQA. Predictions for:\n - SQuAD\n - NewsQA\n - HotpotQA\n - SearchQA\n - Natural Questions\n - TriviaQA-web\n - QAMR\n - DuoRC\n - DROP\n- Agent: Span-BERT Large (Joshi et al.,2020) trained on SearchQA. Predictions for:\n - SQuAD\n - NewsQA\n - HotpotQA\n - SearchQA\n - Natural Questions\n - TriviaQA-web\n - QAMR\n - DuoRC\n - DROP\n- Agent: Span-BERT Large (Joshi et al.,2020) trained on Natural Questions. Predictions for:\n - SQuAD\n - NewsQA\n - HotpotQA\n - SearchQA\n - Natural Questions\n - TriviaQA-web\n - QAMR\n - DuoRC\n - DROP\n- Agent: Span-BERT Large (Joshi et al.,2020) trained on TriviaQA-web. Predictions for:\n - SQuAD\n - NewsQA\n - HotpotQA\n - SearchQA\n - Natural Questions\n - TriviaQA-web\n - QAMR\n - DuoRC\n - DROP\n- Agent: Span-BERT Large (Joshi et al.,2020) trained on QAMR. Predictions for:\n - SQuAD\n - NewsQA\n - HotpotQA\n - SearchQA\n - Natural Questions\n - TriviaQA-web\n - QAMR\n - DuoRC\n - DROP\n- Agent: Span-BERT Large (Joshi et al.,2020) trained on DuoRC. Predictions for:\n - SQuAD\n - NewsQA\n - HotpotQA\n - SearchQA\n - Natural Questions\n - TriviaQA-web\n - QAMR\n - DuoRC\n - DROP\n- Agent: Span-BERT Large (Joshi et al.,2020) trained on DROP. Predictions for:\n - SQuAD\n - NewsQA\n - HotpotQA\n - SearchQA\n - Natural Questions\n - TriviaQA-web\n - QAMR\n - DuoRC\n - DROP",
"### Multiple-Choice Agents\n- Agent: RoBERTa Large (Liu et al., 2019) trained on RACE. Predictions for:\n - RACE\n - Commonsense QA\n - BoolQ\n - HellaSWAG\n - Social IQA\n\n- Agent: RoBERTa Large (Liu et al., 2019) trained on HellaSWAG. Predictions for:\n - RACE\n - Commonsense QA\n - BoolQ\n - HellaSWAG\n - Social IQA\n\n- Agent: AlBERT xxlarge-v2 (Lan et al., 2020) trained on Commonsense QA. Predictions for:\n - RACE\n - Commonsense QA\n - BoolQ\n - HellaSWAG\n - Social IQA\n\n- Agent: BERT Large-wwm (Devlin et al., 2019) trained on BoolQ. Predictions for:\n - BoolQ",
"### Abstractive Agents\n- Agent: TASE (Segal et al., 2020) trained on DROP. Predictions for:\n - DROP\n\n- Agent: BART Large with Adapters (Pfeiffer et al., 2020) trained on NarrativeQA. Predictions for:\n - NarrativeQA",
"### Multimodal Agents\n- Agent: Hybrider (Chen et al., 2020) trained on HybridQA. Predictions for:\n - HybridQA",
"### Languages\nAll the QA datasets used English and thus, the Agents's predictions are also in English.",
"## Dataset Structure\nEach agent has a folder. Inside, there is a folder for each dataset containing four files:\n- predict_nbest_predictions.json\n- predict_predictions.json / URL\n- predict_results.json (for span-extraction agents)",
"### Structure of predict_nbest_predictions.json",
"### Structure of predict_predictions.json",
"### Data Splits\nAll the QA datasets have 3 splits: train, validation, and test. The splits (Question-Context pairs) are provided in URL",
"## Considerations for Using the Data",
"### Social Impact of Dataset\nThe purpose of this dataset is to help developing new multi-agent models and analyzing the predictions of QA models.",
"### Discussion of Biases\nThe QA models used to create this predictions may not be perfect, generate false data, and contain biases. The release of these predictions may help to identify these flaws in the models.",
"## Additional Information",
"### License\nThe MetaQA Agents' prediction dataset version is released under the Apache-2.0 License."
] | [
"TAGS\n#task_categories-question-answering #multilinguality-monolingual #source_datasets-mrqa #source_datasets-duorc #source_datasets-qamr #source_datasets-boolq #source_datasets-commonsense_qa #source_datasets-hellaswag #source_datasets-social_i_qa #source_datasets-narrativeqa #language-English #license-apache-2.0 #multi-agent question answering #multi-agent QA #predictions #arxiv-2112.01922 #region-us \n",
"# Dataset Card for MetaQA Agents' Predictions",
"## Dataset Description\n- Repository: MetaQA's Repository\n- Paper: MetaQA: Combining Expert Agents for Multi-Skill Question Answering\n- Point of Contact: Haritz Puerto",
"## Dataset Summary\nThis dataset contains the answer predictions of the QA agents for the QA datasets used in MetaQA paper. In particular, it contains the following QA agents' predictions:",
"### Span-Extraction Agents\n- Agent: Span-BERT Large (Joshi et al.,2020) trained on SQuAD. Predictions for:\n - SQuAD\n - NewsQA\n - HotpotQA\n - SearchQA\n - Natural Questions\n - TriviaQA-web\n - QAMR\n - DuoRC\n - DROP\n- Agent: Span-BERT Large (Joshi et al.,2020) trained on NewsQA. Predictions for:\n - SQuAD\n - NewsQA\n - HotpotQA\n - SearchQA\n - Natural Questions\n - TriviaQA-web\n - QAMR\n - DuoRC\n - DROP\n- Agent: Span-BERT Large (Joshi et al.,2020) trained on HotpotQA. Predictions for:\n - SQuAD\n - NewsQA\n - HotpotQA\n - SearchQA\n - Natural Questions\n - TriviaQA-web\n - QAMR\n - DuoRC\n - DROP\n- Agent: Span-BERT Large (Joshi et al.,2020) trained on SearchQA. Predictions for:\n - SQuAD\n - NewsQA\n - HotpotQA\n - SearchQA\n - Natural Questions\n - TriviaQA-web\n - QAMR\n - DuoRC\n - DROP\n- Agent: Span-BERT Large (Joshi et al.,2020) trained on Natural Questions. Predictions for:\n - SQuAD\n - NewsQA\n - HotpotQA\n - SearchQA\n - Natural Questions\n - TriviaQA-web\n - QAMR\n - DuoRC\n - DROP\n- Agent: Span-BERT Large (Joshi et al.,2020) trained on TriviaQA-web. Predictions for:\n - SQuAD\n - NewsQA\n - HotpotQA\n - SearchQA\n - Natural Questions\n - TriviaQA-web\n - QAMR\n - DuoRC\n - DROP\n- Agent: Span-BERT Large (Joshi et al.,2020) trained on QAMR. Predictions for:\n - SQuAD\n - NewsQA\n - HotpotQA\n - SearchQA\n - Natural Questions\n - TriviaQA-web\n - QAMR\n - DuoRC\n - DROP\n- Agent: Span-BERT Large (Joshi et al.,2020) trained on DuoRC. Predictions for:\n - SQuAD\n - NewsQA\n - HotpotQA\n - SearchQA\n - Natural Questions\n - TriviaQA-web\n - QAMR\n - DuoRC\n - DROP\n- Agent: Span-BERT Large (Joshi et al.,2020) trained on DROP. Predictions for:\n - SQuAD\n - NewsQA\n - HotpotQA\n - SearchQA\n - Natural Questions\n - TriviaQA-web\n - QAMR\n - DuoRC\n - DROP",
"### Multiple-Choice Agents\n- Agent: RoBERTa Large (Liu et al., 2019) trained on RACE. Predictions for:\n - RACE\n - Commonsense QA\n - BoolQ\n - HellaSWAG\n - Social IQA\n\n- Agent: RoBERTa Large (Liu et al., 2019) trained on HellaSWAG. Predictions for:\n - RACE\n - Commonsense QA\n - BoolQ\n - HellaSWAG\n - Social IQA\n\n- Agent: AlBERT xxlarge-v2 (Lan et al., 2020) trained on Commonsense QA. Predictions for:\n - RACE\n - Commonsense QA\n - BoolQ\n - HellaSWAG\n - Social IQA\n\n- Agent: BERT Large-wwm (Devlin et al., 2019) trained on BoolQ. Predictions for:\n - BoolQ",
"### Abstractive Agents\n- Agent: TASE (Segal et al., 2020) trained on DROP. Predictions for:\n - DROP\n\n- Agent: BART Large with Adapters (Pfeiffer et al., 2020) trained on NarrativeQA. Predictions for:\n - NarrativeQA",
"### Multimodal Agents\n- Agent: Hybrider (Chen et al., 2020) trained on HybridQA. Predictions for:\n - HybridQA",
"### Languages\nAll the QA datasets used English and thus, the Agents's predictions are also in English.",
"## Dataset Structure\nEach agent has a folder. Inside, there is a folder for each dataset containing four files:\n- predict_nbest_predictions.json\n- predict_predictions.json / URL\n- predict_results.json (for span-extraction agents)",
"### Structure of predict_nbest_predictions.json",
"### Structure of predict_predictions.json",
"### Data Splits\nAll the QA datasets have 3 splits: train, validation, and test. The splits (Question-Context pairs) are provided in URL",
"## Considerations for Using the Data",
"### Social Impact of Dataset\nThe purpose of this dataset is to help developing new multi-agent models and analyzing the predictions of QA models.",
"### Discussion of Biases\nThe QA models used to create this predictions may not be perfect, generate false data, and contain biases. The release of these predictions may help to identify these flaws in the models.",
"## Additional Information",
"### License\nThe MetaQA Agents' prediction dataset version is released under the Apache-2.0 License."
] | [
145,
13,
43,
47,
573,
196,
71,
35,
29,
67,
17,
14,
41,
8,
34,
51,
5,
24
] | [
"passage: TAGS\n#task_categories-question-answering #multilinguality-monolingual #source_datasets-mrqa #source_datasets-duorc #source_datasets-qamr #source_datasets-boolq #source_datasets-commonsense_qa #source_datasets-hellaswag #source_datasets-social_i_qa #source_datasets-narrativeqa #language-English #license-apache-2.0 #multi-agent question answering #multi-agent QA #predictions #arxiv-2112.01922 #region-us \n# Dataset Card for MetaQA Agents' Predictions## Dataset Description\n- Repository: MetaQA's Repository\n- Paper: MetaQA: Combining Expert Agents for Multi-Skill Question Answering\n- Point of Contact: Haritz Puerto## Dataset Summary\nThis dataset contains the answer predictions of the QA agents for the QA datasets used in MetaQA paper. In particular, it contains the following QA agents' predictions:",
"passage: ### Span-Extraction Agents\n- Agent: Span-BERT Large (Joshi et al.,2020) trained on SQuAD. Predictions for:\n - SQuAD\n - NewsQA\n - HotpotQA\n - SearchQA\n - Natural Questions\n - TriviaQA-web\n - QAMR\n - DuoRC\n - DROP\n- Agent: Span-BERT Large (Joshi et al.,2020) trained on NewsQA. Predictions for:\n - SQuAD\n - NewsQA\n - HotpotQA\n - SearchQA\n - Natural Questions\n - TriviaQA-web\n - QAMR\n - DuoRC\n - DROP\n- Agent: Span-BERT Large (Joshi et al.,2020) trained on HotpotQA. Predictions for:\n - SQuAD\n - NewsQA\n - HotpotQA\n - SearchQA\n - Natural Questions\n - TriviaQA-web\n - QAMR\n - DuoRC\n - DROP\n- Agent: Span-BERT Large (Joshi et al.,2020) trained on SearchQA. Predictions for:\n - SQuAD\n - NewsQA\n - HotpotQA\n - SearchQA\n - Natural Questions\n - TriviaQA-web\n - QAMR\n - DuoRC\n - DROP\n- Agent: Span-BERT Large (Joshi et al.,2020) trained on Natural Questions. Predictions for:\n - SQuAD\n - NewsQA\n - HotpotQA\n - SearchQA\n - Natural Questions\n - TriviaQA-web\n - QAMR\n - DuoRC\n - DROP\n- Agent: Span-BERT Large (Joshi et al.,2020) trained on TriviaQA-web. Predictions for:\n - SQuAD\n - NewsQA\n - HotpotQA\n - SearchQA\n - Natural Questions\n - TriviaQA-web\n - QAMR\n - DuoRC\n - DROP\n- Agent: Span-BERT Large (Joshi et al.,2020) trained on QAMR. Predictions for:\n - SQuAD\n - NewsQA\n - HotpotQA\n - SearchQA\n - Natural Questions\n - TriviaQA-web\n - QAMR\n - DuoRC\n - DROP\n- Agent: Span-BERT Large (Joshi et al.,2020) trained on DuoRC. Predictions for:\n - SQuAD\n - NewsQA\n - HotpotQA\n - SearchQA\n - Natural Questions\n - TriviaQA-web\n - QAMR\n - DuoRC\n - DROP\n- Agent: Span-BERT Large (Joshi et al.,2020) trained on DROP. Predictions for:\n - SQuAD\n - NewsQA\n - HotpotQA\n - SearchQA\n - Natural Questions\n - TriviaQA-web\n - QAMR\n - DuoRC\n - DROP### Multiple-Choice Agents\n- Agent: RoBERTa Large (Liu et al., 2019) trained on RACE. Predictions for:\n - RACE\n - Commonsense QA\n - BoolQ\n - HellaSWAG\n - Social IQA\n\n- Agent: RoBERTa Large (Liu et al., 2019) trained on HellaSWAG. Predictions for:\n - RACE\n - Commonsense QA\n - BoolQ\n - HellaSWAG\n - Social IQA\n\n- Agent: AlBERT xxlarge-v2 (Lan et al., 2020) trained on Commonsense QA. Predictions for:\n - RACE\n - Commonsense QA\n - BoolQ\n - HellaSWAG\n - Social IQA\n\n- Agent: BERT Large-wwm (Devlin et al., 2019) trained on BoolQ. Predictions for:\n - BoolQ### Abstractive Agents\n- Agent: TASE (Segal et al., 2020) trained on DROP. Predictions for:\n - DROP\n\n- Agent: BART Large with Adapters (Pfeiffer et al., 2020) trained on NarrativeQA. Predictions for:\n - NarrativeQA### Multimodal Agents\n- Agent: Hybrider (Chen et al., 2020) trained on HybridQA. Predictions for:\n - HybridQA### Languages\nAll the QA datasets used English and thus, the Agents's predictions are also in English.## Dataset Structure\nEach agent has a folder. Inside, there is a folder for each dataset containing four files:\n- predict_nbest_predictions.json\n- predict_predictions.json / URL\n- predict_results.json (for span-extraction agents)### Structure of predict_nbest_predictions.json### Structure of predict_predictions.json### Data Splits\nAll the QA datasets have 3 splits: train, validation, and test. The splits (Question-Context pairs) are provided in URL## Considerations for Using the Data"
] |
a189eae9498de2ace8b54290c3f94b7286a4c7c2 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: facebook/bart-large-cnn
* Dataset: samsum
* Config: samsum
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@SamuelAllen1234](https://huggingface.co/SamuelAllen1234) for evaluating this model. | autoevaluate/autoeval-staging-eval-samsum-samsum-0e4017-15526144 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-04T15:42:49+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "facebook/bart-large-cnn", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "validation", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-09-04T15:46:04+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: facebook/bart-large-cnn
* Dataset: samsum
* Config: samsum
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @SamuelAllen1234 for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: facebook/bart-large-cnn\n* Dataset: samsum\n* Config: samsum\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @SamuelAllen1234 for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: facebook/bart-large-cnn\n* Dataset: samsum\n* Config: samsum\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @SamuelAllen1234 for evaluating this model."
] | [
13,
86,
18
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: facebook/bart-large-cnn\n* Dataset: samsum\n* Config: samsum\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @SamuelAllen1234 for evaluating this model."
] |
9d4e8f919e11525f564bd99fdfa71164b26c299a | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
* Dataset: samsum
* Config: samsum
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@SamuelAllen1234](https://huggingface.co/SamuelAllen1234) for evaluating this model. | autoevaluate/autoeval-staging-eval-samsum-samsum-a4ff98-15536145 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-04T15:42:54+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum", "metrics": [], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "validation", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-09-04T15:46:49+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
* Dataset: samsum
* Config: samsum
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @SamuelAllen1234 for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum\n* Dataset: samsum\n* Config: samsum\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @SamuelAllen1234 for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum\n* Dataset: samsum\n* Config: samsum\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @SamuelAllen1234 for evaluating this model."
] | [
13,
102,
18
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum\n* Dataset: samsum\n* Config: samsum\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @SamuelAllen1234 for evaluating this model."
] |
02de1f4f6049b8d7f53d924789fbf67aa5244139 | KoPI (Korpus Perayapan Indonesia)-NLLB contains only the Indonesian-family languages (aceh, bali, banjar, indonesia, jawa, minang, sunda) extracted from the NLLB dataset, [allenai/nllb](https://huggingface.co/datasets/allenai/nllb)
Each language subset was also filtered with deduplication techniques such as exact-hash (MD5) dedup and MinHash LSH near-dedup, as sketched below.
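The snippet below illustrates both stages with `hashlib` and the `datasketch` library; it is a generic illustration of the technique, not the exact pipeline used to build KoPI-NLLB.

```python
# Illustration of the two dedup stages (not the exact KoPI-NLLB pipeline):
# exact-hash (MD5) dedup followed by MinHash LSH near-dedup.
import hashlib
from datasketch import MinHash, MinHashLSH

def minhash(text, num_perm=128):
    m = MinHash(num_perm=num_perm)
    for token in set(text.split()):
        m.update(token.encode("utf-8"))
    return m

docs = ["contoh kalimat pertama", "contoh kalimat pertama",
        "contoh kalimat pertama!", "kalimat yang berbeda"]

seen_md5, kept = set(), []
lsh = MinHashLSH(threshold=0.8, num_perm=128)
for i, doc in enumerate(docs):
    digest = hashlib.md5(doc.encode("utf-8")).hexdigest()
    if digest in seen_md5:        # exact duplicate
        continue
    seen_md5.add(digest)
    m = minhash(doc)
    if lsh.query(m):              # near-duplicate of a kept document
        continue
    lsh.insert(f"doc-{i}", m)
    kept.append(doc)
print(kept)                       # deduplicated documents
```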
Details coming soon. | acul3/KoPI-NLLB | [
"region:us"
] | 2022-09-04T15:52:01+00:00 | {} | 2022-09-06T04:49:03+00:00 | [] | [] | TAGS
#region-us
| KoPI (Korpus Perayapan Indonesia)-NLLB contains only the Indonesian-family languages (aceh, bali, banjar, indonesia, jawa, minang, sunda) extracted from the NLLB dataset, allenai/nllb
Each language subset was also filtered with deduplication techniques such as exact-hash (MD5) dedup and MinHash LSH near-dedup.
Details coming soon. | [] | [
"TAGS\n#region-us \n"
] | [
6
] | [
"passage: TAGS\n#region-us \n"
] |
654c7c822d4e30e593b84c0d17ffe8f5415596d5 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen1234/testing
* Dataset: samsum
* Config: samsum
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@SamuelAllen12345](https://huggingface.co/SamuelAllen12345) for evaluating this model. | autoevaluate/autoeval-staging-eval-samsum-samsum-70f55d-15546146 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-04T17:24:29+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "SamuelAllen1234/testing", "metrics": ["rouge", "mse", "mae", "squad"], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "validation", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-09-04T17:28:25+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen1234/testing
* Dataset: samsum
* Config: samsum
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @SamuelAllen12345 for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen1234/testing\n* Dataset: samsum\n* Config: samsum\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @SamuelAllen12345 for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen1234/testing\n* Dataset: samsum\n* Config: samsum\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @SamuelAllen12345 for evaluating this model."
] | [
13,
84,
19
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen1234/testing\n* Dataset: samsum\n* Config: samsum\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @SamuelAllen12345 for evaluating this model."
] |
df39f858b9b08963848eeab993371aefa449f435 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: facebook/bart-large-cnn
* Dataset: samsum
* Config: samsum
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@SamuelAllen12345](https://huggingface.co/SamuelAllen12345) for evaluating this model. | autoevaluate/autoeval-staging-eval-samsum-samsum-85416c-15556147 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-04T17:24:32+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "facebook/bart-large-cnn", "metrics": ["rouge", "mse", "mae", "squad"], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "validation", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-09-04T17:27:44+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: facebook/bart-large-cnn
* Dataset: samsum
* Config: samsum
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @SamuelAllen12345 for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: facebook/bart-large-cnn\n* Dataset: samsum\n* Config: samsum\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @SamuelAllen12345 for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: facebook/bart-large-cnn\n* Dataset: samsum\n* Config: samsum\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @SamuelAllen12345 for evaluating this model."
] | [
13,
86,
19
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: facebook/bart-large-cnn\n* Dataset: samsum\n* Config: samsum\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @SamuelAllen12345 for evaluating this model."
] |
0c95d910357f5e262bd04790e5122eda781573fe | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: 21iridescent/RoBERTa-base-finetuned-squad2-lwt
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
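For local scoring, the `evaluate` library ships a `squad_v2` metric; the sketch below uses a fabricated example pair only to show the expected input format.

```python
# Sketch: compute SQuAD v2 metrics locally. The example pair below is
# fabricated purely to show the expected input format.
import evaluate

squad_v2 = evaluate.load("squad_v2")

predictions = [{
    "id": "example-0",
    "prediction_text": "France",
    "no_answer_probability": 0.0,
}]
references = [{
    "id": "example-0",
    "answers": {"text": ["France"], "answer_start": [159]},
}]

print(squad_v2.compute(predictions=predictions, references=references))
```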
## Contributions
Thanks to [@jsfs11](https://huggingface.co/jsfs11) for evaluating this model. | autoevaluate/autoeval-staging-eval-squad_v2-squad_v2-00af64-15586150 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-05T01:39:12+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "21iridescent/RoBERTa-base-finetuned-squad2-lwt", "metrics": [], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-09-05T01:42:07+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Question Answering
* Model: 21iridescent/RoBERTa-base-finetuned-squad2-lwt
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @jsfs11 for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: 21iridescent/RoBERTa-base-finetuned-squad2-lwt\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @jsfs11 for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: 21iridescent/RoBERTa-base-finetuned-squad2-lwt\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @jsfs11 for evaluating this model."
] | [
13,
104,
16
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Question Answering\n* Model: 21iridescent/RoBERTa-base-finetuned-squad2-lwt\n* Dataset: squad_v2\n* Config: squad_v2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @jsfs11 for evaluating this model."
] |
bb02409110bba66779b85f0271cef0f482f04404 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
* Dataset: samsum
* Config: samsum
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@SamuelAllen123](https://huggingface.co/SamuelAllen123) for evaluating this model. | autoevaluate/autoeval-staging-eval-samsum-samsum-175281-15596151 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-05T02:42:22+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum", "metrics": ["mse"], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "validation", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-09-05T02:46:20+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
* Dataset: samsum
* Config: samsum
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @SamuelAllen123 for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum\n* Dataset: samsum\n* Config: samsum\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @SamuelAllen123 for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum\n* Dataset: samsum\n* Config: samsum\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @SamuelAllen123 for evaluating this model."
] | [
13,
102,
18
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum\n* Dataset: samsum\n* Config: samsum\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @SamuelAllen123 for evaluating this model."
] |
a63bf346e599e6796a015f39c17baa988b9e9f7e | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
* Dataset: samsum
* Config: samsum
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@SamuelAllen123](https://huggingface.co/SamuelAllen123) for evaluating this model. | autoevaluate/autoeval-staging-eval-samsum-samsum-41c5cd-15606152 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-05T02:42:25+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum", "metrics": ["mae"], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "validation", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-09-05T02:46:21+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
* Dataset: samsum
* Config: samsum
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @SamuelAllen123 for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum\n* Dataset: samsum\n* Config: samsum\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @SamuelAllen123 for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum\n* Dataset: samsum\n* Config: samsum\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @SamuelAllen123 for evaluating this model."
] | [
13,
102,
18
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum\n* Dataset: samsum\n* Config: samsum\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @SamuelAllen123 for evaluating this model."
] |
3cb8c00aa2e79441a8358d44e42652bc6c90e10a | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
* Dataset: samsum
* Config: samsum
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@samuelallen123](https://huggingface.co/samuelallen123) for evaluating this model. | autoevaluate/autoeval-staging-eval-samsum-samsum-cc5bdf-15616153 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-09-05T02:42:31+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["samsum"], "eval_info": {"task": "summarization", "model": "SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum", "metrics": ["mse"], "dataset_name": "samsum", "dataset_config": "samsum", "dataset_split": "validation", "col_mapping": {"text": "dialogue", "target": "summary"}}} | 2022-09-05T02:47:47+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum
* Dataset: samsum
* Config: samsum
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @samuelallen123 for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum\n* Dataset: samsum\n* Config: samsum\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @samuelallen123 for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum\n* Dataset: samsum\n* Config: samsum\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @samuelallen123 for evaluating this model."
] | [
13,
102,
18
] | [
"passage: TAGS\n#autotrain #evaluation #region-us \n# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: SamuelAllen123/t5-efficient-large-nl36_fine_tuned_for_sum\n* Dataset: samsum\n* Config: samsum\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.## Contributions\n\nThanks to @samuelallen123 for evaluating this model."
] |