sha | text | id | tags | created_at | metadata | last_modified | arxiv | languages | tags_str | text_str | text_lists | processed_texts |
---|---|---|---|---|---|---|---|---|---|---|---|---|
b810f993848f15b9d0832de334da2a75ace592c0 |
# Dataset Card for JNLPBA
## Dataset Description
- **Homepage:** http://www.geniaproject.org/shared-tasks/bionlp-jnlpba-shared-task-2004
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER
NER For Bio-Entities
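The corpus annotates five bio-entity types (protein, DNA, RNA, cell_line, cell_type) with BIO tags. As a rough, unofficial sketch of how BIO-tagged tokens decode into entity spans:

```python
def decode_bio(tokens, tags):
    """Group BIO-tagged tokens into (entity_type, text) spans."""
    entities, current = [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                entities.append(current)
            current = (tag[2:], [token])
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(token)
        else:  # "O" (or an inconsistent I- tag) ends the current span
            if current:
                entities.append(current)
            current = None
    if current:
        entities.append(current)
    return [(etype, " ".join(words)) for etype, words in entities]

tokens = ["IL-2", "gene", "expression", "in", "T", "cells"]
tags   = ["B-DNA", "I-DNA", "O", "O", "B-cell_type", "I-cell_type"]
print(decode_bio(tokens, tags))
# → [('DNA', 'IL-2 gene'), ('cell_type', 'T cells')]
```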
## Citation Information
```
@inproceedings{collier-kim-2004-introduction,
title = "Introduction to the Bio-entity Recognition Task at {JNLPBA}",
author = "Collier, Nigel and Kim, Jin-Dong",
booktitle = "Proceedings of the International Joint Workshop
on Natural Language Processing in Biomedicine and its Applications
({NLPBA}/{B}io{NLP})",
month = aug # " 28th and 29th", year = "2004",
address = "Geneva, Switzerland",
publisher = "COLING",
url = "https://aclanthology.org/W04-1213",
pages = "73--78",
}
```
| bigbio/jnlpba | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-3.0",
"region:us"
] | 2022-11-13T22:09:04+00:00 | {"language": ["en"], "license": "cc-by-3.0", "multilinguality": "monolingual", "pretty_name": "JNLPBA", "bigbio_language": ["English"], "bigbio_license_shortname": "CC_BY_3p0", "homepage": "http://www.geniaproject.org/shared-tasks/bionlp-jnlpba-shared-task-2004", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION"]} | 2022-12-22T15:44:48+00:00 | [] | [
"en"
] |
296f506a445246476178df0d6e749a04cb29aafd |
# Dataset Card for LINNAEUS
## Dataset Description
- **Homepage:** http://linnaeus.sourceforge.net/
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,NED
Linnaeus is a novel corpus of full-text documents manually annotated for species mentions.
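Because the corpus pairs recognition with disambiguation, each species mention is also normalized to an NCBI Taxonomy identifier. A toy sketch with a hypothetical lookup table (the real mappings come from the corpus annotation files):

```python
# Hypothetical mention -> NCBI Taxonomy ID table; the real corpus
# derives these mappings from its annotation files.
SPECIES_IDS = {
    "human": "9606", "humans": "9606", "homo sapiens": "9606",
    "mouse": "10090", "mus musculus": "10090",
    "e. coli": "562", "escherichia coli": "562",
}

def normalize_species(mention):
    """Case-insensitive dictionary lookup; None if unknown."""
    return SPECIES_IDS.get(mention.strip().lower())

print(normalize_species("Homo sapiens"))  # → 9606
```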
## Citation Information
```
@Article{gerner2010linnaeus,
title={LINNAEUS: a species name identification system for biomedical literature},
author={Gerner, Martin and Nenadic, Goran and Bergman, Casey M},
journal={BMC bioinformatics},
volume={11},
number={1},
pages={1--17},
year={2010},
publisher={BioMed Central}
}
```
| bigbio/linnaeus | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-11-13T22:09:07+00:00 | {"language": ["en"], "license": "cc-by-4.0", "multilinguality": "monolingual", "pretty_name": "LINNAEUS", "bigbio_language": ["English"], "bigbio_license_shortname": "CC_BY_4p0", "homepage": "http://linnaeus.sourceforge.net/", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "NAMED_ENTITY_DISAMBIGUATION"]} | 2022-12-22T15:44:50+00:00 | [] | [
"en"
] |
cbcbb30006c80d35bcb380bc9f4b79b36c5123e1 |
# Dataset Card for LLL05
## Dataset Description
- **Homepage:** http://genome.jouy.inra.fr/texte/LLLchallenge
- **Pubmed:** True
- **Public:** True
- **Tasks:** RE
The LLL05 challenge task is to learn rules to extract protein/gene interactions from biology abstracts from the Medline
bibliography database. The goal of the challenge is to test the ability of the participating IE systems to identify the
interactions and the gene/proteins that interact. The participants will test their IE patterns on a test set with the
aim of extracting the correct agent and target. The challenge focuses on information extraction of gene interactions in
Bacillus subtilis. Extracting gene interaction is the most popular event IE task in biology. Bacillus subtilis (Bs) is
a model bacterium and many papers have been published on direct gene interactions involved in sporulation. The gene
interactions are generally mentioned in the abstract and the full text of the paper is not needed. Extracting gene
interactions means extracting the agent (proteins) and the target (genes) of all pairs of genic interactions from
sentences.
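A minimal illustration of the target output, using a hypothetical surface pattern in place of the extraction rules participants actually learn from training data:

```python
import re

# Toy pattern: "<agent> activates|represses|controls <target>".
# Real LLL05 systems induce such rules; this hard-coded regex is
# only an illustration of the (agent, target) output format.
PATTERN = re.compile(r"(\w+) (?:activates|represses|controls) (?:the )?(\w+)")

def extract_interactions(sentence):
    """Return all (agent, target) pairs matched in one sentence."""
    return [(m.group(1), m.group(2)) for m in PATTERN.finditer(sentence)]

print(extract_interactions("GerE represses sspJ in Bacillus subtilis."))
# → [('GerE', 'sspJ')]
```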
## Citation Information
```
@article{article,
author = {Nédellec, C.},
year = {2005},
month = {01},
pages = {},
title = {Learning Language in Logic - Genic Interaction Extraction Challenge},
journal = {Proceedings of the Learning Language in Logic 2005 Workshop at the International Conference on Machine Learning}
}
```
| bigbio/lll | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-11-13T22:09:11+00:00 | {"language": ["en"], "license": "unknown", "multilinguality": "monolingual", "pretty_name": "LLL05", "bigbio_language": ["English"], "bigbio_license_shortname": "UNKNOWN", "homepage": "http://genome.jouy.inra.fr/texte/LLLchallenge", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["RELATION_EXTRACTION"]} | 2022-12-22T15:44:52+00:00 | [] | [
"en"
] |
b66cce4f3ba806347564e325cde767eca1238ef9 |
# Dataset Card for MayoSRS
## Dataset Description
- **Homepage:** https://conservancy.umn.edu/handle/11299/196265
- **Pubmed:** False
- **Public:** True
- **Tasks:** STS
MayoSRS consists of 101 clinical term pairs whose relatedness was determined by nine medical coders and three physicians from the Mayo Clinic.
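Systems are commonly scored by rank correlation between predicted relatedness and the averaged human ratings; a minimal Spearman sketch (stdlib only, no tie handling):

```python
def spearman(xs, ys):
    """Spearman rho, assuming no tied values in either list."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0.0] * len(vals)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

human  = [4.0, 1.0, 3.0, 2.0]   # averaged coder ratings (illustrative)
system = [0.9, 0.1, 0.7, 0.3]   # model similarity scores (illustrative)
print(spearman(human, system))  # → 1.0 (identical rankings)
```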
## Citation Information
```
@article{pedersen2007measures,
title={Measures of semantic similarity and relatedness in the biomedical domain},
author={Pedersen, Ted and Pakhomov, Serguei VS and Patwardhan, Siddharth and Chute, Christopher G},
journal={Journal of biomedical informatics},
volume={40},
number={3},
pages={288--299},
year={2007},
publisher={Elsevier}
}
```
| bigbio/mayosrs | [
"multilinguality:monolingual",
"language:en",
"license:cc0-1.0",
"region:us"
] | 2022-11-13T22:09:14+00:00 | {"language": ["en"], "license": "cc0-1.0", "multilinguality": "monolingual", "pretty_name": "MayoSRS", "bigbio_language": ["English"], "bigbio_license_shortname": "CC0_1p0", "homepage": "https://conservancy.umn.edu/handle/11299/196265", "bigbio_pubmed": false, "bigbio_public": true, "bigbio_tasks": ["SEMANTIC_SIMILARITY"]} | 2022-12-22T15:44:58+00:00 | [] | [
"en"
] |
9212ff85ce1e6de4cf7418a8fabd21a9350b96f2 |
# Dataset Card for MedQA
## Dataset Description
- **Homepage:** https://github.com/jind11/MedQA
- **Pubmed:** False
- **Public:** True
- **Tasks:** QA
In this work, we present the first free-form multiple-choice OpenQA dataset for solving medical problems, MedQA,
collected from the professional medical board exams. It covers three languages: English, simplified Chinese, and
traditional Chinese, and contains 12,723, 34,251, and 14,123 questions for the three languages, respectively. Together
with the question data, we also collect and release a large-scale corpus from medical textbooks from which the reading
comprehension models can obtain necessary knowledge for answering the questions.
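Evaluation on this kind of multiple-choice exam data reduces to option accuracy; a sketch assuming each record carries the gold option under an `answer_idx`-style field (the field name is illustrative):

```python
def accuracy(items, predictions):
    """Fraction of questions whose predicted option key matches the gold key."""
    correct = sum(1 for item, pred in zip(items, predictions)
                  if item["answer_idx"] == pred)
    return correct / len(items)

# Hypothetical records mimicking a multiple-choice QA layout.
items = [
    {"question": "First-line treatment for ...?", "answer_idx": "C"},
    {"question": "Most likely diagnosis ...?",    "answer_idx": "A"},
]
print(accuracy(items, ["C", "B"]))  # → 0.5
```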
## Citation Information
```
@article{jin2021disease,
title={What disease does this patient have? a large-scale open domain question answering dataset from medical exams},
author={Jin, Di and Pan, Eileen and Oufattole, Nassim and Weng, Wei-Hung and Fang, Hanyi and Szolovits, Peter},
journal={Applied Sciences},
volume={11},
number={14},
pages={6421},
year={2021},
publisher={MDPI}
}
```
| bigbio/med_qa | [
"multilinguality:multilingual",
"language:en",
"language:zh",
"license:unknown",
"region:us"
] | 2022-11-13T22:09:18+00:00 | {"language": ["en", "zh"], "license": "unknown", "multilinguality": "multilingual", "pretty_name": "MedQA", "bigbio_language": ["English", "Chinese (Simplified)", "Chinese (Traditional, Taiwan)"], "bigbio_license_shortname": "UNKNOWN", "homepage": "https://github.com/jind11/MedQA", "bigbio_pubmed": false, "bigbio_public": true, "bigbio_tasks": ["QUESTION_ANSWERING"]} | 2023-09-26T12:00:32+00:00 | [] | [
"en",
"zh"
] |
652a8b920d5a174dfcdaeb7b655b133d11e21ab1 |
# Dataset Card for MeDAL
## Dataset Description
- **Homepage:** https://github.com/BruceWen120/medal
- **Pubmed:** True
- **Public:** True
- **Tasks:** NED
The Repository for Medical Dataset for Abbreviation Disambiguation for Natural Language Understanding (MeDAL) is
a large medical text dataset curated for abbreviation disambiguation, designed for natural language understanding
pre-training in the medical domain.
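Abbreviation disambiguation amounts to choosing an expansion from a sense inventory given the surrounding context; a toy bag-of-words sketch (the inventory below is hypothetical, not taken from the dataset):

```python
def disambiguate(abbrev, context, inventory):
    """Pick the expansion sharing the most words with the context."""
    ctx = set(context.lower().split())
    def overlap(expansion):
        return len(set(expansion.lower().split()) & ctx)
    return max(inventory[abbrev], key=overlap)

inventory = {"RA": ["rheumatoid arthritis", "right atrium"]}
context = "rheumatoid factor was elevated in this arthritis patient"
print(disambiguate("RA", context, inventory))
# → rheumatoid arthritis
```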
## Citation Information
```
@inproceedings{wen-etal-2020-medal,
    title = {MeDAL: Medical Abbreviation Disambiguation Dataset for Natural Language Understanding Pretraining},
author = {Wen, Zhi and Lu, Xing Han and Reddy, Siva},
booktitle = {Proceedings of the 3rd Clinical Natural Language Processing Workshop},
month = {Nov},
year = {2020},
address = {Online},
publisher = {Association for Computational Linguistics},
url = {https://www.aclweb.org/anthology/2020.clinicalnlp-1.15},
pages = {130--135},
}
```
| bigbio/medal | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-11-13T22:09:21+00:00 | {"language": ["en"], "license": "other", "multilinguality": "monolingual", "pretty_name": "MeDAL", "bigbio_language": ["English"], "bigbio_license_shortname": "NLM_LICENSE", "homepage": "https://github.com/BruceWen120/medal", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_DISAMBIGUATION"]} | 2022-12-22T15:45:07+00:00 | [] | [
"en"
] |
347303ce0b17954d20f03f0c48a87069c9496590 |
# Dataset Card for MedDialog
## Dataset Description
- **Homepage:** https://github.com/UCSD-AI4H/Medical-Dialogue-System
- **Pubmed:** False
- **Public:** True
- **Tasks:** TXTCLASS
The MedDialog dataset (English) contains conversations (in English) between doctors and patients. It has 0.26 million dialogues. The data is continuously growing and more dialogues will be added. The raw dialogues are from healthcaremagic.com and icliniq.com.
All copyrights of the data belong to healthcaremagic.com and icliniq.com.
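Each dialogue is a sequence of speaker-tagged utterances; a sketch that parses a raw transcript into turns (the `Patient:`/`Doctor:` prefixes are an assumed layout, not the official file format):

```python
def parse_dialogue(raw):
    """Split a transcript into (speaker, utterance) pairs."""
    turns = []
    for line in raw.strip().splitlines():
        if not line.strip():
            continue  # skip blank lines between turns
        speaker, _, utterance = line.partition(":")
        turns.append((speaker.strip(), utterance.strip()))
    return turns

raw = """
Patient: I have had a sore throat for three days.
Doctor: Any fever or difficulty swallowing?
"""
print(parse_dialogue(raw))
```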
## Citation Information
```
@article{DBLP:journals/corr/abs-2004-03329,
author = {Shu Chen and
Zeqian Ju and
Xiangyu Dong and
Hongchao Fang and
Sicheng Wang and
Yue Yang and
Jiaqi Zeng and
Ruisi Zhang and
Ruoyu Zhang and
Meng Zhou and
Penghui Zhu and
Pengtao Xie},
title = {MedDialog: {A} Large-scale Medical Dialogue Dataset},
journal = {CoRR},
volume = {abs/2004.03329},
year = {2020},
url = {https://arxiv.org/abs/2004.03329},
eprinttype = {arXiv},
eprint = {2004.03329},
biburl = {https://dblp.org/rec/journals/corr/abs-2004-03329.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| bigbio/meddialog | [
"multilinguality:multilingual",
"language:en",
"language:zh",
"license:unknown",
"arxiv:2004.03329",
"region:us"
] | 2022-11-13T22:09:25+00:00 | {"language": ["en", "zh"], "license": "unknown", "multilinguality": "multilingual", "pretty_name": "MedDialog", "bigbio_language": ["English", "Chinese"], "bigbio_license_shortname": "UNKNOWN", "homepage": "https://github.com/UCSD-AI4H/Medical-Dialogue-System", "bigbio_pubmed": false, "bigbio_public": true, "bigbio_tasks": ["TEXT_CLASSIFICATION"]} | 2022-12-22T15:45:13+00:00 | [
"2004.03329"
] | [
"en",
"zh"
] |
4453f85758c27614e375ce875ea8342209cb7158 |
# Dataset Card for MEDDOCAN
## Dataset Description
- **Homepage:** https://temu.bsc.es/meddocan/
- **Pubmed:** False
- **Public:** True
- **Tasks:** NER
MEDDOCAN: Medical Document Anonymization Track
This dataset is designed for the MEDDOCAN task, sponsored by Plan de Impulso de las Tecnologías del Lenguaje.
It is a manually classified collection of 1,000 clinical case reports derived from the Spanish Clinical Case Corpus (SPACCC), enriched with PHI expressions.
The annotation of the entire set of entity mentions was carried out by expert annotators and includes 29 entity types relevant for the anonymization of medical documents. 22 of these annotation types are actually present in the corpus: TERRITORIO, FECHAS, EDAD_SUJETO_ASISTENCIA, NOMBRE_SUJETO_ASISTENCIA, NOMBRE_PERSONAL_SANITARIO, SEXO_SUJETO_ASISTENCIA, CALLE, PAIS, ID_SUJETO_ASISTENCIA, CORREO, ID_TITULACION_PERSONAL_SANITARIO, ID_ASEGURAMIENTO, HOSPITAL, FAMILIARES_SUJETO_ASISTENCIA, INSTITUCION, ID_CONTACTO_ASISTENCIAL, NUMERO_TELEFONO, PROFESION, NUMERO_FAX, OTROS_SUJETO_ASISTENCIA, CENTRO_SALUD, ID_EMPLEO_PERSONAL_SANITARIO
For further information, please visit https://temu.bsc.es/meddocan/ or send an email to [email protected]
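Submissions to the track are scored on character-offset PHI spans; a sketch that masks gold (start, end, type) spans with their entity types (the example text and offsets below are illustrative):

```python
def mask_phi(text, spans):
    """Replace each (start, end, type) span with [TYPE].
    Process right to left so earlier offsets stay valid."""
    for start, end, etype in sorted(spans, reverse=True):
        text = text[:start] + f"[{etype}]" + text[end:]
    return text

text = "Paciente: Maria Lopez, 34 años, Madrid."
spans = [(10, 21, "NOMBRE_SUJETO_ASISTENCIA"), (32, 38, "TERRITORIO")]
print(mask_phi(text, spans))
# → Paciente: [NOMBRE_SUJETO_ASISTENCIA], 34 años, [TERRITORIO].
```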
## Citation Information
```
@inproceedings{marimon2019automatic,
title={Automatic De-identification of Medical Texts in Spanish: the MEDDOCAN Track, Corpus, Guidelines, Methods and Evaluation of Results.},
author={Marimon, Montserrat and Gonzalez-Agirre, Aitor and Intxaurrondo, Ander and Rodriguez, Heidy and Martin, Jose Lopez and Villegas, Marta and Krallinger, Martin},
booktitle={IberLEF@ SEPLN},
pages={618--638},
year={2019}
}
```
| bigbio/meddocan | [
"multilinguality:monolingual",
"language:es",
"license:cc-by-4.0",
"region:us"
] | 2022-11-13T22:09:29+00:00 | {"language": ["es"], "license": "cc-by-4.0", "multilinguality": "monolingual", "pretty_name": "MEDDOCAN", "bigbio_language": ["Spanish"], "bigbio_license_shortname": "CC_BY_4p0", "homepage": "https://temu.bsc.es/meddocan/", "bigbio_pubmed": false, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION"]} | 2022-12-22T15:45:24+00:00 | [] | [
"es"
] |
28e21e038985970beaf045341cfc17abe762081c |
# Dataset Card for MedHop
## Dataset Description
- **Homepage:** http://qangaroo.cs.ucl.ac.uk/
- **Pubmed:** True
- **Public:** True
- **Tasks:** QA
With the same format as WikiHop, this dataset is based on research paper
abstracts from PubMed, and the queries are about interactions between
pairs of drugs. The correct answer has to be inferred by combining
information from a chain of reactions of drugs and proteins.
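The required multi-hop inference can be pictured as path search over interaction facts pooled from several abstracts; a toy sketch with hypothetical drug-protein triples:

```python
from collections import deque

def find_chain(facts, start, goal):
    """BFS over undirected interaction edges; returns one entity path, or None."""
    graph = {}
    for a, b in facts:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Hypothetical facts, as if gathered from different abstracts.
facts = [("aspirin", "COX1"), ("COX1", "ibuprofen"), ("COX1", "PTGS1")]
print(find_chain(facts, "aspirin", "ibuprofen"))
# → ['aspirin', 'COX1', 'ibuprofen']
```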
## Citation Information
```
@article{welbl-etal-2018-constructing,
    title = {Constructing Datasets for Multi-hop Reading Comprehension Across Documents},
    author = {Welbl, Johannes and Stenetorp, Pontus and Riedel, Sebastian},
    journal = {Transactions of the Association for Computational Linguistics},
    volume = {6},
    year = {2018},
    address = {Cambridge, MA},
    publisher = {MIT Press},
    url = {https://aclanthology.org/Q18-1021},
    doi = {10.1162/tacl_a_00021},
    pages = {287--302},
    abstract = {Most Reading Comprehension methods limit themselves to queries which
    can be answered using a single sentence, paragraph, or document.
    Enabling models to combine disjoint pieces of textual evidence would
    extend the scope of machine comprehension methods, but currently no
    resources exist to train and test this capability. We propose a novel
    task to encourage the development of models for text understanding
    across multiple documents and to investigate the limits of existing
    methods. In our task, a model learns to seek and combine evidence
    -- effectively performing multihop, alias multi-step, inference.
    We devise a methodology to produce datasets for this task, given a
    collection of query-answer pairs and thematically linked documents.
    Two datasets from different domains are induced, and we identify
    potential pitfalls and devise circumvention strategies. We evaluate
    two previously proposed competitive models and find that one can
    integrate information across documents. However, both models
    struggle to select relevant information; and providing documents
    guaranteed to be relevant greatly improves their performance. While
    the models outperform several strong baselines, their best accuracy
    reaches 54.5% on an annotated test set, compared to human
    performance at 85.0%, leaving ample room for improvement.}
}
```
| bigbio/medhop | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] | 2022-11-13T22:09:32+00:00 | {"language": ["en"], "license": "cc-by-sa-3.0", "multilinguality": "monolingual", "pretty_name": "MedHop", "bigbio_language": ["English"], "bigbio_license_shortname": "CC_BY_SA_3p0", "homepage": "http://qangaroo.cs.ucl.ac.uk/", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["QUESTION_ANSWERING"]} | 2022-12-22T15:45:26+00:00 | [] | [
"en"
] |
a3a37bb61b661bcbc72ca1833836d0c128c26474 |
# Dataset Card for Medical Data
## Dataset Description
- **Homepage:**
- **Pubmed:** False
- **Public:** False
- **Tasks:** TE
This dataset is designed to do multiclass classification on medical drugs
## Citation Information
```
@misc{ask9medicaldata,
author = {Khan, Arbaaz},
title = {Sentiment Analysis for Medical Drugs},
year = {2019},
url = {https://www.kaggle.com/datasets/arbazkhan971/analyticvidhyadatasetsentiment},
}
```
| bigbio/medical_data | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-11-13T22:09:35+00:00 | {"language": ["en"], "license": "unknown", "multilinguality": "monolingual", "pretty_name": "Medical Data", "bigbio_language": ["English"], "bigbio_license_shortname": "UNKNOWN", "bigbio_pubmed": false, "bigbio_public": false, "bigbio_tasks": ["TEXTUAL_ENTAILMENT"]} | 2022-12-22T15:45:28+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-unknown #region-us
|
# Dataset Card for Medical Data
## Dataset Description
- Homepage:
- Pubmed: False
- Public: False
- Tasks: TE
This dataset is designed to do multiclass classification on medical drugs
| [
"# Dataset Card for Medical Data",
"## Dataset Description\n\n- Homepage: \n- Pubmed: False\n- Public: False\n- Tasks: TE\n\n\n This dataset is designed to do multiclass classification on medical drugs"
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-unknown #region-us \n",
"# Dataset Card for Medical Data",
"## Dataset Description\n\n- Homepage: \n- Pubmed: False\n- Public: False\n- Tasks: TE\n\n\n This dataset is designed to do multiclass classification on medical drugs"
] |
5a442c8e2f1a5ef0979930c16944d0cfa89b05f9 |
# Dataset Card for MEDIQA NLI
## Dataset Description
- **Homepage:** https://physionet.org/content/mednli-bionlp19/1.0.1/
- **Pubmed:** False
- **Public:** False
- **Tasks:** TE
Natural Language Inference (NLI) is the task of determining whether a given hypothesis can be
inferred from a given premise. Also known as Recognizing Textual Entailment (RTE), this task has
enjoyed popularity among researchers for some time. However, almost all datasets for this task
focused on open domain data such as news texts, blogs, and so on. To address this gap, the MedNLI
dataset was created for language inference in the medical domain. MedNLI is a derived dataset with
data sourced from MIMIC-III v1.4. In order to stimulate research for this problem, a shared task on
Medical Inference and Question Answering (MEDIQA) was organized at the workshop for biomedical
natural language processing (BioNLP) 2019. The dataset provided herein is a test set of 405 premise
hypothesis pairs for the NLI challenge in the MEDIQA shared task. Participants of the shared task
are expected to use the MedNLI data for development of their models and this dataset was used as an
unseen dataset for scoring each participant submission.
## Citation Information
```
@misc{https://doi.org/10.13026/gtv4-g455,
title = {MedNLI for Shared Task at ACL BioNLP 2019},
author = {Shivade, Chaitanya},
year = 2019,
publisher = {physionet.org},
doi = {10.13026/GTV4-G455},
url = {https://physionet.org/content/mednli-bionlp19/}
}
```
| bigbio/mediqa_nli | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-11-13T22:09:39+00:00 | {"language": ["en"], "license": "other", "multilinguality": "monolingual", "pretty_name": "MEDIQA NLI", "bigbio_language": ["English"], "bigbio_license_shortname": "PHYSIONET_LICENSE_1p5", "homepage": "https://physionet.org/content/mednli-bionlp19/1.0.1/", "bigbio_pubmed": false, "bigbio_public": false, "bigbio_tasks": ["TEXTUAL_ENTAILMENT"]} | 2022-12-22T15:45:31+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-other #region-us
|
# Dataset Card for MEDIQA NLI
## Dataset Description
- Homepage: URL
- Pubmed: False
- Public: False
- Tasks: TE
Natural Language Inference (NLI) is the task of determining whether a given hypothesis can be
inferred from a given premise. Also known as Recognizing Textual Entailment (RTE), this task has
enjoyed popularity among researchers for some time. However, almost all datasets for this task
focused on open domain data such as news texts, blogs, and so on. To address this gap, the MedNLI
dataset was created for language inference in the medical domain. MedNLI is a derived dataset with
data sourced from MIMIC-III v1.4. In order to stimulate research for this problem, a shared task on
Medical Inference and Question Answering (MEDIQA) was organized at the workshop for biomedical
natural language processing (BioNLP) 2019. The dataset provided herein is a test set of 405 premise
hypothesis pairs for the NLI challenge in the MEDIQA shared task. Participants of the shared task
are expected to use the MedNLI data for development of their models and this dataset was used as an
unseen dataset for scoring each participant submission.
| [
"# Dataset Card for MEDIQA NLI",
    "## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: False\n- Tasks: TE\n\n\nNatural Language Inference (NLI) is the task of determining whether a given hypothesis can be\ninferred from a given premise. Also known as Recognizing Textual Entailment (RTE), this task has\nenjoyed popularity among researchers for some time. However, almost all datasets for this task\nfocused on open domain data such as news texts, blogs, and so on. To address this gap, the MedNLI\ndataset was created for language inference in the medical domain. MedNLI is a derived dataset with\ndata sourced from MIMIC-III v1.4. In order to stimulate research for this problem, a shared task on\nMedical Inference and Question Answering (MEDIQA) was organized at the workshop for biomedical\nnatural language processing (BioNLP) 2019. The dataset provided herein is a test set of 405 premise\nhypothesis pairs for the NLI challenge in the MEDIQA shared task. Participants of the shared task\nare expected to use the MedNLI data for development of their models and this dataset was used as an\nunseen dataset for scoring each participant submission."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-other #region-us \n",
"# Dataset Card for MEDIQA NLI",
    "## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: False\n- Tasks: TE\n\n\nNatural Language Inference (NLI) is the task of determining whether a given hypothesis can be\ninferred from a given premise. Also known as Recognizing Textual Entailment (RTE), this task has\nenjoyed popularity among researchers for some time. However, almost all datasets for this task\nfocused on open domain data such as news texts, blogs, and so on. To address this gap, the MedNLI\ndataset was created for language inference in the medical domain. MedNLI is a derived dataset with\ndata sourced from MIMIC-III v1.4. In order to stimulate research for this problem, a shared task on\nMedical Inference and Question Answering (MEDIQA) was organized at the workshop for biomedical\nnatural language processing (BioNLP) 2019. The dataset provided herein is a test set of 405 premise\nhypothesis pairs for the NLI challenge in the MEDIQA shared task. Participants of the shared task\nare expected to use the MedNLI data for development of their models and this dataset was used as an\nunseen dataset for scoring each participant submission."
] |
9288641f4c785c95dc9079fa526dabb12efdb041 |
# Dataset Card for MEDIQA QA
## Dataset Description
- **Homepage:** https://sites.google.com/view/mediqa2019
- **Pubmed:** False
- **Public:** True
- **Tasks:** QA
The MEDIQA challenge is an ACL-BioNLP 2019 shared task aiming to attract further research efforts in Natural Language Inference (NLI), Recognizing Question Entailment (RQE), and their applications in medical Question Answering (QA).
Mailing List: https://groups.google.com/forum/#!forum/bionlp-mediqa
In the QA task, participants are tasked to:
- filter/classify the provided answers (1: correct, 0: incorrect).
- re-rank the answers.
## Citation Information
```
@inproceedings{MEDIQA2019,
author = {Asma {Ben Abacha} and Chaitanya Shivade and Dina Demner{-}Fushman},
title = {Overview of the MEDIQA 2019 Shared Task on Textual Inference, Question Entailment and Question Answering},
booktitle = {ACL-BioNLP 2019},
year = {2019}
}
```
| bigbio/mediqa_qa | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-11-13T22:09:42+00:00 | {"language": ["en"], "license": "unknown", "multilinguality": "monolingual", "pretty_name": "MEDIQA QA", "bigbio_language": ["English"], "bigbio_license_shortname": "UNKNOWN", "homepage": "https://sites.google.com/view/mediqa2019", "bigbio_pubmed": false, "bigbio_public": true, "bigbio_tasks": ["QUESTION_ANSWERING"]} | 2022-12-22T15:45:32+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-unknown #region-us
|
# Dataset Card for MEDIQA QA
## Dataset Description
- Homepage: URL
- Pubmed: False
- Public: True
- Tasks: QA
The MEDIQA challenge is an ACL-BioNLP 2019 shared task aiming to attract further research efforts in Natural Language Inference (NLI), Recognizing Question Entailment (RQE), and their applications in medical Question Answering (QA).
Mailing List: URL
In the QA task, participants are tasked to:
- filter/classify the provided answers (1: correct, 0: incorrect).
- re-rank the answers.
| [
"# Dataset Card for MEDIQA QA",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: QA\n\n\nThe MEDIQA challenge is an ACL-BioNLP 2019 shared task aiming to attract further research efforts in Natural Language Inference (NLI), Recognizing Question Entailment (RQE), and their applications in medical Question Answering (QA).\nMailing List: URL\n\nIn the QA task, participants are tasked to:\n- filter/classify the provided answers (1: correct, 0: incorrect).\n- re-rank the answers."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-unknown #region-us \n",
"# Dataset Card for MEDIQA QA",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: QA\n\n\nThe MEDIQA challenge is an ACL-BioNLP 2019 shared task aiming to attract further research efforts in Natural Language Inference (NLI), Recognizing Question Entailment (RQE), and their applications in medical Question Answering (QA).\nMailing List: URL\n\nIn the QA task, participants are tasked to:\n- filter/classify the provided answers (1: correct, 0: incorrect).\n- re-rank the answers."
] |
ba98e32191fc7e85a9ce6b88c691acf5cca32ad1 |
# Dataset Card for MEDIQA RQE
## Dataset Description
- **Homepage:** https://sites.google.com/view/mediqa2019
- **Pubmed:** False
- **Public:** True
- **Tasks:** TXT2CLASS
The MEDIQA challenge is an ACL-BioNLP 2019 shared task aiming to attract further research efforts in Natural Language Inference (NLI), Recognizing Question Entailment (RQE), and their applications in medical Question Answering (QA).
Mailing List: https://groups.google.com/forum/#!forum/bionlp-mediqa
The objective of the RQE task is to identify entailment between two questions in the context of QA. We use the following definition of question entailment: “a question A entails a question B if every answer to B is also a complete or partial answer to A” [1]
[1] A. Ben Abacha & D. Demner-Fushman. “Recognizing Question Entailment for Medical Question Answering”. AMIA 2016.
## Citation Information
```
@inproceedings{MEDIQA2019,
author = {Asma {Ben Abacha} and Chaitanya Shivade and Dina Demner{-}Fushman},
title = {Overview of the MEDIQA 2019 Shared Task on Textual Inference, Question Entailment and Question Answering},
booktitle = {ACL-BioNLP 2019},
year = {2019}
}
```
| bigbio/mediqa_rqe | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-11-13T22:09:46+00:00 | {"language": ["en"], "license": "unknown", "multilinguality": "monolingual", "pretty_name": "MEDIQA RQE", "bigbio_language": ["English"], "bigbio_license_shortname": "UNKNOWN", "homepage": "https://sites.google.com/view/mediqa2019", "bigbio_pubmed": false, "bigbio_public": true, "bigbio_tasks": ["TEXT_PAIRS_CLASSIFICATION"]} | 2022-12-22T15:45:33+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-unknown #region-us
|
# Dataset Card for MEDIQA RQE
## Dataset Description
- Homepage: URL
- Pubmed: False
- Public: True
- Tasks: TXT2CLASS
The MEDIQA challenge is an ACL-BioNLP 2019 shared task aiming to attract further research efforts in Natural Language Inference (NLI), Recognizing Question Entailment (RQE), and their applications in medical Question Answering (QA).
Mailing List: URL
The objective of the RQE task is to identify entailment between two questions in the context of QA. We use the following definition of question entailment: “a question A entails a question B if every answer to B is also a complete or partial answer to A” [1]
[1] A. Ben Abacha & D. Demner-Fushman. “Recognizing Question Entailment for Medical Question Answering”. AMIA 2016.
| [
"# Dataset Card for MEDIQA RQE",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: TXT2CLASS\n\n\nThe MEDIQA challenge is an ACL-BioNLP 2019 shared task aiming to attract further research efforts in Natural Language Inference (NLI), Recognizing Question Entailment (RQE), and their applications in medical Question Answering (QA).\nMailing List: URL\n\nThe objective of the RQE task is to identify entailment between two questions in the context of QA. We use the following definition of question entailment: “a question A entails a question B if every answer to B is also a complete or partial answer to A” [1]\n [1] A. Ben Abacha & D. Demner-Fushman. “Recognizing Question Entailment for Medical Question Answering”. AMIA 2016."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-unknown #region-us \n",
"# Dataset Card for MEDIQA RQE",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: TXT2CLASS\n\n\nThe MEDIQA challenge is an ACL-BioNLP 2019 shared task aiming to attract further research efforts in Natural Language Inference (NLI), Recognizing Question Entailment (RQE), and their applications in medical Question Answering (QA).\nMailing List: URL\n\nThe objective of the RQE task is to identify entailment between two questions in the context of QA. We use the following definition of question entailment: “a question A entails a question B if every answer to B is also a complete or partial answer to A” [1]\n [1] A. Ben Abacha & D. Demner-Fushman. “Recognizing Question Entailment for Medical Question Answering”. AMIA 2016."
] |
c8de447c7fb7241ef37538644107a6a5e1d95e43 |
# Dataset Card for MedMentions
## Dataset Description
- **Homepage:** https://github.com/chanzuckerberg/MedMentions
- **Pubmed:** True
- **Public:** True
- **Tasks:** NED,NER
MedMentions is a new manually annotated resource for the recognition of biomedical concepts.
What distinguishes MedMentions from other annotated biomedical corpora is its size (over 4,000
abstracts and over 350,000 linked mentions), as well as the size of the concept ontology (over
3 million concepts from UMLS 2017) and its broad coverage of biomedical disciplines.
Corpus: The MedMentions corpus consists of 4,392 papers (Titles and Abstracts) randomly selected
from among papers released on PubMed in 2016, that were in the biomedical field, published in
the English language, and had both a Title and an Abstract.
Annotators: We recruited a team of professional annotators with rich experience in biomedical
content curation to exhaustively annotate all UMLS® (2017AA full version) entity mentions in
these papers.
Annotation quality: We did not collect stringent IAA (Inter-annotator agreement) data. To gain
insight on the annotation quality of MedMentions, we randomly selected eight papers from the
annotated corpus, containing a total of 469 concepts. Two biologists ('Reviewer') who did not
participate in the annotation task then each reviewed four papers. The agreement between
Reviewers and Annotators, an estimate of the Precision of the annotations, was 97.3%.
## Citation Information
```
@misc{mohan2019medmentions,
title={MedMentions: A Large Biomedical Corpus Annotated with UMLS Concepts},
author={Sunil Mohan and Donghui Li},
year={2019},
eprint={1902.09476},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| bigbio/medmentions | [
"multilinguality:monolingual",
"language:en",
"license:cc0-1.0",
"arxiv:1902.09476",
"region:us"
] | 2022-11-13T22:09:49+00:00 | {"language": ["en"], "license": "cc0-1.0", "multilinguality": "monolingual", "pretty_name": "MedMentions", "bigbio_language": ["English"], "bigbio_license_shortname": "CC0_1p0", "homepage": "https://github.com/chanzuckerberg/MedMentions", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_DISAMBIGUATION", "NAMED_ENTITY_RECOGNITION"]} | 2022-12-22T15:45:34+00:00 | [
"1902.09476"
] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-cc0-1.0 #arxiv-1902.09476 #region-us
|
# Dataset Card for MedMentions
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NED,NER
MedMentions is a new manually annotated resource for the recognition of biomedical concepts.
What distinguishes MedMentions from other annotated biomedical corpora is its size (over 4,000
abstracts and over 350,000 linked mentions), as well as the size of the concept ontology (over
3 million concepts from UMLS 2017) and its broad coverage of biomedical disciplines.
Corpus: The MedMentions corpus consists of 4,392 papers (Titles and Abstracts) randomly selected
from among papers released on PubMed in 2016, that were in the biomedical field, published in
the English language, and had both a Title and an Abstract.
Annotators: We recruited a team of professional annotators with rich experience in biomedical
content curation to exhaustively annotate all UMLS® (2017AA full version) entity mentions in
these papers.
Annotation quality: We did not collect stringent IAA (Inter-annotator agreement) data. To gain
insight on the annotation quality of MedMentions, we randomly selected eight papers from the
annotated corpus, containing a total of 469 concepts. Two biologists ('Reviewer') who did not
participate in the annotation task then each reviewed four papers. The agreement between
Reviewers and Annotators, an estimate of the Precision of the annotations, was 97.3%.
| [
"# Dataset Card for MedMentions",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NED,NER\n\n\nMedMentions is a new manually annotated resource for the recognition of biomedical concepts.\nWhat distinguishes MedMentions from other annotated biomedical corpora is its size (over 4,000\nabstracts and over 350,000 linked mentions), as well as the size of the concept ontology (over\n3 million concepts from UMLS 2017) and its broad coverage of biomedical disciplines.\n\nCorpus: The MedMentions corpus consists of 4,392 papers (Titles and Abstracts) randomly selected\nfrom among papers released on PubMed in 2016, that were in the biomedical field, published in\nthe English language, and had both a Title and an Abstract.\n\nAnnotators: We recruited a team of professional annotators with rich experience in biomedical\ncontent curation to exhaustively annotate all UMLS® (2017AA full version) entity mentions in\nthese papers.\n\nAnnotation quality: We did not collect stringent IAA (Inter-annotator agreement) data. To gain\ninsight on the annotation quality of MedMentions, we randomly selected eight papers from the\nannotated corpus, containing a total of 469 concepts. Two biologists ('Reviewer') who did not\nparticipate in the annotation task then each reviewed four papers. The agreement between\nReviewers and Annotators, an estimate of the Precision of the annotations, was 97.3%."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-cc0-1.0 #arxiv-1902.09476 #region-us \n",
"# Dataset Card for MedMentions",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NED,NER\n\n\nMedMentions is a new manually annotated resource for the recognition of biomedical concepts.\nWhat distinguishes MedMentions from other annotated biomedical corpora is its size (over 4,000\nabstracts and over 350,000 linked mentions), as well as the size of the concept ontology (over\n3 million concepts from UMLS 2017) and its broad coverage of biomedical disciplines.\n\nCorpus: The MedMentions corpus consists of 4,392 papers (Titles and Abstracts) randomly selected\nfrom among papers released on PubMed in 2016, that were in the biomedical field, published in\nthe English language, and had both a Title and an Abstract.\n\nAnnotators: We recruited a team of professional annotators with rich experience in biomedical\ncontent curation to exhaustively annotate all UMLS® (2017AA full version) entity mentions in\nthese papers.\n\nAnnotation quality: We did not collect stringent IAA (Inter-annotator agreement) data. To gain\ninsight on the annotation quality of MedMentions, we randomly selected eight papers from the\nannotated corpus, containing a total of 469 concepts. Two biologists ('Reviewer') who did not\nparticipate in the annotation task then each reviewed four papers. The agreement between\nReviewers and Annotators, an estimate of the Precision of the annotations, was 97.3%."
] |
fe726c698a3fed11a9871dab00de553682808bd9 |
# Dataset Card for MeQSum
## Dataset Description
- **Homepage:** https://github.com/abachaa/MeQSum
- **Pubmed:** False
- **Public:** True
- **Tasks:** SUM
Dataset for medical question summarization introduced in the ACL 2019 paper "On the Summarization of Consumer Health
Questions". Question understanding is one of the main challenges in question answering. In real world applications,
users often submit natural language questions that are longer than needed and include peripheral information that
increases the complexity of the question, leading to substantially more false positives in answer retrieval. In this
paper, we study neural abstractive models for medical question summarization. We introduce the MeQSum corpus of 1,000
summarized consumer health questions.
## Citation Information
```
@inproceedings{ben-abacha-demner-fushman-2019-summarization,
title = "On the Summarization of Consumer Health Questions",
author = "Ben Abacha, Asma and
Demner-Fushman, Dina",
booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P19-1215",
doi = "10.18653/v1/P19-1215",
pages = "2228--2234",
abstract = "Question understanding is one of the main challenges in question answering. In real world applications, users often submit natural language questions that are longer than needed and include peripheral information that increases the complexity of the question, leading to substantially more false positives in answer retrieval. In this paper, we study neural abstractive models for medical question summarization. We introduce the MeQSum corpus of 1,000 summarized consumer health questions. We explore data augmentation methods and evaluate state-of-the-art neural abstractive models on this new task. In particular, we show that semantic augmentation from question datasets improves the overall performance, and that pointer-generator networks outperform sequence-to-sequence attentional models on this task, with a ROUGE-1 score of 44.16{\%}. We also present a detailed error analysis and discuss directions for improvement that are specific to question summarization.",
}
```
| bigbio/meqsum | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-11-13T22:09:53+00:00 | {"language": ["en"], "license": "unknown", "multilinguality": "monolingual", "pretty_name": "MeQSum", "bigbio_language": ["English"], "bigbio_license_shortname": "UNKNOWN", "homepage": "https://github.com/abachaa/MeQSum", "bigbio_pubmed": false, "bigbio_public": true, "bigbio_tasks": ["SUMMARIZATION"]} | 2022-12-22T15:45:35+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-unknown #region-us
|
# Dataset Card for MeQSum
## Dataset Description
- Homepage: URL
- Pubmed: False
- Public: True
- Tasks: SUM
Dataset for medical question summarization introduced in the ACL 2019 paper "On the Summarization of Consumer Health
Questions". Question understanding is one of the main challenges in question answering. In real world applications,
users often submit natural language questions that are longer than needed and include peripheral information that
increases the complexity of the question, leading to substantially more false positives in answer retrieval. In this
paper, we study neural abstractive models for medical question summarization. We introduce the MeQSum corpus of 1,000
summarized consumer health questions.
| [
"# Dataset Card for MeQSum",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: SUM\n\n\nDataset for medical question summarization introduced in the ACL 2019 paper \"On the Summarization of Consumer Health\nQuestions\". Question understanding is one of the main challenges in question answering. In real world applications,\nusers often submit natural language questions that are longer than needed and include peripheral information that\nincreases the complexity of the question, leading to substantially more false positives in answer retrieval. In this\npaper, we study neural abstractive models for medical question summarization. We introduce the MeQSum corpus of 1,000\nsummarized consumer health questions."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-unknown #region-us \n",
"# Dataset Card for MeQSum",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: SUM\n\n\nDataset for medical question summarization introduced in the ACL 2019 paper \"On the Summarization of Consumer Health\nQuestions\". Question understanding is one of the main challenges in question answering. In real world applications,\nusers often submit natural language questions that are longer than needed and include peripheral information that\nincreases the complexity of the question, leading to substantially more false positives in answer retrieval. In this\npaper, we study neural abstractive models for medical question summarization. We introduce the MeQSum corpus of 1,000\nsummarized consumer health questions."
] |
887f91a8ff419e4e95b7d7ec027d25a53b72a8db |
# Dataset Card for MiniMayoSRS
## Dataset Description
- **Homepage:** https://conservancy.umn.edu/handle/11299/196265
- **Pubmed:** False
- **Public:** True
- **Tasks:** STS
MiniMayoSRS is a subset of the MayoSRS and consists of 30 term pairs on which a higher inter-annotator agreement was
achieved. The average correlation between physicians is 0.68. The average correlation between medical coders is 0.78.
## Citation Information
```
@article{pedersen2007measures,
title={Measures of semantic similarity and relatedness in the biomedical domain},
author={Pedersen, Ted and Pakhomov, Serguei VS and Patwardhan, Siddharth and Chute, Christopher G},
journal={Journal of biomedical informatics},
volume={40},
number={3},
pages={288--299},
year={2007},
publisher={Elsevier}
}
```
| bigbio/minimayosrs | [
"multilinguality:monolingual",
"language:en",
"license:cc0-1.0",
"region:us"
] | 2022-11-13T22:09:56+00:00 | {"language": ["en"], "license": "cc0-1.0", "multilinguality": "monolingual", "pretty_name": "MiniMayoSRS", "bigbio_language": ["English"], "bigbio_license_shortname": "CC0_1p0", "homepage": "https://conservancy.umn.edu/handle/11299/196265", "bigbio_pubmed": false, "bigbio_public": true, "bigbio_tasks": ["SEMANTIC_SIMILARITY"]} | 2022-12-22T15:45:36+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-cc0-1.0 #region-us
|
# Dataset Card for MiniMayoSRS
## Dataset Description
- Homepage: URL
- Pubmed: False
- Public: True
- Tasks: STS
MiniMayoSRS is a subset of the MayoSRS and consists of 30 term pairs on which a higher inter-annotator agreement was
achieved. The average correlation between physicians is 0.68. The average correlation between medical coders is 0.78.
| [
"# Dataset Card for MiniMayoSRS",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: STS\n\n\nMiniMayoSRS is a subset of the MayoSRS and consists of 30 term pairs on which a higher inter-annotator agreement was\nachieved. The average correlation between physicians is 0.68. The average correlation between medical coders is 0.78."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-cc0-1.0 #region-us \n",
"# Dataset Card for MiniMayoSRS",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: STS\n\n\nMiniMayoSRS is a subset of the MayoSRS and consists of 30 term pairs on which a higher inter-annotator agreement was\nachieved. The average correlation between physicians is 0.68. The average correlation between medical coders is 0.78."
] |
c4d7318c6392e85e034f40fd2f5aa37832509446 |
# Dataset Card for miRNA
## Dataset Description
- **Homepage:** https://www.scai.fraunhofer.de/en/business-research-areas/bioinformatics/downloads/download-mirna-test-corpus.html
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,NED
The corpus consists of 301 Medline citations. The documents were screened for
mentions of miRNA in the abstract text. Gene, disease and miRNA entities were manually
annotated. The corpus comprises two separate files, a train and a test set, coming
from 201 and 100 documents respectively.
## Citation Information
```
@Article{Bagewadi2014,
author={Bagewadi, Shweta
and Bobi{\'{c}}, Tamara
and Hofmann-Apitius, Martin
and Fluck, Juliane
and Klinger, Roman},
title={Detecting miRNA Mentions and Relations in Biomedical Literature},
journal={F1000Research},
year={2014},
month={Aug},
day={28},
publisher={F1000Research},
volume={3},
pages={205-205},
keywords={MicroRNAs; corpus; prediction algorithms},
abstract={
INTRODUCTION: MicroRNAs (miRNAs) have demonstrated their potential as post-transcriptional
gene expression regulators, participating in a wide spectrum of regulatory events such as
apoptosis, differentiation, and stress response. Apart from the role of miRNAs in normal
physiology, their dysregulation is implicated in a vast array of diseases. Dissection of
miRNA-related associations are valuable for contemplating their mechanism in diseases,
leading to the discovery of novel miRNAs for disease prognosis, diagnosis, and therapy.
MOTIVATION: Apart from databases and prediction tools, miRNA-related information is largely
available as unstructured text. Manual retrieval of these associations can be labor-intensive
due to steadily growing number of publications. Additionally, most of the published miRNA
entity recognition methods are keyword based, further subjected to manual inspection for
retrieval of relations. Despite the fact that several databases host miRNA-associations
derived from text, lower sensitivity and lack of published details for miRNA entity
recognition and associated relations identification has motivated the need for developing
comprehensive methods that are freely available for the scientific community. Additionally,
the lack of a standard corpus for miRNA-relations has caused difficulty in evaluating the
available systems. We propose methods to automatically extract mentions of miRNAs, species,
genes/proteins, disease, and relations from scientific literature. Our generated corpora,
along with dictionaries, and miRNA regular expression are freely available for academic
purposes. To our knowledge, these resources are the most comprehensive developed so far.
RESULTS: The identification of specific miRNA mentions reaches a recall of 0.94 and
precision of 0.93. Extraction of miRNA-disease and miRNA-gene relations lead to an
F1 score of up to 0.76. A comparison of the information extracted by our approach to
the databases miR2Disease and miRSel for the extraction of Alzheimer's disease
related relations shows the capability of our proposed methods in identifying correct
relations with improved sensitivity. The published resources and described methods can
help the researchers for maximal retrieval of miRNA-relations and generation of
miRNA-regulatory networks. AVAILABILITY: The training and test corpora, annotation
guidelines, developed dictionaries, and supplementary files are available at
http://www.scai.fraunhofer.de/mirna-corpora.html.
},
note={26535109[pmid]},
note={PMC4602280[pmcid]},
issn={2046-1402},
url={https://pubmed.ncbi.nlm.nih.gov/26535109},
language={eng}
}
```
| bigbio/mirna | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-nc-3.0",
"region:us"
] | 2022-11-13T22:10:00+00:00 | {"language": ["en"], "license": "cc-by-nc-3.0", "multilinguality": "monolingual", "pretty_name": "miRNA", "bigbio_language": ["English"], "bigbio_license_shortname": "CC_BY_NC_3p0", "homepage": "https://www.scai.fraunhofer.de/en/business-research-areas/bioinformatics/downloads/download-mirna-test-corpus.html", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "NAMED_ENTITY_DISAMBIGUATION"]} | 2022-12-22T15:45:38+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-cc-by-nc-3.0 #region-us
|
# Dataset Card for miRNA
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NER,NED
The corpus consists of 301 Medline citations. The documents were screened for
mentions of miRNA in the abstract text. Gene, disease and miRNA entities were manually
annotated. The corpus comprises two separate files, a train and a test set, coming
from 201 and 100 documents respectively.
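The description mentions a miRNA regular expression released with the corpus. As a rough illustration of what such a pattern matches, here is a simplified sketch (an assumption for illustration, deliberately far narrower than the published expression):

```python
import re

# Simplified miRNA name pattern -- illustrative only, and far narrower than
# the regular expression released with the corpus.
MIRNA_RE = re.compile(r"\b(?:hsa-)?(?:miR|let)-\d+[a-z]?(?:-[35]p)?\b")

sentence = "Both hsa-miR-21 and let-7a were downregulated; miR-146a-5p was not."
print(MIRNA_RE.findall(sentence))  # -> ['hsa-miR-21', 'let-7a', 'miR-146a-5p']
```

Only the non-capturing groups keep `findall` returning whole matches; a real pattern would also need species prefixes other than `hsa-` and cluster/star notations.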
| [
"# Dataset Card for miRNA",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,NED\n\n\nThe corpus consists of 301 Medline citations. The documents were screened for\nmentions of miRNA in the abstract text. Gene, disease and miRNA entities were manually\nannotated. The corpus comprises of two separate files, a train and a test set, coming\nfrom 201 and 100 documents respectively."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-cc-by-nc-3.0 #region-us \n",
"# Dataset Card for miRNA",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,NED\n\n\nThe corpus consists of 301 Medline citations. The documents were screened for\nmentions of miRNA in the abstract text. Gene, disease and miRNA entities were manually\nannotated. The corpus comprises of two separate files, a train and a test set, coming\nfrom 201 and 100 documents respectively."
] |
e83ef1dba6092f84581c3d6a57b72bcec86811ac |
# Dataset Card for MLEE
## Dataset Description
- **Homepage:** http://www.nactem.ac.uk/MLEE/
- **Pubmed:** True
- **Public:** True
- **Tasks:** EE,NER,RE,COREF
MLEE is an event extraction corpus consisting of manually annotated abstracts of papers
on angiogenesis. It contains annotations for entities, relations, events and coreferences.
The annotations span molecular, cellular, tissue, and organ-level processes.
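The annotations are distributed in brat-style standoff files (an assumption about the release format); a minimal sketch of parsing entity (`T`) and event (`E`) lines, ignoring the relations, modifications, and discontinuous spans that real `.ann` files also contain:

```python
def parse_standoff(ann: str):
    """Parse brat-style standoff lines into entities and events.

    Handles only simple 'T' (text-bound) and 'E' (event) lines; real .ann
    files also carry relations, modifications, and discontinuous spans.
    """
    entities, events = {}, {}
    for line in ann.strip().splitlines():
        parts = line.split("\t")
        if parts[0].startswith("T"):
            label, start, end = parts[1].split(" ")
            entities[parts[0]] = {"type": label, "start": int(start),
                                  "end": int(end), "text": parts[2]}
        elif parts[0].startswith("E"):
            trigger, *args = parts[1].split(" ")
            etype, tid = trigger.split(":")
            events[parts[0]] = {"type": etype, "trigger": tid,
                                "args": dict(a.split(":") for a in args)}
    return entities, events

ann = "T1\tGene_expression 0 10\texpression\nT2\tProtein 14 19\tVEGFA\nE1\tGene_expression:T1 Theme:T2"
entities, events = parse_standoff(ann)
print(events["E1"]["args"])  # -> {'Theme': 'T2'}
```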
## Citation Information
```
@article{pyysalo2012event,
title={Event extraction across multiple levels of biological organization},
author={Pyysalo, Sampo and Ohta, Tomoko and Miwa, Makoto and Cho, Han-Cheol and Tsujii, Jun'ichi and Ananiadou, Sophia},
journal={Bioinformatics},
volume={28},
number={18},
pages={i575--i581},
year={2012},
publisher={Oxford University Press}
}
```
| bigbio/mlee | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-nc-sa-3.0",
"region:us"
] | 2022-11-13T22:10:03+00:00 | {"language": ["en"], "license": "cc-by-nc-sa-3.0", "multilinguality": "monolingual", "pretty_name": "MLEE", "bigbio_language": ["English"], "bigbio_license_shortname": "CC_BY_NC_SA_3p0", "homepage": "http://www.nactem.ac.uk/MLEE/", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["EVENT_EXTRACTION", "NAMED_ENTITY_RECOGNITION", "RELATION_EXTRACTION", "COREFERENCE_RESOLUTION"]} | 2022-12-22T15:45:39+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-cc-by-nc-sa-3.0 #region-us
|
# Dataset Card for MLEE
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: EE,NER,RE,COREF
MLEE is an event extraction corpus consisting of manually annotated abstracts of papers
on angiogenesis. It contains annotations for entities, relations, events and coreferences.
The annotations span molecular, cellular, tissue, and organ-level processes.
| [
"# Dataset Card for MLEE",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: EE,NER,RE,COREF\n\n\nMLEE is an event extraction corpus consisting of manually annotated abstracts of papers\non angiogenesis. It contains annotations for entities, relations, events and coreferences\nThe annotations span molecular, cellular, tissue, and organ-level processes."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-cc-by-nc-sa-3.0 #region-us \n",
"# Dataset Card for MLEE",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: EE,NER,RE,COREF\n\n\nMLEE is an event extraction corpus consisting of manually annotated abstracts of papers\non angiogenesis. It contains annotations for entities, relations, events and coreferences\nThe annotations span molecular, cellular, tissue, and organ-level processes."
] |
408873bbb53acf9c97a578dc7e8faabed48f4ca2 |
# Dataset Card for MQP
## Dataset Description
- **Homepage:** https://github.com/curai/medical-question-pair-dataset
- **Pubmed:** False
- **Public:** True
- **Tasks:** STS
Medical Question Pairs dataset by McCreery et al. (2020) contains pairs of medical questions and paraphrased versions of
the question prepared by medical professionals. Paraphrased versions were labelled as similar (syntactically dissimilar
but contextually similar) or dissimilar (may look syntactically similar but contextually dissimilar). Labels: 1: similar, 0: dissimilar
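A minimal sketch of the pair-classification setup with a token-overlap baseline (illustrative only; the dataset is deliberately constructed so that surface overlap is misleading):

```python
def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two questions."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def predict_label(q1: str, q2: str, threshold: float = 0.5) -> int:
    """Return 1 (similar) or 0 (dissimilar) by thresholding token overlap.

    MQP pairs are built so that surface overlap is misleading (dissimilar
    pairs can share many words), so this baseline mostly illustrates the
    label convention rather than a strong model.
    """
    return int(jaccard(q1, q2) >= threshold)

print(predict_label("Can I take ibuprofen daily?",
                    "Is it safe to take ibuprofen every day?"))  # -> 0
```

Note that the example pair is a paraphrase (gold label 1) yet the overlap baseline predicts 0, which is exactly the failure mode the dataset was designed to expose.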
## Citation Information
```
@inproceedings{mccreery2020effective,
  author    = {McCreery, Clara H. and Katariya, Namit and Kannan, Anitha and Chablani, Manish and Amatriain, Xavier},
  title     = {Effective Transfer Learning for Identifying Similar Questions: Matching User Questions to COVID-19 FAQs},
  booktitle = {KDD '20: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining},
  pages     = {3458--3465},
  year      = {2020},
  url       = {https://github.com/curai/medical-question-pair-dataset}
}
```
| bigbio/mqp | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-11-13T22:10:07+00:00 | {"language": ["en"], "license": "unknown", "multilinguality": "monolingual", "pretty_name": "MQP", "bigbio_language": ["English"], "bigbio_license_shortname": "UNKNOWN", "homepage": "https://github.com/curai/medical-question-pair-dataset", "bigbio_pubmed": false, "bigbio_public": true, "bigbio_tasks": ["SEMANTIC_SIMILARITY"]} | 2022-12-22T15:45:40+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-unknown #region-us
|
# Dataset Card for MQP
## Dataset Description
- Homepage: URL
- Pubmed: False
- Public: True
- Tasks: STS
Medical Question Pairs dataset by McCreery et al. (2020) contains pairs of medical questions and paraphrased versions of
the question prepared by medical professionals. Paraphrased versions were labelled as similar (syntactically dissimilar
but contextually similar) or dissimilar (may look syntactically similar but contextually dissimilar). Labels: 1: similar, 0: dissimilar
| [
"# Dataset Card for MQP",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: STS\n\n\nMedical Question Pairs dataset by McCreery et al (2020) contains pairs of medical questions and paraphrased versions of \nthe question prepared by medical professional. Paraphrased versions were labelled as similar (syntactically dissimilar \nbut contextually similar ) or dissimilar (syntactically may look similar but contextually dissimilar). Labels 1: similar, 0: dissimilar"
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-unknown #region-us \n",
"# Dataset Card for MQP",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: STS\n\n\nMedical Question Pairs dataset by McCreery et al (2020) contains pairs of medical questions and paraphrased versions of \nthe question prepared by medical professional. Paraphrased versions were labelled as similar (syntactically dissimilar \nbut contextually similar ) or dissimilar (syntactically may look similar but contextually dissimilar). Labels 1: similar, 0: dissimilar"
] |
b88df31cef04a90eae9e512d1de2b58104659252 |
# Dataset Card for MSH WSD
## Dataset Description
- **Homepage:** https://lhncbc.nlm.nih.gov/ii/areas/WSD/collaboration.html
- **Pubmed:** True
- **Public:** False
- **Tasks:** NED
Evaluation of Word Sense Disambiguation methods (WSD) in the biomedical domain is difficult because the available
resources are either too small or too focused on specific types of entities (e.g. diseases or genes). We have
developed a method that can be used to automatically develop a WSD test collection using the Unified Medical Language
System (UMLS) Metathesaurus and the manual MeSH indexing of MEDLINE. The resulting dataset is called MSH WSD and
consists of 106 ambiguous abbreviations, 88 ambiguous terms and 9 which are a combination of both, for a total of 203
ambiguous words. Each instance containing the ambiguous word was assigned a CUI from the 2009AB version of the UMLS.
For each ambiguous term/abbreviation, the data set contains a maximum of 100 instances per sense obtained from
MEDLINE; totaling 37,888 ambiguity cases in 37,090 MEDLINE citations.
## Citation Information
```
@article{jimeno2011exploiting,
title={Exploiting MeSH indexing in MEDLINE to generate a data set for word sense disambiguation},
author={Jimeno-Yepes, Antonio J and McInnes, Bridget T and Aronson, Alan R},
journal={BMC bioinformatics},
volume={12},
number={1},
pages={1--14},
year={2011},
publisher={BioMed Central}
}
```
| bigbio/msh_wsd | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-11-13T22:10:11+00:00 | {"language": ["en"], "license": "other", "multilinguality": "monolingual", "pretty_name": "MSH WSD", "bigbio_language": ["English"], "bigbio_license_shortname": "UMLS_LICENSE", "homepage": "https://lhncbc.nlm.nih.gov/ii/areas/WSD/collaboration.html", "bigbio_pubmed": true, "bigbio_public": false, "bigbio_tasks": ["NAMED_ENTITY_DISAMBIGUATION"]} | 2022-12-22T15:45:41+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-other #region-us
|
# Dataset Card for MSH WSD
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: False
- Tasks: NED
Evaluation of Word Sense Disambiguation methods (WSD) in the biomedical domain is difficult because the available
resources are either too small or too focused on specific types of entities (e.g. diseases or genes). We have
developed a method that can be used to automatically develop a WSD test collection using the Unified Medical Language
System (UMLS) Metathesaurus and the manual MeSH indexing of MEDLINE. The resulting dataset is called MSH WSD and
consists of 106 ambiguous abbreviations, 88 ambiguous terms and 9 which are a combination of both, for a total of 203
ambiguous words. Each instance containing the ambiguous word was assigned a CUI from the 2009AB version of the UMLS.
For each ambiguous term/abbreviation, the data set contains a maximum of 100 instances per sense obtained from
MEDLINE; totaling 37,888 ambiguity cases in 37,090 MEDLINE citations.
| [
"# Dataset Card for MSH WSD",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: False\n- Tasks: NED\n\n\nEvaluation of Word Sense Disambiguation methods (WSD) in the biomedical domain is difficult because the available\nresources are either too small or too focused on specific types of entities (e.g. diseases or genes). We have\ndeveloped a method that can be used to automatically develop a WSD test collection using the Unified Medical Language\nSystem (UMLS) Metathesaurus and the manual MeSH indexing of MEDLINE. The resulting dataset is called MSH WSD and\nconsists of 106 ambiguous abbreviations, 88 ambiguous terms and 9 which are a combination of both, for a total of 203\nambiguous words. Each instance containing the ambiguous word was assigned a CUI from the 2009AB version of the UMLS.\nFor each ambiguous term/abbreviation, the data set contains a maximum of 100 instances per sense obtained from\nMEDLINE; totaling 37,888 ambiguity cases in 37,090 MEDLINE citations."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-other #region-us \n",
"# Dataset Card for MSH WSD",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: False\n- Tasks: NED\n\n\nEvaluation of Word Sense Disambiguation methods (WSD) in the biomedical domain is difficult because the available\nresources are either too small or too focused on specific types of entities (e.g. diseases or genes). We have\ndeveloped a method that can be used to automatically develop a WSD test collection using the Unified Medical Language\nSystem (UMLS) Metathesaurus and the manual MeSH indexing of MEDLINE. The resulting dataset is called MSH WSD and\nconsists of 106 ambiguous abbreviations, 88 ambiguous terms and 9 which are a combination of both, for a total of 203\nambiguous words. Each instance containing the ambiguous word was assigned a CUI from the 2009AB version of the UMLS.\nFor each ambiguous term/abbreviation, the data set contains a maximum of 100 instances per sense obtained from\nMEDLINE; totaling 37,888 ambiguity cases in 37,090 MEDLINE citations."
] |
a2320cc61201b439153f2c3c7ae42ee09db79c19 |
# Dataset Card for MuchMore
## Dataset Description
- **Homepage:** https://muchmore.dfki.de/resources1.htm
- **Pubmed:** True
- **Public:** True
- **Tasks:** TRANSL,NER,NED,RE
The corpus used in the MuchMore project is a parallel corpus of English-German scientific
medical abstracts obtained from the Springer Link web site. The corpus consists of
approximately 1 million tokens for each language. Abstracts are from 41 medical
journals, each of which constitutes a relatively homogeneous medical sub-domain (e.g.
Neurology, Radiology, etc.). The corpus of downloaded HTML documents is normalized in
various ways, in order to produce a clean, plain text version, consisting of a title, abstract
and keywords. Additionally, the corpus was aligned on the sentence level.
Automatic (!) annotation includes: Part-of-Speech; Morphology (inflection and
decomposition); Chunks; Semantic Classes (UMLS: Unified Medical Language System,
MeSH: Medical Subject Headings, EuroWordNet); Semantic Relations from UMLS.
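Since the corpus is aligned on the sentence level, parallel examples can be paired roughly as follows (a sketch assuming a strict 1:1 alignment; the real alignment files may also contain 1:n and n:1 links):

```python
def pair_aligned(en_sentences, de_sentences):
    """Pair sentences from a sentence-aligned parallel abstract.

    Assumes a strict 1:1 alignment for simplicity; MuchMore's alignment
    may also link one sentence to several on the other side.
    """
    if len(en_sentences) != len(de_sentences):
        raise ValueError("1:1 alignment assumed, but lengths differ")
    return list(zip(en_sentences, de_sentences))

en = ["The study included 20 patients.", "All patients improved."]
de = ["Die Studie umfasste 20 Patienten.", "Alle Patienten verbesserten sich."]
for src, tgt in pair_aligned(en, de):
    print(src, "|||", tgt)
```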
## Citation Information
```
@inproceedings{buitelaar2003multi,
title={A multi-layered, xml-based approach to the integration of linguistic and semantic annotations},
author={Buitelaar, Paul and Declerck, Thierry and Sacaleanu, Bogdan and Vintar, {{S}}pela and Raileanu, Diana and Crispi, Claudia},
booktitle={Proceedings of EACL 2003 Workshop on Language Technology and the Semantic Web (NLPXML'03), Budapest, Hungary},
year={2003}
}
```
| bigbio/muchmore | [
"multilinguality:multilingual",
"language:en",
"language:de",
"license:unknown",
"region:us"
] | 2022-11-13T22:10:14+00:00 | {"language": ["en", "de"], "license": "unknown", "multilinguality": "multilingual", "pretty_name": "MuchMore", "bigbio_language": ["English", "German"], "bigbio_license_shortname": "UNKNOWN", "homepage": "https://muchmore.dfki.de/resources1.htm", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["TRANSLATION", "NAMED_ENTITY_RECOGNITION", "NAMED_ENTITY_DISAMBIGUATION", "RELATION_EXTRACTION"]} | 2022-12-22T15:45:43+00:00 | [] | [
"en",
"de"
] | TAGS
#multilinguality-multilingual #language-English #language-German #license-unknown #region-us
|
# Dataset Card for MuchMore
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: TRANSL,NER,NED,RE
The corpus used in the MuchMore project is a parallel corpus of English-German scientific
medical abstracts obtained from the Springer Link web site. The corpus consists of
approximately 1 million tokens for each language. Abstracts are from 41 medical
journals, each of which constitutes a relatively homogeneous medical sub-domain (e.g.
Neurology, Radiology, etc.). The corpus of downloaded HTML documents is normalized in
various ways, in order to produce a clean, plain text version, consisting of a title, abstract
and keywords. Additionally, the corpus was aligned on the sentence level.
Automatic (!) annotation includes: Part-of-Speech; Morphology (inflection and
decomposition); Chunks; Semantic Classes (UMLS: Unified Medical Language System,
MeSH: Medical Subject Headings, EuroWordNet); Semantic Relations from UMLS.
| [
"# Dataset Card for MuchMore",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: TRANSL,NER,NED,RE\n\n\nThe corpus used in the MuchMore project is a parallel corpus of English-German scientific\nmedical abstracts obtained from the Springer Link web site. The corpus consists\napproximately of 1 million tokens for each language. Abstracts are from 41 medical\njournals, each of which constitutes a relatively homogeneous medical sub-domain (e.g.\nNeurology, Radiology, etc.). The corpus of downloaded HTML documents is normalized in\nvarious ways, in order to produce a clean, plain text version, consisting of a title, abstract\nand keywords. Additionally, the corpus was aligned on the sentence level.\n\nAutomatic (!) annotation includes: Part-of-Speech; Morphology (inflection and\ndecomposition); Chunks; Semantic Classes (UMLS: Unified Medical Language System,\nMeSH: Medical Subject Headings, EuroWordNet); Semantic Relations from UMLS."
] | [
"TAGS\n#multilinguality-multilingual #language-English #language-German #license-unknown #region-us \n",
"# Dataset Card for MuchMore",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: TRANSL,NER,NED,RE\n\n\nThe corpus used in the MuchMore project is a parallel corpus of English-German scientific\nmedical abstracts obtained from the Springer Link web site. The corpus consists\napproximately of 1 million tokens for each language. Abstracts are from 41 medical\njournals, each of which constitutes a relatively homogeneous medical sub-domain (e.g.\nNeurology, Radiology, etc.). The corpus of downloaded HTML documents is normalized in\nvarious ways, in order to produce a clean, plain text version, consisting of a title, abstract\nand keywords. Additionally, the corpus was aligned on the sentence level.\n\nAutomatic (!) annotation includes: Part-of-Speech; Morphology (inflection and\ndecomposition); Chunks; Semantic Classes (UMLS: Unified Medical Language System,\nMeSH: Medical Subject Headings, EuroWordNet); Semantic Relations from UMLS."
] |
8af2dfea3dcd03e255ca90baa621ef188cd6ecb8 |
# Dataset Card for Multi-XScience
## Dataset Description
- **Homepage:** https://github.com/yaolu/Multi-XScience
- **Pubmed:** False
- **Public:** True
- **Tasks:** PARA,SUM
Multi-document summarization is a challenging task for which there exist few large-scale datasets.
We propose Multi-XScience, a large-scale multi-document summarization dataset created from scientific articles.
Multi-XScience introduces a challenging multi-document summarization task: writing the related-work section
of a paper based on its abstract and the articles it references. Our work is inspired by extreme summarization,
a dataset construction protocol that favours abstractive modeling approaches. Descriptive statistics and
empirical results---using several state-of-the-art models trained on the Multi-XScience dataset---reveal
that Multi-XScience is well suited for abstractive models.
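The task takes a paper's abstract plus the abstracts of its references as input; a hedged sketch of flattening one record into a seq2seq input (the field names and the `@cite_N` placeholder convention are assumptions about the schema, not an exact reproduction of it):

```python
def build_model_input(abstract: str, ref_abstracts: dict) -> str:
    """Flatten the query abstract plus cited-paper abstracts into one string,
    the usual input setup for seq2seq baselines on this task.

    The '@cite_N' placeholders mirror how citations are typically marked in
    the related-work targets; field names here are illustrative assumptions.
    """
    parts = [abstract] + [f"{tag}: {ref}" for tag, ref in sorted(ref_abstracts.items())]
    return " ||| ".join(parts)

record = {
    "abstract": "We study multi-document summarization of scientific text.",
    "ref_abstract": {
        "@cite_1": "A survey of abstractive summarization methods.",
        "@cite_2": "Extreme summarization with one-sentence targets.",
    },
}
print(build_model_input(record["abstract"], record["ref_abstract"]))
```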
## Citation Information
```
@misc{https://doi.org/10.48550/arxiv.2010.14235,
doi = {10.48550/ARXIV.2010.14235},
url = {https://arxiv.org/abs/2010.14235},
author = {Lu, Yao and Dong, Yue and Charlin, Laurent},
keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Multi-XScience: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles},
publisher = {arXiv},
year = {2020},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
| bigbio/multi_xscience | [
"multilinguality:monolingual",
"language:en",
"license:mit",
"arxiv:2010.14235",
"region:us"
] | 2022-11-13T22:10:18+00:00 | {"language": ["en"], "license": "mit", "multilinguality": "monolingual", "pretty_name": "Multi-XScience", "bigbio_language": ["English"], "bigbio_license_shortname": "MIT", "homepage": "https://github.com/yaolu/Multi-XScience", "bigbio_pubmed": false, "bigbio_public": true, "bigbio_tasks": ["PARAPHRASING", "SUMMARIZATION"]} | 2022-12-22T15:45:44+00:00 | [
"2010.14235"
] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-mit #arxiv-2010.14235 #region-us
|
# Dataset Card for Multi-XScience
## Dataset Description
- Homepage: URL
- Pubmed: False
- Public: True
- Tasks: PARA,SUM
Multi-document summarization is a challenging task for which there exist few large-scale datasets.
We propose Multi-XScience, a large-scale multi-document summarization dataset created from scientific articles.
Multi-XScience introduces a challenging multi-document summarization task: writing the related-work section
of a paper based on its abstract and the articles it references. Our work is inspired by extreme summarization,
a dataset construction protocol that favours abstractive modeling approaches. Descriptive statistics and
empirical results---using several state-of-the-art models trained on the Multi-XScience dataset---reveal
that Multi-XScience is well suited for abstractive models.
| [
"# Dataset Card for Multi-XScience",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: PARA,SUM\n\n\nMulti-document summarization is a challenging task for which there exists little large-scale datasets. \nWe propose Multi-XScience, a large-scale multi-document summarization dataset created from scientific articles. \nMulti-XScience introduces a challenging multi-document summarization task: writing the related-work section \nof a paper based on its abstract and the articles it references. Our work is inspired by extreme summarization, \na dataset construction protocol that favours abstractive modeling approaches. Descriptive statistics and \nempirical results---using several state-of-the-art models trained on the Multi-XScience dataset---reveal t\nhat Multi-XScience is well suited for abstractive models."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-mit #arxiv-2010.14235 #region-us \n",
"# Dataset Card for Multi-XScience",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: PARA,SUM\n\n\nMulti-document summarization is a challenging task for which there exists little large-scale datasets. \nWe propose Multi-XScience, a large-scale multi-document summarization dataset created from scientific articles. \nMulti-XScience introduces a challenging multi-document summarization task: writing the related-work section \nof a paper based on its abstract and the articles it references. Our work is inspired by extreme summarization, \na dataset construction protocol that favours abstractive modeling approaches. Descriptive statistics and \nempirical results---using several state-of-the-art models trained on the Multi-XScience dataset---reveal t\nhat Multi-XScience is well suited for abstractive models."
] |
4232a5821581785e9ee71763a31d7018c2c8c8e4 |
# Dataset Card for n2c2 2006 De-identification
## Dataset Description
- **Homepage:** https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/
- **Pubmed:** False
- **Public:** False
- **Tasks:** NER
The data for the de-identification challenge came from Partners Healthcare and
included solely medical discharge summaries. We prepared the data for the
challenge by annotating and by replacing all authentic PHI with realistic
surrogates.
Given the above definitions, we marked the authentic PHI in the records in two stages.
In the first stage, we used an automatic system [31]. In the second stage, we validated
the output of the automatic system manually. Three annotators, including undergraduate
and graduate students and a professor, serially made three manual passes over each record.
They marked and discussed the PHI tags they disagreed on and finalized these tags
after discussion.
The original dataset does not have spans for each entity. The spans are
computed in this loader, and the final text that corresponds with the
tags is preserved in the source format.
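A sketch of how such character spans can be recovered from tagged surface strings, assuming mentions are listed in document order (the actual loader logic may differ):

```python
def compute_spans(text: str, phi_mentions: list) -> list:
    """Assign (start, end) character offsets to PHI mentions given in
    document order. Each search resumes where the previous match ended, so
    repeated surface strings map to distinct spans.
    """
    spans, cursor = [], 0
    for mention in phi_mentions:
        start = text.find(mention, cursor)
        if start == -1:
            raise ValueError(f"mention not found: {mention!r}")
        cursor = start + len(mention)
        spans.append((mention, start, cursor))
    return spans

text = "Mr. Smith was seen by Dr. Smith on 2004-01-05."
print(compute_spans(text, ["Smith", "Smith", "2004-01-05"]))
```

Note how the moving `cursor` lets the two occurrences of "Smith" receive different offsets instead of both matching the first one.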
## Citation Information
```
@article{uzuner2007evaluating,
author = {
Uzuner, Özlem and
Luo, Yuan and
Szolovits, Peter
},
title = {Evaluating the State-of-the-Art in Automatic De-identification},
journal = {Journal of the American Medical Informatics Association},
volume = {14},
number = {5},
pages = {550-563},
year = {2007},
month = {09},
url = {https://doi.org/10.1197/jamia.M2444},
doi = {10.1197/jamia.M2444},
eprint = {https://academic.oup.com/jamia/article-pdf/14/5/550/2136261/14-5-550.pdf}
}
```
| bigbio/n2c2_2006_deid | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-11-13T22:10:21+00:00 | {"language": ["en"], "license": "other", "multilinguality": "monolingual", "pretty_name": "n2c2 2006 De-identification", "bigbio_language": ["English"], "bigbio_license_shortname": "DUA", "homepage": "https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/", "bigbio_pubmed": false, "bigbio_public": false, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION"]} | 2022-12-22T15:45:45+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-other #region-us
|
# Dataset Card for n2c2 2006 De-identification
## Dataset Description
- Homepage: URL
- Pubmed: False
- Public: False
- Tasks: NER
The data for the de-identification challenge came from Partners Healthcare and
included solely medical discharge summaries. We prepared the data for the
challenge by annotating and by replacing all authentic PHI with realistic
surrogates.
Given the above definitions, we marked the authentic PHI in the records in two stages.
In the first stage, we used an automatic system [31]. In the second stage, we validated
the output of the automatic system manually. Three annotators, including undergraduate
and graduate students and a professor, serially made three manual passes over each record.
They marked and discussed the PHI tags they disagreed on and finalized these tags
after discussion.
The original dataset does not have spans for each entity. The spans are
computed in this loader, and the final text that corresponds with the
tags is preserved in the source format.
| [
"# Dataset Card for n2c2 2006 De-identification",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: False\n- Tasks: NER\n\n\nThe data for the de-identification challenge came from Partners Healthcare and\nincluded solely medical discharge summaries. We prepared the data for the\nchallengeby annotating and by replacing all authentic PHI with realistic\nsurrogates.\n\nGiven the above definitions, we marked the authentic PHI in the records in two stages.\nIn the first stage, we used an automatic system.31 In the second stage, we validated\nthe output of the automatic system manually. Three annotators, including undergraduate\nand graduate students and a professor, serially made three manual passes over each record.\nThey marked and discussed the PHI tags they disagreed on and finalized these tags\nafter discussion.\n\nThe original dataset does not have spans for each entity. The spans are\ncomputed in this loader and the final text that correspond with the\ntags is preserved in the source format"
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-other #region-us \n",
"# Dataset Card for n2c2 2006 De-identification",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: False\n- Tasks: NER\n\n\nThe data for the de-identification challenge came from Partners Healthcare and\nincluded solely medical discharge summaries. We prepared the data for the\nchallengeby annotating and by replacing all authentic PHI with realistic\nsurrogates.\n\nGiven the above definitions, we marked the authentic PHI in the records in two stages.\nIn the first stage, we used an automatic system.31 In the second stage, we validated\nthe output of the automatic system manually. Three annotators, including undergraduate\nand graduate students and a professor, serially made three manual passes over each record.\nThey marked and discussed the PHI tags they disagreed on and finalized these tags\nafter discussion.\n\nThe original dataset does not have spans for each entity. The spans are\ncomputed in this loader and the final text that correspond with the\ntags is preserved in the source format"
] |
da1be8724e064d46d36eb8ea51dc910eedb2a864 |
# Dataset Card for n2c2 2006 Smoking Status
## Dataset Description
- **Homepage:** https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/
- **Pubmed:** False
- **Public:** False
- **Tasks:** TXTCLASS
The data for the n2c2 2006 smoking challenge consisted of discharge summaries
from Partners HealthCare, which were then de-identified, tokenized, broken into
sentences, converted into XML format, and separated into training and test sets.
Two pulmonologists annotated each record with the smoking status of patients based
strictly on the explicitly stated smoking-related facts in the records. These
annotations constitute the textual judgments of the annotators. The annotators
were asked to classify patient records into five possible smoking status categories:
a past smoker, a current smoker, a smoker, a non-smoker and an unknown. A total of
502 de-identified medical discharge records were used for the smoking challenge.
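A toy sketch of the five-way labeling scheme (the keyword rules are purely illustrative; the actual challenge systems were trained classifiers, not rule lists like this):

```python
SMOKING_CATEGORIES = ["PAST SMOKER", "CURRENT SMOKER", "SMOKER", "NON-SMOKER", "UNKNOWN"]

def naive_smoking_status(summary: str) -> str:
    """Toy keyword heuristic over a discharge summary.

    Order matters: 'quit' and negated phrasings are checked before the bare
    'smok' match. Real challenge systems were far more careful than this.
    """
    text = summary.lower()
    if "quit smoking" in text or "former smoker" in text:
        return "PAST SMOKER"
    if "does not smoke" in text or "denies smoking" in text or "non-smoker" in text:
        return "NON-SMOKER"
    if "smokes" in text or "current smoker" in text:
        return "CURRENT SMOKER"
    if "smok" in text:
        return "SMOKER"
    return "UNKNOWN"

print(naive_smoking_status("Patient quit smoking 10 years ago."))  # -> PAST SMOKER
```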
## Citation Information
```
@article{uzuner2008identifying,
author = {
Uzuner, Ozlem and
Goldstein, Ira and
Luo, Yuan and
Kohane, Isaac
},
title = {Identifying Patient Smoking Status from Medical Discharge Records},
journal = {Journal of the American Medical Informatics Association},
volume = {15},
number = {1},
pages = {14-24},
year = {2008},
month = {01},
url = {https://doi.org/10.1197/jamia.M2408},
doi = {10.1197/jamia.M2408},
eprint = {https://academic.oup.com/jamia/article-pdf/15/1/14/2339646/15-1-14.pdf}
}
```
| bigbio/n2c2_2006_smokers | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-11-13T22:10:24+00:00 | {"language": ["en"], "license": "other", "multilinguality": "monolingual", "pretty_name": "n2c2 2006 Smoking Status", "bigbio_language": ["English"], "bigbio_license_shortname": "DUA", "homepage": "https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/", "bigbio_pubmed": false, "bigbio_public": false, "bigbio_tasks": ["TEXT_CLASSIFICATION"]} | 2022-12-22T15:45:46+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-other #region-us
|
# Dataset Card for n2c2 2006 Smoking Status
## Dataset Description
- Homepage: URL
- Pubmed: False
- Public: False
- Tasks: TXTCLASS
The data for the n2c2 2006 smoking challenge consisted of discharge summaries
from Partners HealthCare, which were then de-identified, tokenized, broken into
sentences, converted into XML format, and separated into training and test sets.
Two pulmonologists annotated each record with the smoking status of patients based
strictly on the explicitly stated smoking-related facts in the records. These
annotations constitute the textual judgments of the annotators. The annotators
were asked to classify patient records into five possible smoking status categories:
a past smoker, a current smoker, a smoker, a non-smoker and an unknown. A total of
502 de-identified medical discharge records were used for the smoking challenge.
| [
"# Dataset Card for n2c2 2006 Smoking Status",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: False\n- Tasks: TXTCLASS\n\n\nThe data for the n2c2 2006 smoking challenge consisted of discharge summaries\nfrom Partners HealthCare, which were then de-identified, tokenized, broken into\nsentences, converted into XML format, and separated into training and test sets.\n\nTwo pulmonologists annotated each record with the smoking status of patients based\nstrictly on the explicitly stated smoking-related facts in the records. These\nannotations constitute the textual judgments of the annotators. The annotators\nwere asked to classify patient records into five possible smoking status categories:\na past smoker, a current smoker, a smoker, a non-smoker and an unknown. A total of\n502 de-identified medical discharge records were used for the smoking challenge."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-other #region-us \n",
"# Dataset Card for n2c2 2006 Smoking Status",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: False\n- Tasks: TXTCLASS\n\n\nThe data for the n2c2 2006 smoking challenge consisted of discharge summaries\nfrom Partners HealthCare, which were then de-identified, tokenized, broken into\nsentences, converted into XML format, and separated into training and test sets.\n\nTwo pulmonologists annotated each record with the smoking status of patients based\nstrictly on the explicitly stated smoking-related facts in the records. These\nannotations constitute the textual judgments of the annotators. The annotators\nwere asked to classify patient records into five possible smoking status categories:\na past smoker, a current smoker, a smoker, a non-smoker and an unknown. A total of\n502 de-identified medical discharge records were used for the smoking challenge."
] |
494533b7efb87d7cd57fc3b26e8acc6bfccafa3b |
# Dataset Card for n2c2 2008 Obesity
## Dataset Description
- **Homepage:** https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/
- **Pubmed:** True
- **Public:** False
- **Tasks:** TXTCLASS
The data for the n2c2 2008 obesity challenge consisted of discharge summaries from
the Partners HealthCare Research Patient Data Repository. These data were chosen
from the discharge summaries of patients who were overweight or diabetic and had
been hospitalized for obesity or diabetes sometime since 12/1/04. De-identification
was performed semi-automatically. All private health information was replaced with
synthetic identifiers.
The data for the challenge were annotated by two obesity experts from the
Massachusetts General Hospital Weight Center. The experts were given a textual task,
which asked them to classify each disease (see list of diseases above) as Present,
Absent, Questionable, or Unmentioned based on explicitly documented information in
the discharge summaries, e.g., the statement “the patient is obese”. The experts were
also given an intuitive task, which asked them to classify each disease as Present,
Absent, or Questionable by applying their intuition and judgment to information in
the discharge summaries.
## Citation Information
```
@article{uzuner2009recognizing,
author = {
Uzuner, Ozlem
},
title = {Recognizing Obesity and Comorbidities in Sparse Data},
journal = {Journal of the American Medical Informatics Association},
volume = {16},
number = {4},
pages = {561-570},
year = {2009},
month = {07},
url = {https://doi.org/10.1197/jamia.M3115},
doi = {10.1197/jamia.M3115},
eprint = {https://academic.oup.com/jamia/article-pdf/16/4/561/2302602/16-4-561.pdf}
}
```
| bigbio/n2c2_2008 | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-11-13T22:10:28+00:00 | {"language": ["en"], "license": "other", "multilinguality": "monolingual", "pretty_name": "n2c2 2008 Obesity", "bigbio_language": ["English"], "bigbio_license_shortname": "DUA", "homepage": "https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/", "bigbio_pubmed": true, "bigbio_public": false, "bigbio_tasks": ["TEXT_CLASSIFICATION"]} | 2022-12-22T15:45:48+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-other #region-us
|
# Dataset Card for n2c2 2008 Obesity
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: False
- Tasks: TXTCLASS
The data for the n2c2 2008 obesity challenge consisted of discharge summaries from
the Partners HealthCare Research Patient Data Repository. These data were chosen
from the discharge summaries of patients who were overweight or diabetic and had
been hospitalized for obesity or diabetes sometime since 12/1/04. De-identification
was performed semi-automatically. All private health information was replaced with
synthetic identifiers.
The data for the challenge were annotated by two obesity experts from the
Massachusetts General Hospital Weight Center. The experts were given a textual task,
which asked them to classify each disease (see list of diseases above) as Present,
Absent, Questionable, or Unmentioned based on explicitly documented information in
the discharge summaries, e.g., the statement “the patient is obese”. The experts were
also given an intuitive task, which asked them to classify each disease as Present,
Absent, or Questionable by applying their intuition and judgment to information in
the discharge summaries.
| [
"# Dataset Card for n2c2 2008 Obesity",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: False\n- Tasks: TXTCLASS\n\n\nThe data for the n2c2 2008 obesity challenge consisted of discharge summaries from\nthe Partners HealthCare Research Patient Data Repository. These data were chosen \nfrom the discharge summaries of patients who were overweight or diabetic and had \nbeen hospitalized for obesity or diabetes sometime since 12/1/04. De-identification\nwas performed semi-automatically. All private health information was replaced with\nsynthetic identifiers.\n\nThe data for the challenge were annotated by two obesity experts from the \nMassachusetts General Hospital Weight Center. The experts were given a textual task, \nwhich asked them to classify each disease (see list of diseases above) as Present, \nAbsent, Questionable, or Unmentioned based on explicitly documented information in \nthe discharge summaries, e.g., the statement “the patient is obese”. The experts were \nalso given an intuitive task, which asked them to classify each disease as Present, \nAbsent, or Questionable by applying their intuition and judgment to information in \nthe discharge summaries."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-other #region-us \n",
"# Dataset Card for n2c2 2008 Obesity",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: False\n- Tasks: TXTCLASS\n\n\nThe data for the n2c2 2008 obesity challenge consisted of discharge summaries from\nthe Partners HealthCare Research Patient Data Repository. These data were chosen \nfrom the discharge summaries of patients who were overweight or diabetic and had \nbeen hospitalized for obesity or diabetes sometime since 12/1/04. De-identification\nwas performed semi-automatically. All private health information was replaced with\nsynthetic identifiers.\n\nThe data for the challenge were annotated by two obesity experts from the \nMassachusetts General Hospital Weight Center. The experts were given a textual task, \nwhich asked them to classify each disease (see list of diseases above) as Present, \nAbsent, Questionable, or Unmentioned based on explicitly documented information in \nthe discharge summaries, e.g., the statement “the patient is obese”. The experts were \nalso given an intuitive task, which asked them to classify each disease as Present, \nAbsent, or Questionable by applying their intuition and judgment to information in \nthe discharge summaries."
] |
b07981860e51299c11415f88b69275e19193aa2b |
# Dataset Card for n2c2 2009 Medications
## Dataset Description
- **Homepage:** https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/
- **Pubmed:** True
- **Public:** False
- **Tasks:** NER
The Third i2b2 Workshop on Natural Language Processing Challenges for Clinical Records
focused on the identification of medications, their dosages, modes (routes) of administration,
frequencies, durations, and reasons for administration in discharge summaries.
The third i2b2 challenge—that is, the medication challenge—extends information
extraction to relation extraction; it requires extraction of medications and
medication-related information followed by determination of which medication
belongs to which medication-related details.
The medication challenge was designed as an information extraction task.
The goal, for each discharge summary, was to extract the following information
on medications experienced by the patient:
1. Medications (m): including names, brand names, generics, and collective names of prescription substances,
over the counter medications, and other biological substances for which the patient is the experiencer.
2. Dosages (do): indicating the amount of a medication used in each administration.
3. Modes (mo): indicating the route for administering the medication.
4. Frequencies (f): indicating how often each dose of the medication should be taken.
5. Durations (du): indicating how long the medication is to be administered.
6. Reasons (r): stating the medical reason for which the medication is given.
7. Certainty (c): stating whether the event occurs. Certainty can be expressed by uncertainty words,
e.g., “suggested”, or via modals, e.g., “should” indicates suggestion.
8. Event (e): stating whether the medication is started, stopped, or continued.
9. Temporal (t): stating whether the medication was administered in the past,
is being administered currently, or will be administered in the future, to the extent
that this information is expressed in the tense of the verbs and auxiliary verbs used to express events.
10. List/narrative (ln): indicating whether the medication information appears in a
list structure or in narrative running text in the discharge summary.
The medication challenge asked that systems extract the text corresponding to each of the fields
for each of the mentions of the medications that were experienced by the patients.
The values for the set of fields related to a medication mention, if presented within a
two-line window of the mention, were linked in order to create what we defined as an ‘entry’.
If the value of a field for a mention was not specified within a two-line window,
then the value ‘nm’ for ‘not mentioned’ was entered and the offsets were left unspecified.
Since the dataset annotations were crowd-sourced, they contain various violations that are handled
throughout the data loader by means of exception catching or conditional statements, e.g. the
annotation `anticoagulation`: in the text all words are separated by spaces, which
means words at the end of a sentence will always contain `.` and hence won't be an exact match,
i.e. `anticoagulation` != `anticoagulation.` from doc_id: 818404
## Citation Information
```
@article{DBLP:journals/jamia/UzunerSC10,
author = {
Ozlem Uzuner and
Imre Solti and
Eithon Cadag
},
title = {Extracting medication information from clinical text},
journal = {J. Am. Medical Informatics Assoc.},
volume = {17},
number = {5},
pages = {514--518},
year = {2010},
url = {https://doi.org/10.1136/jamia.2010.003947},
doi = {10.1136/jamia.2010.003947},
timestamp = {Mon, 11 May 2020 22:59:55 +0200},
biburl = {https://dblp.org/rec/journals/jamia/UzunerSC10.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| bigbio/n2c2_2009 | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-11-13T22:10:31+00:00 | {"language": ["en"], "license": "other", "multilinguality": "monolingual", "pretty_name": "n2c2 2009 Medications", "bigbio_language": ["English"], "bigbio_license_shortname": "DUA", "homepage": "https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/", "bigbio_pubmed": true, "bigbio_public": false, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION"]} | 2022-12-22T15:45:50+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-other #region-us
|
# Dataset Card for n2c2 2009 Medications
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: False
- Tasks: NER
The Third i2b2 Workshop on Natural Language Processing Challenges for Clinical Records
focused on the identification of medications, their dosages, modes (routes) of administration,
frequencies, durations, and reasons for administration in discharge summaries.
The third i2b2 challenge—that is, the medication challenge—extends information
extraction to relation extraction; it requires extraction of medications and
medication-related information followed by determination of which medication
belongs to which medication-related details.
The medication challenge was designed as an information extraction task.
The goal, for each discharge summary, was to extract the following information
on medications experienced by the patient:
1. Medications (m): including names, brand names, generics, and collective names of prescription substances,
over the counter medications, and other biological substances for which the patient is the experiencer.
2. Dosages (do): indicating the amount of a medication used in each administration.
3. Modes (mo): indicating the route for administering the medication.
4. Frequencies (f): indicating how often each dose of the medication should be taken.
5. Durations (du): indicating how long the medication is to be administered.
6. Reasons (r): stating the medical reason for which the medication is given.
7. Certainty (c): stating whether the event occurs. Certainty can be expressed by uncertainty words,
e.g., “suggested”, or via modals, e.g., “should” indicates suggestion.
8. Event (e): stating whether the medication is started, stopped, or continued.
9. Temporal (t): stating whether the medication was administered in the past,
is being administered currently, or will be administered in the future, to the extent
that this information is expressed in the tense of the verbs and auxiliary verbs used to express events.
10. List/narrative (ln): indicating whether the medication information appears in a
list structure or in narrative running text in the discharge summary.
The medication challenge asked that systems extract the text corresponding to each of the fields
for each of the mentions of the medications that were experienced by the patients.
The values for the set of fields related to a medication mention, if presented within a
two-line window of the mention, were linked in order to create what we defined as an ‘entry’.
If the value of a field for a mention was not specified within a two-line window,
then the value ‘nm’ for ‘not mentioned’ was entered and the offsets were left unspecified.
Since the dataset annotations were crowd-sourced, they contain various violations that are handled
throughout the data loader by means of exception catching or conditional statements, e.g. the
annotation 'anticoagulation': in the text all words are separated by spaces, which
means words at the end of a sentence will always contain '.' and hence won't be an exact match,
i.e. 'anticoagulation' != 'anticoagulation.' from doc_id: 818404
| [
"# Dataset Card for n2c2 2009 Medications",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: False\n- Tasks: NER\n\n\nThe Third i2b2 Workshop on Natural Language Processing Challenges for Clinical Records\nfocused on the identification of medications, their dosages, modes (routes) of administration,\nfrequencies, durations, and reasons for administration in discharge summaries.\nThe third i2b2 challenge—that is, the medication challenge—extends information\nextraction to relation extraction; it requires extraction of medications and\nmedication-related information followed by determination of which medication\nbelongs to which medication-related details.\n\nThe medication challenge was designed as an information extraction task.\nThe goal, for each discharge summary, was to extract the following information\non medications experienced by the patient:\n1. Medications (m): including names, brand names, generics, and collective names of prescription substances,\nover the counter medications, and other biological substances for which the patient is the experiencer.\n2. Dosages (do): indicating the amount of a medication used in each administration.\n3. Modes (mo): indicating the route for administering the medication.\n4. Frequencies (f): indicating how often each dose of the medication should be taken.\n5. Durations (du): indicating how long the medication is to be administered.\n6. Reasons (r): stating the medical reason for which the medication is given.\n7. Certainty (c): stating whether the event occurs. Certainty can be expressed by uncertainty words,\ne.g., “suggested”, or via modals, e.g., “should” indicates suggestion.\n8. Event (e): stating on whether the medication is started, stopped, or continued.\n9. Temporal (t): stating whether the medication was administered in the past,\nis being administered currently, or will be administered in the future, to the extent\nthat this information is expressed in the tense of the verbs and auxiliary verbs used to express events.\n10. 
List/narrative (ln): indicating whether the medication information appears in a\nlist structure or in narrative running text in the discharge summary.\n\nThe medication challenge asked that systems extract the text corresponding to each of the fields\nfor each of the mentions of the medications that were experienced by the patients.\n\nThe values for the set of fields related to a medication mention, if presented within a\ntwo-line window of the mention, were linked in order to create what we defined as an ‘entry’.\nIf the value of a field for a mention were not specified within a two-line window,\nthen the value ‘nm’ for ‘not mentioned’ was entered and the offsets were left unspecified.\n\nSince the dataset annotations were crowd-sourced, it contains various violations that are handled\nthroughout the data loader via means of exception catching or conditional statements. e.g.\nannotation: anticoagulation, while in text all words are to be separated by space which\nmeans words at end of sentence will always contain '.' and hence won't be an exact match\ni.e. 'anticoagulation' != 'anticoagulation.' from doc_id: 818404"
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-other #region-us \n",
"# Dataset Card for n2c2 2009 Medications",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: False\n- Tasks: NER\n\n\nThe Third i2b2 Workshop on Natural Language Processing Challenges for Clinical Records\nfocused on the identification of medications, their dosages, modes (routes) of administration,\nfrequencies, durations, and reasons for administration in discharge summaries.\nThe third i2b2 challenge—that is, the medication challenge—extends information\nextraction to relation extraction; it requires extraction of medications and\nmedication-related information followed by determination of which medication\nbelongs to which medication-related details.\n\nThe medication challenge was designed as an information extraction task.\nThe goal, for each discharge summary, was to extract the following information\non medications experienced by the patient:\n1. Medications (m): including names, brand names, generics, and collective names of prescription substances,\nover the counter medications, and other biological substances for which the patient is the experiencer.\n2. Dosages (do): indicating the amount of a medication used in each administration.\n3. Modes (mo): indicating the route for administering the medication.\n4. Frequencies (f): indicating how often each dose of the medication should be taken.\n5. Durations (du): indicating how long the medication is to be administered.\n6. Reasons (r): stating the medical reason for which the medication is given.\n7. Certainty (c): stating whether the event occurs. Certainty can be expressed by uncertainty words,\ne.g., “suggested”, or via modals, e.g., “should” indicates suggestion.\n8. Event (e): stating on whether the medication is started, stopped, or continued.\n9. Temporal (t): stating whether the medication was administered in the past,\nis being administered currently, or will be administered in the future, to the extent\nthat this information is expressed in the tense of the verbs and auxiliary verbs used to express events.\n10. 
List/narrative (ln): indicating whether the medication information appears in a\nlist structure or in narrative running text in the discharge summary.\n\nThe medication challenge asked that systems extract the text corresponding to each of the fields\nfor each of the mentions of the medications that were experienced by the patients.\n\nThe values for the set of fields related to a medication mention, if presented within a\ntwo-line window of the mention, were linked in order to create what we defined as an ‘entry’.\nIf the value of a field for a mention were not specified within a two-line window,\nthen the value ‘nm’ for ‘not mentioned’ was entered and the offsets were left unspecified.\n\nSince the dataset annotations were crowd-sourced, it contains various violations that are handled\nthroughout the data loader via means of exception catching or conditional statements. e.g.\nannotation: anticoagulation, while in text all words are to be separated by space which\nmeans words at end of sentence will always contain '.' and hence won't be an exact match\ni.e. 'anticoagulation' != 'anticoagulation.' from doc_id: 818404"
] |
7985666731e1bd0dd46387a4bd6e7799970ad432 |
# Dataset Card for n2c2 2010 Concepts, Assertions, and Relations
## Dataset Description
- **Homepage:** https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/
- **Pubmed:** False
- **Public:** False
- **Tasks:** NER,RE
The i2b2/VA corpus contained de-identified discharge summaries from Beth Israel
Deaconess Medical Center, Partners Healthcare, and University of Pittsburgh Medical
Center (UPMC). In addition, UPMC contributed de-identified progress notes to the
i2b2/VA corpus. This dataset contains the records from Beth Israel and Partners.
The 2010 i2b2/VA Workshop on Natural Language Processing Challenges for Clinical Records comprises three tasks:
1) a concept extraction task focused on the extraction of medical concepts from patient reports;
2) an assertion classification task focused on assigning assertion types for medical problem concepts;
3) a relation classification task focused on assigning relation types that hold between medical problems,
tests, and treatments.
i2b2 and the VA provided an annotated reference standard corpus for the three tasks.
Using this reference standard, 22 systems were developed for concept extraction,
21 for assertion classification, and 16 for relation classification.
## Citation Information
```
@article{DBLP:journals/jamia/UzunerSSD11,
author = {
Ozlem Uzuner and
Brett R. South and
Shuying Shen and
Scott L. DuVall
},
title = {2010 i2b2/VA challenge on concepts, assertions, and relations in clinical
text},
journal = {J. Am. Medical Informatics Assoc.},
volume = {18},
number = {5},
pages = {552--556},
year = {2011},
url = {https://doi.org/10.1136/amiajnl-2011-000203},
doi = {10.1136/amiajnl-2011-000203},
timestamp = {Mon, 11 May 2020 23:00:20 +0200},
biburl = {https://dblp.org/rec/journals/jamia/UzunerSSD11.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| bigbio/n2c2_2010 | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-11-13T22:10:35+00:00 | {"language": ["en"], "license": "other", "multilinguality": "monolingual", "pretty_name": "n2c2 2010 Concepts, Assertions, and Relations", "bigbio_language": ["English"], "bigbio_license_shortname": "DUA", "homepage": "https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/", "bigbio_pubmed": false, "bigbio_public": false, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "RELATION_EXTRACTION"]} | 2022-12-22T15:45:51+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-other #region-us
|
# Dataset Card for n2c2 2010 Concepts, Assertions, and Relations
## Dataset Description
- Homepage: URL
- Pubmed: False
- Public: False
- Tasks: NER,RE
The i2b2/VA corpus contained de-identified discharge summaries from Beth Israel
Deaconess Medical Center, Partners Healthcare, and University of Pittsburgh Medical
Center (UPMC). In addition, UPMC contributed de-identified progress notes to the
i2b2/VA corpus. This dataset contains the records from Beth Israel and Partners.
The 2010 i2b2/VA Workshop on Natural Language Processing Challenges for Clinical Records comprises three tasks:
1) a concept extraction task focused on the extraction of medical concepts from patient reports;
2) an assertion classification task focused on assigning assertion types for medical problem concepts;
3) a relation classification task focused on assigning relation types that hold between medical problems,
tests, and treatments.
i2b2 and the VA provided an annotated reference standard corpus for the three tasks.
Using this reference standard, 22 systems were developed for concept extraction,
21 for assertion classification, and 16 for relation classification.
| [
"# Dataset Card for n2c2 2010 Concepts, Assertions, and Relations",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: False\n- Tasks: NER,RE\n\n\nThe i2b2/VA corpus contained de-identified discharge summaries from Beth Israel\nDeaconess Medical Center, Partners Healthcare, and University of Pittsburgh Medical\nCenter (UPMC). In addition, UPMC contributed de-identified progress notes to the\ni2b2/VA corpus. This dataset contains the records from Beth Israel and Partners.\n\nThe 2010 i2b2/VA Workshop on Natural Language Processing Challenges for Clinical Records comprises three tasks:\n1) a concept extraction task focused on the extraction of medical concepts from patient reports;\n2) an assertion classification task focused on assigning assertion types for medical problem concepts;\n3) a relation classification task focused on assigning relation types that hold between medical problems,\ntests, and treatments.\n\ni2b2 and the VA provided an annotated reference standard corpus for the three tasks.\nUsing this reference standard, 22 systems were developed for concept extraction,\n21 for assertion classification, and 16 for relation classification."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-other #region-us \n",
"# Dataset Card for n2c2 2010 Concepts, Assertions, and Relations",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: False\n- Tasks: NER,RE\n\n\nThe i2b2/VA corpus contained de-identified discharge summaries from Beth Israel\nDeaconess Medical Center, Partners Healthcare, and University of Pittsburgh Medical\nCenter (UPMC). In addition, UPMC contributed de-identified progress notes to the\ni2b2/VA corpus. This dataset contains the records from Beth Israel and Partners.\n\nThe 2010 i2b2/VA Workshop on Natural Language Processing Challenges for Clinical Records comprises three tasks:\n1) a concept extraction task focused on the extraction of medical concepts from patient reports;\n2) an assertion classification task focused on assigning assertion types for medical problem concepts;\n3) a relation classification task focused on assigning relation types that hold between medical problems,\ntests, and treatments.\n\ni2b2 and the VA provided an annotated reference standard corpus for the three tasks.\nUsing this reference standard, 22 systems were developed for concept extraction,\n21 for assertion classification, and 16 for relation classification."
] |
bc3e3f25ee7c6705cbc80417dbe9754c505e0104 |
# Dataset Card for n2c2 2011 Coreference
## Dataset Description
- **Homepage:** https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/
- **Pubmed:** False
- **Public:** False
- **Tasks:** COREF
The i2b2/VA corpus contained de-identified discharge summaries from Beth Israel
Deaconess Medical Center, Partners Healthcare, and University of Pittsburgh Medical
Center (UPMC). In addition, UPMC contributed de-identified progress notes to the
i2b2/VA corpus. This dataset contains the records from Beth Israel and Partners.
The i2b2/VA corpus contained five concept categories: problem, person, pronoun,
test, and treatment. Each record in the i2b2/VA corpus was annotated by two
independent annotators for coreference pairs. Then the pairs were post-processed
in order to create coreference chains. These chains were presented to an adjudicator,
who resolved the disagreements between the original annotations, and added or deleted
annotations as necessary. The outputs of the adjudicators were then re-adjudicated, with
particular attention being paid to duplicates and enforcing consistency in the annotations.
## Citation Information
```
@article{uzuner2012evaluating,
author = {
Uzuner, Ozlem and
Bodnari, Andreea and
Shen, Shuying and
Forbush, Tyler and
Pestian, John and
South, Brett R
},
title = "{Evaluating the state of the art in coreference resolution for electronic medical records}",
journal = {Journal of the American Medical Informatics Association},
volume = {19},
number = {5},
pages = {786-791},
year = {2012},
month = {02},
issn = {1067-5027},
doi = {10.1136/amiajnl-2011-000784},
url = {https://doi.org/10.1136/amiajnl-2011-000784},
eprint = {https://academic.oup.com/jamia/article-pdf/19/5/786/17374287/19-5-786.pdf},
}
```
| bigbio/n2c2_2011 | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-11-13T22:10:38+00:00 | {"language": ["en"], "license": "other", "multilinguality": "monolingual", "pretty_name": "n2c2 2011 Coreference", "bigbio_language": ["English"], "bigbio_license_shortname": "DUA", "homepage": "https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/", "bigbio_pubmed": false, "bigbio_public": false, "bigbio_tasks": ["COREFERENCE_RESOLUTION"]} | 2022-12-22T15:45:53+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-other #region-us
|
# Dataset Card for n2c2 2011 Coreference
## Dataset Description
- Homepage: URL
- Pubmed: False
- Public: False
- Tasks: COREF
The i2b2/VA corpus contained de-identified discharge summaries from Beth Israel
Deaconess Medical Center, Partners Healthcare, and University of Pittsburgh Medical
Center (UPMC). In addition, UPMC contributed de-identified progress notes to the
i2b2/VA corpus. This dataset contains the records from Beth Israel and Partners.
The i2b2/VA corpus contained five concept categories: problem, person, pronoun,
test, and treatment. Each record in the i2b2/VA corpus was annotated by two
independent annotators for coreference pairs. Then the pairs were post-processed
in order to create coreference chains. These chains were presented to an adjudicator,
who resolved the disagreements between the original annotations, and added or deleted
annotations as necessary. The outputs of the adjudicators were then re-adjudicated, with
particular attention being paid to duplicates and enforcing consistency in the annotations.
| [
"# Dataset Card for n2c2 2011 Coreference",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: False\n- Tasks: COREF\n\n\nThe i2b2/VA corpus contained de-identified discharge summaries from Beth Israel\nDeaconess Medical Center, Partners Healthcare, and University of Pittsburgh Medical\nCenter (UPMC). In addition, UPMC contributed de-identified progress notes to the\ni2b2/VA corpus. This dataset contains the records from Beth Israel and Partners.\n\nThe i2b2/VA corpus contained five concept categories: problem, person, pronoun,\ntest, and treatment. Each record in the i2b2/VA corpus was annotated by two\nindependent annotators for coreference pairs. Then the pairs were post-processed\nin order to create coreference chains. These chains were presented to an adjudicator,\nwho resolved the disagreements between the original annotations, and added or deleted\nannotations as necessary. The outputs of the adjudicators were then re-adjudicated, with\nparticular attention being paid to duplicates and enforcing consistency in the annotations."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-other #region-us \n",
"# Dataset Card for n2c2 2011 Coreference",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: False\n- Tasks: COREF\n\n\nThe i2b2/VA corpus contained de-identified discharge summaries from Beth Israel\nDeaconess Medical Center, Partners Healthcare, and University of Pittsburgh Medical\nCenter (UPMC). In addition, UPMC contributed de-identified progress notes to the\ni2b2/VA corpus. This dataset contains the records from Beth Israel and Partners.\n\nThe i2b2/VA corpus contained five concept categories: problem, person, pronoun,\ntest, and treatment. Each record in the i2b2/VA corpus was annotated by two\nindependent annotators for coreference pairs. Then the pairs were post-processed\nin order to create coreference chains. These chains were presented to an adjudicator,\nwho resolved the disagreements between the original annotations, and added or deleted\nannotations as necessary. The outputs of the adjudicators were then re-adjudicated, with\nparticular attention being paid to duplicates and enforcing consistency in the annotations."
] |
49e034682b309a64fa5e5102c5c293771cf71e70 |
# Dataset Card for n2c2 2014 De-identification
## Dataset Description
- **Homepage:** https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/
- **Pubmed:** False
- **Public:** False
- **Tasks:** NER
The 2014 i2b2/UTHealth Natural Language Processing (NLP) shared task featured two tracks.
The first of these was the de-identification track focused on identifying protected health
information (PHI) in longitudinal clinical narratives.
TRACK 1: NER PHI
HIPAA requires that patient medical records have all identifying information removed in order to
protect patient privacy. There are 18 categories of Protected Health Information (PHI) identifiers of the
patient or of relatives, employers, or household members of the patient that must be removed in order
for a file to be considered de-identified.
In order to de-identify the records, each file has PHI marked up. All PHI has an
XML tag indicating its category and type, where applicable. For the purposes of this task,
the 18 HIPAA categories have been grouped into 6 main categories and 25 sub categories
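Since the card describes PHI marked up with an XML tag giving its category and type, span extraction from a record can be sketched as below. The tag format used here (`<PHI TYPE="...">...</PHI>`) is a simplified assumption; the actual corpus schema may differ.

```python
import re

# Pull (category, text) PHI spans out of an inline-tagged record.
# The tag shape is an assumption for illustration only.
PHI_TAG = re.compile(r'<PHI TYPE="([^"]+)">(.*?)</PHI>')

def extract_phi(text):
    return [(m.group(1), m.group(2)) for m in PHI_TAG.finditer(text)]

record = 'Seen on <PHI TYPE="DATE">01/02/2067</PHI> by <PHI TYPE="DOCTOR">Dr. Smith</PHI>.'
print(extract_phi(record))  # [('DATE', '01/02/2067'), ('DOCTOR', 'Dr. Smith')]
```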
## Citation Information
```
@article{stubbs2015automated,
title = {Automated systems for the de-identification of longitudinal
clinical narratives: Overview of 2014 i2b2/UTHealth shared task Track 1},
journal = {Journal of Biomedical Informatics},
volume = {58},
pages = {S11-S19},
year = {2015},
issn = {1532-0464},
doi = {https://doi.org/10.1016/j.jbi.2015.06.007},
url = {https://www.sciencedirect.com/science/article/pii/S1532046415001173},
author = {Amber Stubbs and Christopher Kotfila and Özlem Uzuner}
}
```
| bigbio/n2c2_2014_deid | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-11-13T22:10:42+00:00 | {"language": ["en"], "license": "other", "multilinguality": "monolingual", "pretty_name": "n2c2 2014 De-identification", "bigbio_language": ["English"], "bigbio_license_shortname": "DUA", "homepage": "https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/", "bigbio_pubmed": false, "bigbio_public": false, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION"]} | 2022-12-22T15:45:57+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-other #region-us
|
# Dataset Card for n2c2 2014 De-identification
## Dataset Description
- Homepage: URL
- Pubmed: False
- Public: False
- Tasks: NER
The 2014 i2b2/UTHealth Natural Language Processing (NLP) shared task featured two tracks.
The first of these was the de-identification track focused on identifying protected health
information (PHI) in longitudinal clinical narratives.
TRACK 1: NER PHI
HIPAA requires that patient medical records have all identifying information removed in order to
protect patient privacy. There are 18 categories of Protected Health Information (PHI) identifiers of the
patient or of relatives, employers, or household members of the patient that must be removed in order
for a file to be considered de-identified.
In order to de-identify the records, each file has PHI marked up. All PHI has an
XML tag indicating its category and type, where applicable. For the purposes of this task,
the 18 HIPAA categories have been grouped into 6 main categories and 25 sub categories
| [
"# Dataset Card for n2c2 2014 De-identification",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: False\n- Tasks: NER\n\n\nThe 2014 i2b2/UTHealth Natural Language Processing (NLP) shared task featured two tracks.\nThe first of these was the de-identification track focused on identifying protected health\ninformation (PHI) in longitudinal clinical narratives.\n\nTRACK 1: NER PHI\n\nHIPAA requires that patient medical records have all identifying information removed in order to\nprotect patient privacy. There are 18 categories of Protected Health Information (PHI) identifiers of the\npatient or of relatives, employers, or household members of the patient that must be removed in order\nfor a file to be considered de-identified.\nIn order to de-identify the records, each file has PHI marked up. All PHI has an\nXML tag indicating its category and type, where applicable. For the purposes of this task,\nthe 18 HIPAA categories have been grouped into 6 main categories and 25 sub categories"
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-other #region-us \n",
"# Dataset Card for n2c2 2014 De-identification",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: False\n- Tasks: NER\n\n\nThe 2014 i2b2/UTHealth Natural Language Processing (NLP) shared task featured two tracks.\nThe first of these was the de-identification track focused on identifying protected health\ninformation (PHI) in longitudinal clinical narratives.\n\nTRACK 1: NER PHI\n\nHIPAA requires that patient medical records have all identifying information removed in order to\nprotect patient privacy. There are 18 categories of Protected Health Information (PHI) identifiers of the\npatient or of relatives, employers, or household members of the patient that must be removed in order\nfor a file to be considered de-identified.\nIn order to de-identify the records, each file has PHI marked up. All PHI has an\nXML tag indicating its category and type, where applicable. For the purposes of this task,\nthe 18 HIPAA categories have been grouped into 6 main categories and 25 sub categories"
] |
023597b2861ba80c0077b9d36cc2c3c296d707cb |
# Dataset Card for n2c2 2018 Selection Criteria
## Dataset Description
- **Homepage:** https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/
- **Pubmed:** False
- **Public:** False
- **Tasks:** TXTCLASS
Track 1 of the 2018 National NLP Clinical Challenges shared tasks focused
on identifying which patients in a corpus of longitudinal medical records
meet and do not meet identified selection criteria.
This shared task aimed to determine whether NLP systems could be trained to identify if patients met or did not meet
a set of selection criteria taken from real clinical trials. The selected criteria required measurement detection (
“Any HbA1c value between 6.5 and 9.5%”), inference (“Use of aspirin to prevent myocardial infarction”),
temporal reasoning (“Diagnosis of ketoacidosis in the past year”), and expert judgment to assess (“Major
diabetes-related complication”). For the corpus, we used the dataset of American English, longitudinal clinical
narratives from the 2014 i2b2/UTHealth shared task 4.
The final selected 13 selection criteria are as follows:
1. DRUG-ABUSE: Drug abuse, current or past
2. ALCOHOL-ABUSE: Current alcohol use over weekly recommended limits
3. ENGLISH: Patient must speak English
4. MAKES-DECISIONS: Patient must make their own medical decisions
5. ABDOMINAL: History of intra-abdominal surgery, small or large intestine
resection, or small bowel obstruction.
6. MAJOR-DIABETES: Major diabetes-related complication. For the purposes of
this annotation, we define “major complication” (as opposed to “minor complication”)
as any of the following that are a result of (or strongly correlated with) uncontrolled diabetes:
a. Amputation
b. Kidney damage
c. Skin conditions
d. Retinopathy
e. nephropathy
f. neuropathy
7. ADVANCED-CAD: Advanced cardiovascular disease (CAD).
For the purposes of this annotation, we define “advanced” as having 2 or more of the following:
a. Taking 2 or more medications to treat CAD
b. History of myocardial infarction (MI)
c. Currently experiencing angina
d. Ischemia, past or present
8. MI-6MOS: MI in the past 6 months
9. KETO-1YR: Diagnosis of ketoacidosis in the past year
10. DIETSUPP-2MOS: Taken a dietary supplement (excluding vitamin D) in the past 2 months
11. ASP-FOR-MI: Use of aspirin to prevent MI
12. HBA1C: Any hemoglobin A1c (HbA1c) value between 6.5% and 9.5%
13. CREATININE: Serum creatinine > upper limit of normal
The training set consists of 202 patient records with document-level annotations and 10 records
with textual spans indicating the annotators' evidence for their annotations, while the test set contains 86 records.
Note:
* The inter-annotator average agreement is 84.9%
* Whereabouts of 10 records with textual spans indicating annotator’s evidence are unknown.
However, the author ran a simple script-based validation to check whether any of the tags contained text
in any of the training set files, and they do not, which confirms that at least the train and test sets do not
have any evidence tagged alongside the corresponding tags.
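The measurement-detection criteria above (e.g. criterion 12, HBA1C) are often approached with surface patterns plus a numeric range check. A minimal sketch follows; the regular expression is an illustrative assumption, not the pattern used by any participating system.

```python
import re

# Flag records mentioning an HbA1c value between 6.5 and 9.5%
# (criterion 12). The surface patterns matched are assumptions.
HBA1C = re.compile(
    r'\b(?:HbA1c|hemoglobin A1c|A1c)\b\D{0,20}?(\d+(?:\.\d+)?)\s*%?',
    re.IGNORECASE,
)

def meets_hba1c(text, lo=6.5, hi=9.5):
    return any(lo <= float(m.group(1)) <= hi for m in HBA1C.finditer(text))

print(meets_hba1c("Labs today: HbA1c 7.2%, creatinine 1.0"))  # True
print(meets_hba1c("HbA1c was 10.1% last month"))              # False
```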
## Citation Information
```
@article{DBLP:journals/jamia/StubbsFSHU19,
author = {
Amber Stubbs and
Michele Filannino and
Ergin Soysal and
Samuel Henry and
Ozlem Uzuner
},
title = {Cohort selection for clinical trials: n2c2 2018 shared task track 1},
journal = {J. Am. Medical Informatics Assoc.},
volume = {26},
number = {11},
pages = {1163--1171},
year = {2019},
url = {https://doi.org/10.1093/jamia/ocz163},
doi = {10.1093/jamia/ocz163},
timestamp = {Mon, 15 Jun 2020 16:56:11 +0200},
biburl = {https://dblp.org/rec/journals/jamia/StubbsFSHU19.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| bigbio/n2c2_2018_track1 | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-11-13T22:10:45+00:00 | {"language": ["en"], "license": "other", "multilinguality": "monolingual", "pretty_name": "n2c2 2018 Selection Criteria", "bigbio_language": ["English"], "bigbio_license_shortname": "DUA", "homepage": "https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/", "bigbio_pubmed": false, "bigbio_public": false, "bigbio_tasks": ["TEXT_CLASSIFICATION"]} | 2022-12-22T15:45:59+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-other #region-us
|
# Dataset Card for n2c2 2018 Selection Criteria
## Dataset Description
- Homepage: URL
- Pubmed: False
- Public: False
- Tasks: TXTCLASS
Track 1 of the 2018 National NLP Clinical Challenges shared tasks focused
on identifying which patients in a corpus of longitudinal medical records
meet and do not meet identified selection criteria.
This shared task aimed to determine whether NLP systems could be trained to identify if patients met or did not meet
a set of selection criteria taken from real clinical trials. The selected criteria required measurement detection (
“Any HbA1c value between 6.5 and 9.5%”), inference (“Use of aspirin to prevent myocardial infarction”),
temporal reasoning (“Diagnosis of ketoacidosis in the past year”), and expert judgment to assess (“Major
diabetes-related complication”). For the corpus, we used the dataset of American English, longitudinal clinical
narratives from the 2014 i2b2/UTHealth shared task 4.
The final selected 13 selection criteria are as follows:
1. DRUG-ABUSE: Drug abuse, current or past
2. ALCOHOL-ABUSE: Current alcohol use over weekly recommended limits
3. ENGLISH: Patient must speak English
4. MAKES-DECISIONS: Patient must make their own medical decisions
5. ABDOMINAL: History of intra-abdominal surgery, small or large intestine
resection, or small bowel obstruction.
6. MAJOR-DIABETES: Major diabetes-related complication. For the purposes of
this annotation, we define “major complication” (as opposed to “minor complication”)
as any of the following that are a result of (or strongly correlated with) uncontrolled diabetes:
a. Amputation
b. Kidney damage
c. Skin conditions
d. Retinopathy
e. nephropathy
f. neuropathy
7. ADVANCED-CAD: Advanced cardiovascular disease (CAD).
For the purposes of this annotation, we define “advanced” as having 2 or more of the following:
a. Taking 2 or more medications to treat CAD
b. History of myocardial infarction (MI)
c. Currently experiencing angina
d. Ischemia, past or present
8. MI-6MOS: MI in the past 6 months
9. KETO-1YR: Diagnosis of ketoacidosis in the past year
10. DIETSUPP-2MOS: Taken a dietary supplement (excluding vitamin D) in the past 2 months
11. ASP-FOR-MI: Use of aspirin to prevent MI
12. HBA1C: Any hemoglobin A1c (HbA1c) value between 6.5% and 9.5%
13. CREATININE: Serum creatinine > upper limit of normal
The training set consists of 202 patient records with document-level annotations and 10 records
with textual spans indicating the annotators' evidence for their annotations, while the test set contains 86 records.
Note:
* The inter-annotator average agreement is 84.9%
* Whereabouts of 10 records with textual spans indicating annotator’s evidence are unknown.
However, the author ran a simple script-based validation to check whether any of the tags contained text
in any of the training set files, and they do not, which confirms that at least the train and test sets do not
have any evidence tagged alongside the corresponding tags.
| [
"# Dataset Card for n2c2 2018 Selection Criteria",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: False\n- Tasks: TXTCLASS\n\n\nTrack 1 of the 2018 National NLP Clinical Challenges shared tasks focused\non identifying which patients in a corpus of longitudinal medical records\nmeet and do not meet identified selection criteria.\n\nThis shared task aimed to determine whether NLP systems could be trained to identify if patients met or did not meet\na set of selection criteria taken from real clinical trials. The selected criteria required measurement detection (\n“Any HbA1c value between 6.5 and 9.5%”), inference (“Use of aspirin to prevent myocardial infarction”),\ntemporal reasoning (“Diagnosis of ketoacidosis in the past year”), and expert judgment to assess (“Major\ndiabetes-related complication”). For the corpus, we used the dataset of American English, longitudinal clinical\nnarratives from the 2014 i2b2/UTHealth shared task 4.\n\nThe final selected 13 selection criteria are as follows:\n1. DRUG-ABUSE: Drug abuse, current or past\n2. ALCOHOL-ABUSE: Current alcohol use over weekly recommended limits\n3. ENGLISH: Patient must speak English\n4. MAKES-DECISIONS: Patient must make their own medical decisions\n5. ABDOMINAL: History of intra-abdominal surgery, small or large intestine\nresection, or small bowel obstruction.\n6. MAJOR-DIABETES: Major diabetes-related complication. For the purposes of\nthis annotation, we define “major complication” (as opposed to “minor complication”)\nas any of the following that are a result of (or strongly correlated with) uncontrolled diabetes:\n a. Amputation\n b. Kidney damage\n c. Skin conditions\n d. Retinopathy\n e. nephropathy\n f. neuropathy\n7. ADVANCED-CAD: Advanced cardiovascular disease (CAD).\nFor the purposes of this annotation, we define “advanced” as having 2 or more of the following:\n a. Taking 2 or more medications to treat CAD\n b. History of myocardial infarction (MI)\n c. Currently experiencing angina\n d. Ischemia, past or present\n8. 
MI-6MOS: MI in the past 6 months\n9. KETO-1YR: Diagnosis of ketoacidosis in the past year\n10. DIETSUPP-2MOS: Taken a dietary supplement (excluding vitamin D) in the past 2 months\n11. ASP-FOR-MI: Use of aspirin to prevent MI\n12. HBA1C: Any hemoglobin A1c (HbA1c) value between 6.5% and 9.5%\n13. CREATININE: Serum creatinine > upper limit of normal\n\nThe training consists of 202 patient records with document-level annotations, 10 records\nwith textual spans indicating annotator’s evidence for their annotations while test set contains 86.\n\nNote:\n* The inter-annotator average agreement is 84.9%\n* Whereabouts of 10 records with textual spans indicating annotator’s evidence are unknown.\nHowever, author did a simple script based validation to check if any of the tags contained any text\nin any of the training set and they do not, which confirms that atleast train and test do not\n have any evidence tagged alongside corresponding tags."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-other #region-us \n",
"# Dataset Card for n2c2 2018 Selection Criteria",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: False\n- Tasks: TXTCLASS\n\n\nTrack 1 of the 2018 National NLP Clinical Challenges shared tasks focused\non identifying which patients in a corpus of longitudinal medical records\nmeet and do not meet identified selection criteria.\n\nThis shared task aimed to determine whether NLP systems could be trained to identify if patients met or did not meet\na set of selection criteria taken from real clinical trials. The selected criteria required measurement detection (\n“Any HbA1c value between 6.5 and 9.5%”), inference (“Use of aspirin to prevent myocardial infarction”),\ntemporal reasoning (“Diagnosis of ketoacidosis in the past year”), and expert judgment to assess (“Major\ndiabetes-related complication”). For the corpus, we used the dataset of American English, longitudinal clinical\nnarratives from the 2014 i2b2/UTHealth shared task 4.\n\nThe final selected 13 selection criteria are as follows:\n1. DRUG-ABUSE: Drug abuse, current or past\n2. ALCOHOL-ABUSE: Current alcohol use over weekly recommended limits\n3. ENGLISH: Patient must speak English\n4. MAKES-DECISIONS: Patient must make their own medical decisions\n5. ABDOMINAL: History of intra-abdominal surgery, small or large intestine\nresection, or small bowel obstruction.\n6. MAJOR-DIABETES: Major diabetes-related complication. For the purposes of\nthis annotation, we define “major complication” (as opposed to “minor complication”)\nas any of the following that are a result of (or strongly correlated with) uncontrolled diabetes:\n a. Amputation\n b. Kidney damage\n c. Skin conditions\n d. Retinopathy\n e. nephropathy\n f. neuropathy\n7. ADVANCED-CAD: Advanced cardiovascular disease (CAD).\nFor the purposes of this annotation, we define “advanced” as having 2 or more of the following:\n a. Taking 2 or more medications to treat CAD\n b. History of myocardial infarction (MI)\n c. Currently experiencing angina\n d. Ischemia, past or present\n8. 
MI-6MOS: MI in the past 6 months\n9. KETO-1YR: Diagnosis of ketoacidosis in the past year\n10. DIETSUPP-2MOS: Taken a dietary supplement (excluding vitamin D) in the past 2 months\n11. ASP-FOR-MI: Use of aspirin to prevent MI\n12. HBA1C: Any hemoglobin A1c (HbA1c) value between 6.5% and 9.5%\n13. CREATININE: Serum creatinine > upper limit of normal\n\nThe training consists of 202 patient records with document-level annotations, 10 records\nwith textual spans indicating annotator’s evidence for their annotations while test set contains 86.\n\nNote:\n* The inter-annotator average agreement is 84.9%\n* Whereabouts of 10 records with textual spans indicating annotator’s evidence are unknown.\nHowever, author did a simple script based validation to check if any of the tags contained any text\nin any of the training set and they do not, which confirms that atleast train and test do not\n have any evidence tagged alongside corresponding tags."
] |
d03afd823088e16689cf7e5060f769fc12458681 |
# Dataset Card for n2c2 2018 ADE
## Dataset Description
- **Homepage:** https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/
- **Pubmed:** False
- **Public:** False
- **Tasks:** NER,RE
The National NLP Clinical Challenges (n2c2), organized in 2018, continued the
legacy of i2b2 (Informatics for Biology and the Bedside), adding 2 new tracks and 2
new sets of data to the shared tasks organized since 2006. Track 2 of 2018
n2c2 shared tasks focused on the extraction of medications, with their signature
information, and adverse drug events (ADEs) from clinical narratives.
This track built on our previous medication challenge, but added a special focus on ADEs.
ADEs are injuries resulting from a medical intervention related to a drug and
can include allergic reactions, drug interactions, overdoses, and medication errors.
Collectively, ADEs are estimated to account for 30% of all hospital adverse
events; however, ADEs are preventable. Identifying potential drug interactions,
overdoses, allergies, and errors at the point of care and alerting the caregivers of
potential ADEs can improve health delivery, reduce the risk of ADEs, and improve health
outcomes.
A step in this direction requires processing narratives of clinical records
that often elaborate on the medications given to a patient, as well as the known
allergies, reactions, and adverse events of the patient. Extraction of this information
from narratives complements the structured medication information that can be
obtained from prescriptions, allowing a more thorough assessment of potential ADEs
before they happen.
The 2018 n2c2 shared task Track 2, hereon referred to as the ADE track,
tackled these natural language processing tasks in 3 different steps,
which we refer to as tasks:
1. Concept Extraction: identification of concepts related to medications,
their signature information, and ADEs
2. Relation Classification: linking the previously mentioned concepts to
their medication by identifying relations on gold standard concepts
3. End-to-End: building end-to-end systems that process raw narrative text
to discover concepts and find relations of those concepts to their medications
Shared tasks provide a venue for head-to-head comparison of systems developed
for the same task and on the same data, allowing researchers to identify the state
of the art in a particular task, learn from it, and build on it.
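For the relation-classification step described above (linking concepts to their medication), a common baseline is to attach each attribute or ADE mention to the nearest Drug mention by character offset. The sketch below uses that heuristic; the concept tuples are illustrative, not corpus data.

```python
# Link each non-Drug concept to the nearest Drug concept by offset.
# concepts: list of (start, end, label, text) tuples (illustrative).

def nearest_drug_links(concepts):
    drugs = [c for c in concepts if c[2] == "Drug"]
    links = []
    for c in concepts:
        if c[2] == "Drug" or not drugs:
            continue
        drug = min(drugs, key=lambda d: abs(d[0] - c[0]))
        links.append((c[3], c[2], drug[3]))
    return links

concepts = [
    (0, 10, "Drug", "lisinopril"),
    (11, 16, "Strength", "10 mg"),
    (40, 45, "ADE", "cough"),
]
print(nearest_drug_links(concepts))
```

Real systems replace the distance heuristic with a trained classifier, but the input/output shape is the same.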
## Citation Information
```
@article{DBLP:journals/jamia/HenryBFSU20,
author = {
Sam Henry and
Kevin Buchan and
Michele Filannino and
Amber Stubbs and
Ozlem Uzuner
},
title = {2018 n2c2 shared task on adverse drug events and medication extraction
in electronic health records},
journal = {J. Am. Medical Informatics Assoc.},
volume = {27},
number = {1},
pages = {3--12},
year = {2020},
url = {https://doi.org/10.1093/jamia/ocz166},
doi = {10.1093/jamia/ocz166},
timestamp = {Sat, 30 May 2020 19:53:56 +0200},
biburl = {https://dblp.org/rec/journals/jamia/HenryBFSU20.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| bigbio/n2c2_2018_track2 | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-11-13T22:10:49+00:00 | {"language": ["en"], "license": "other", "multilinguality": "monolingual", "pretty_name": "n2c2 2018 ADE", "bigbio_language": ["English"], "bigbio_license_shortname": "DUA", "homepage": "https://portal.dbmi.hms.harvard.edu/projects/n2c2-nlp/", "bigbio_pubmed": false, "bigbio_public": false, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "RELATION_EXTRACTION"]} | 2022-12-22T15:46:01+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-other #region-us
|
# Dataset Card for n2c2 2018 ADE
## Dataset Description
- Homepage: URL
- Pubmed: False
- Public: False
- Tasks: NER,RE
The National NLP Clinical Challenges (n2c2), organized in 2018, continued the
legacy of i2b2 (Informatics for Biology and the Bedside), adding 2 new tracks and 2
new sets of data to the shared tasks organized since 2006. Track 2 of 2018
n2c2 shared tasks focused on the extraction of medications, with their signature
information, and adverse drug events (ADEs) from clinical narratives.
This track built on our previous medication challenge, but added a special focus on ADEs.
ADEs are injuries resulting from a medical intervention related to a drug and
can include allergic reactions, drug interactions, overdoses, and medication errors.
Collectively, ADEs are estimated to account for 30% of all hospital adverse
events; however, ADEs are preventable. Identifying potential drug interactions,
overdoses, allergies, and errors at the point of care and alerting the caregivers of
potential ADEs can improve health delivery, reduce the risk of ADEs, and improve health
outcomes.
A step in this direction requires processing narratives of clinical records
that often elaborate on the medications given to a patient, as well as the known
allergies, reactions, and adverse events of the patient. Extraction of this information
from narratives complements the structured medication information that can be
obtained from prescriptions, allowing a more thorough assessment of potential ADEs
before they happen.
The 2018 n2c2 shared task Track 2, hereon referred to as the ADE track,
tackled these natural language processing tasks in 3 different steps,
which we refer to as tasks:
1. Concept Extraction: identification of concepts related to medications,
their signature information, and ADEs
2. Relation Classification: linking the previously mentioned concepts to
their medication by identifying relations on gold standard concepts
3. End-to-End: building end-to-end systems that process raw narrative text
to discover concepts and find relations of those concepts to their medications
Shared tasks provide a venue for head-to-head comparison of systems developed
for the same task and on the same data, allowing researchers to identify the state
of the art in a particular task, learn from it, and build on it.
| [
"# Dataset Card for n2c2 2018 ADE",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: False\n- Tasks: NER,RE\n\n\nThe National NLP Clinical Challenges (n2c2), organized in 2018, continued the\nlegacy of i2b2 (Informatics for Biology and the Bedside), adding 2 new tracks and 2\nnew sets of data to the shared tasks organized since 2006. Track 2 of 2018\nn2c2 shared tasks focused on the extraction of medications, with their signature\ninformation, and adverse drug events (ADEs) from clinical narratives.\nThis track built on our previous medication challenge, but added a special focus on ADEs.\n\nADEs are injuries resulting from a medical intervention related to a drugs and\ncan include allergic reactions, drug interactions, overdoses, and medication errors.\nCollectively, ADEs are estimated to account for 30% of all hospital adverse\nevents; however, ADEs are preventable. Identifying potential drug interactions,\noverdoses, allergies, and errors at the point of care and alerting the caregivers of\npotential ADEs can improve health delivery, reduce the risk of ADEs, and improve health\noutcomes.\n\nA step in this direction requires processing narratives of clinical records\nthat often elaborate on the medications given to a patient, as well as the known\nallergies, reactions, and adverse events of the patient. Extraction of this information\nfrom narratives complements the structured medication information that can be\nobtained from prescriptions, allowing a more thorough assessment of potential ADEs\nbefore they happen.\n\nThe 2018 n2c2 shared task Track 2, hereon referred to as the ADE track,\ntackled these natural language processing tasks in 3 different steps,\nwhich we refer to as tasks:\n1. Concept Extraction: identification of concepts related to medications,\ntheir signature information, and ADEs\n2. Relation Classification: linking the previously mentioned concepts to\ntheir medication by identifying relations on gold standard concepts\n3. 
End-to-End: building end-to-end systems that process raw narrative text\nto discover concepts and find relations of those concepts to their medications\n\nShared tasks provide a venue for head-to-head comparison of systems developed\nfor the same task and on the same data, allowing researchers to identify the state\nof the art in a particular task, learn from it, and build on it."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-other #region-us \n",
"# Dataset Card for n2c2 2018 ADE",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: False\n- Tasks: NER,RE\n\n\nThe National NLP Clinical Challenges (n2c2), organized in 2018, continued the\nlegacy of i2b2 (Informatics for Biology and the Bedside), adding 2 new tracks and 2\nnew sets of data to the shared tasks organized since 2006. Track 2 of 2018\nn2c2 shared tasks focused on the extraction of medications, with their signature\ninformation, and adverse drug events (ADEs) from clinical narratives.\nThis track built on our previous medication challenge, but added a special focus on ADEs.\n\nADEs are injuries resulting from a medical intervention related to a drugs and\ncan include allergic reactions, drug interactions, overdoses, and medication errors.\nCollectively, ADEs are estimated to account for 30% of all hospital adverse\nevents; however, ADEs are preventable. Identifying potential drug interactions,\noverdoses, allergies, and errors at the point of care and alerting the caregivers of\npotential ADEs can improve health delivery, reduce the risk of ADEs, and improve health\noutcomes.\n\nA step in this direction requires processing narratives of clinical records\nthat often elaborate on the medications given to a patient, as well as the known\nallergies, reactions, and adverse events of the patient. Extraction of this information\nfrom narratives complements the structured medication information that can be\nobtained from prescriptions, allowing a more thorough assessment of potential ADEs\nbefore they happen.\n\nThe 2018 n2c2 shared task Track 2, hereon referred to as the ADE track,\ntackled these natural language processing tasks in 3 different steps,\nwhich we refer to as tasks:\n1. Concept Extraction: identification of concepts related to medications,\ntheir signature information, and ADEs\n2. Relation Classification: linking the previously mentioned concepts to\ntheir medication by identifying relations on gold standard concepts\n3. 
End-to-End: building end-to-end systems that process raw narrative text\nto discover concepts and find relations of those concepts to their medications\n\nShared tasks provide a venue for head-to-head comparison of systems developed\nfor the same task and on the same data, allowing researchers to identify the state\nof the art in a particular task, learn from it, and build on it."
] |
b96b632b1c1c245b5cb21faaeec8bdac3b67553e |
# Dataset Card for NCBI Disease
## Dataset Description
- **Homepage:** https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,NED
The NCBI disease corpus is fully annotated at the mention and concept level to serve as a research
resource for the biomedical natural language processing community.
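Mention-level NER corpora like this one are frequently consumed as token sequences with BIO tags; a minimal converter from BIO tags back to mention spans is sketched below. The example sentence is illustrative, and the sketch ignores stray `I-` tags that open without a `B-`.

```python
# Convert parallel (tokens, BIO tags) into (start, end, text) spans.

def bio_to_spans(tokens, tags):
    spans, start = [], None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            if start is not None:
                spans.append((start, i, " ".join(tokens[start:i])))
            start = i
        elif tag == "O":
            if start is not None:
                spans.append((start, i, " ".join(tokens[start:i])))
                start = None
        # "I-" continues the currently open span
    if start is not None:
        spans.append((start, len(tags), " ".join(tokens[start:])))
    return spans

tokens = ["Mutations", "cause", "cystic", "fibrosis", "."]
tags = ["O", "O", "B-Disease", "I-Disease", "O"]
print(bio_to_spans(tokens, tags))  # [(2, 4, 'cystic fibrosis')]
```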
## Citation Information
```
@article{Dogan2014NCBIDC,
title = {NCBI disease corpus: A resource for disease name recognition and concept normalization},
author = {Rezarta Islamaj Dogan and Robert Leaman and Zhiyong Lu},
year = 2014,
journal = {Journal of biomedical informatics},
volume = 47,
pages = {1--10}
}
```
| bigbio/ncbi_disease | [
"multilinguality:monolingual",
"language:en",
"license:cc0-1.0",
"region:us"
] | 2022-11-13T22:10:53+00:00 | {"language": ["en"], "license": "cc0-1.0", "multilinguality": "monolingual", "pretty_name": "NCBI Disease", "bigbio_language": ["English"], "bigbio_license_shortname": "CC0_1p0", "homepage": "https://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "NAMED_ENTITY_DISAMBIGUATION"]} | 2023-01-14T03:24:56+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-cc0-1.0 #region-us
|
# Dataset Card for NCBI Disease
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NER,NED
The NCBI disease corpus is fully annotated at the mention and concept level to serve as a research
resource for the biomedical natural language processing community.
| [
"# Dataset Card for NCBI Disease",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,NED\n\n\nThe NCBI disease corpus is fully annotated at the mention and concept level to serve as a research\nresource for the biomedical natural language processing community."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-cc0-1.0 #region-us \n",
"# Dataset Card for NCBI Disease",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,NED\n\n\nThe NCBI disease corpus is fully annotated at the mention and concept level to serve as a research\nresource for the biomedical natural language processing community."
] |
c03750c25daa63162664ac6e92b6b0ca59bebf6e |
# Dataset Card for NLM-Gene
## Dataset Description
- **Homepage:** https://zenodo.org/record/5089049
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,NED
NLM-Gene consists of 550 PubMed articles, from 156 journals, and contains more than 15 thousand unique gene names, corresponding to more than five thousand gene identifiers (NCBI Gene taxonomy). This corpus contains gene annotation data from 28 organisms. The annotated articles contain on average 29 gene names, and 10 gene identifiers per article. These characteristics demonstrate that this article set is an important benchmark dataset to test the accuracy of gene recognition algorithms both on multi-species and ambiguous data. The NLM-Gene corpus will be invaluable for advancing text-mining techniques for gene identification tasks in biomedical text.
## Citation Information
```
@article{islamaj2021nlm,
title = {
NLM-Gene, a richly annotated gold standard dataset for gene entities that
addresses ambiguity and multi-species gene recognition
},
author = {
Islamaj, Rezarta and Wei, Chih-Hsuan and Cissel, David and Miliaras,
Nicholas and Printseva, Olga and Rodionov, Oleg and Sekiya, Keiko and Ward,
Janice and Lu, Zhiyong
},
year = 2021,
journal = {Journal of Biomedical Informatics},
publisher = {Elsevier},
volume = 118,
pages = 103779
}
```
| bigbio/nlm_gene | [
"multilinguality:monolingual",
"language:en",
"license:cc0-1.0",
"region:us"
] | 2022-11-13T22:10:56+00:00 | {"language": ["en"], "license": "cc0-1.0", "multilinguality": "monolingual", "pretty_name": "NLM-Gene", "bigbio_language": ["English"], "bigbio_license_shortname": "CC0_1p0", "homepage": "https://zenodo.org/record/5089049", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "NAMED_ENTITY_DISAMBIGUATION"]} | 2023-03-31T01:10:39+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-cc0-1.0 #region-us
|
# Dataset Card for NLM-Gene
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NER,NED
NLM-Gene consists of 550 PubMed articles, from 156 journals, and contains more than 15 thousand unique gene names, corresponding to more than five thousand gene identifiers (NCBI Gene taxonomy). This corpus contains gene annotation data from 28 organisms. The annotated articles contain on average 29 gene names, and 10 gene identifiers per article. These characteristics demonstrate that this article set is an important benchmark dataset to test the accuracy of gene recognition algorithms both on multi-species and ambiguous data. The NLM-Gene corpus will be invaluable for advancing text-mining techniques for gene identification tasks in biomedical text.
| [
"# Dataset Card for NLM-Gene",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,NED\n\n\nNLM-Gene consists of 550 PubMed articles, from 156 journals, and contains more than 15 thousand unique gene names, corresponding to more than five thousand gene identifiers (NCBI Gene taxonomy). This corpus contains gene annotation data from 28 organisms. The annotated articles contain on average 29 gene names, and 10 gene identifiers per article. These characteristics demonstrate that this article set is an important benchmark dataset to test the accuracy of gene recognition algorithms both on multi-species and ambiguous data. The NLM-Gene corpus will be invaluable for advancing text-mining techniques for gene identification tasks in biomedical text."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-cc0-1.0 #region-us \n",
"# Dataset Card for NLM-Gene",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,NED\n\n\nNLM-Gene consists of 550 PubMed articles, from 156 journals, and contains more than 15 thousand unique gene names, corresponding to more than five thousand gene identifiers (NCBI Gene taxonomy). This corpus contains gene annotation data from 28 organisms. The annotated articles contain on average 29 gene names, and 10 gene identifiers per article. These characteristics demonstrate that this article set is an important benchmark dataset to test the accuracy of gene recognition algorithms both on multi-species and ambiguous data. The NLM-Gene corpus will be invaluable for advancing text-mining techniques for gene identification tasks in biomedical text."
] |
2c6743350412e051208097dbf9cd203dfd881e24 |
# Dataset Card for NLM WSD
## Dataset Description
- **Homepage:** https://lhncbc.nlm.nih.gov/restricted/ii/areas/WSD/index.html
- **Pubmed:** True
- **Public:** False
- **Tasks:** NED
In order to support research investigating the automatic resolution of word sense ambiguity using natural language
processing techniques, we have constructed this test collection of medical text in which the ambiguities were resolved
by hand. Evaluators were asked to examine instances of an ambiguous word and determine the sense intended by selecting
the Metathesaurus concept (if any) that best represents the meaning of that sense. The test collection consists of 50
highly frequent ambiguous UMLS concepts from 1998 MEDLINE. Each of the 50 ambiguous cases has 100 ambiguous instances
randomly selected from the 1998 MEDLINE citations, for a total of 5,000 instances. We had a total of 11 evaluators of
which 8 completed 100% of the 5,000 instances, 1 completed 56%, 1 completed 44%, and the final evaluator completed 12%
of the instances. Evaluations were only used when the evaluators completed all 100 instances for a given ambiguity.
## Citation Information
```
@article{weeber2001developing,
title = "Developing a test collection for biomedical word sense
disambiguation",
author = "Weeber, M and Mork, J G and Aronson, A R",
journal = "Proc AMIA Symp",
pages = "746--750",
year = 2001,
language = "en"
}
```
| bigbio/nlm_wsd | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-11-13T22:10:59+00:00 | {"language": ["en"], "license": "other", "multilinguality": "monolingual", "pretty_name": "NLM WSD", "bigbio_language": ["English"], "bigbio_license_shortname": "UMLS_LICENSE", "homepage": "https://lhncbc.nlm.nih.gov/restricted/ii/areas/WSD/index.html", "bigbio_pubmed": true, "bigbio_public": false, "bigbio_tasks": ["NAMED_ENTITY_DISAMBIGUATION"]} | 2022-12-22T15:46:06+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-other #region-us
|
# Dataset Card for NLM WSD
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: False
- Tasks: NED
In order to support research investigating the automatic resolution of word sense ambiguity using natural language
processing techniques, we have constructed this test collection of medical text in which the ambiguities were resolved
by hand. Evaluators were asked to examine instances of an ambiguous word and determine the sense intended by selecting
the Metathesaurus concept (if any) that best represents the meaning of that sense. The test collection consists of 50
highly frequent ambiguous UMLS concepts from 1998 MEDLINE. Each of the 50 ambiguous cases has 100 ambiguous instances
randomly selected from the 1998 MEDLINE citations, for a total of 5,000 instances. We had a total of 11 evaluators of
which 8 completed 100% of the 5,000 instances, 1 completed 56%, 1 completed 44%, and the final evaluator completed 12%
of the instances. Evaluations were only used when the evaluators completed all 100 instances for a given ambiguity.
| [
"# Dataset Card for NLM WSD",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: False\n- Tasks: NED\n\n\nIn order to support research investigating the automatic resolution of word sense ambiguity using natural language\nprocessing techniques, we have constructed this test collection of medical text in which the ambiguities were resolved\nby hand. Evaluators were asked to examine instances of an ambiguous word and determine the sense intended by selecting\nthe Metathesaurus concept (if any) that best represents the meaning of that sense. The test collection consists of 50\nhighly frequent ambiguous UMLS concepts from 1998 MEDLINE. Each of the 50 ambiguous cases has 100 ambiguous instances\nrandomly selected from the 1998 MEDLINE citations, for a total of 5,000 instances. We had a total of 11 evaluators of\nwhich 8 completed 100% of the 5,000 instances, 1 completed 56%, 1 completed 44%, and the final evaluator completed 12%\nof the instances. Evaluations were only used when the evaluators completed all 100 instances for a given ambiguity."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-other #region-us \n",
"# Dataset Card for NLM WSD",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: False\n- Tasks: NED\n\n\nIn order to support research investigating the automatic resolution of word sense ambiguity using natural language\nprocessing techniques, we have constructed this test collection of medical text in which the ambiguities were resolved\nby hand. Evaluators were asked to examine instances of an ambiguous word and determine the sense intended by selecting\nthe Metathesaurus concept (if any) that best represents the meaning of that sense. The test collection consists of 50\nhighly frequent ambiguous UMLS concepts from 1998 MEDLINE. Each of the 50 ambiguous cases has 100 ambiguous instances\nrandomly selected from the 1998 MEDLINE citations, for a total of 5,000 instances. We had a total of 11 evaluators of\nwhich 8 completed 100% of the 5,000 instances, 1 completed 56%, 1 completed 44%, and the final evaluator completed 12%\nof the instances. Evaluations were only used when the evaluators completed all 100 instances for a given ambiguity."
] |
3ea16ec31a629659cef520c116b17a08db7f3764 |
# Dataset Card for NLM-Chem
## Dataset Description
- **Homepage:** https://biocreative.bioinformatics.udel.edu/tasks/biocreative-vii/track-2
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,NED,TXTCLASS
NLM-Chem corpus consists of 150 full-text articles from the PubMed Central Open Access dataset,
comprising 67 different chemical journals, aiming to cover a general distribution of usage of chemical
names in the biomedical literature.
Articles were selected so that human annotation was most valuable (meaning that they were rich in bio-entities,
and current state-of-the-art named entity recognition systems disagreed on bio-entity recognition).
## Citation Information
```
@Article{islamaj2021nlm,
title={NLM-Chem, a new resource for chemical entity recognition in PubMed full text literature},
author={Islamaj, Rezarta and Leaman, Robert and Kim, Sun and Kwon, Dongseop and Wei, Chih-Hsuan and Comeau, Donald C and Peng, Yifan and Cissel, David and Coss, Cathleen and Fisher, Carol and others},
journal={Scientific Data},
volume={8},
number={1},
pages={1--12},
year={2021},
publisher={Nature Publishing Group}
}
```
| bigbio/nlmchem | [
"multilinguality:monolingual",
"language:en",
"license:cc0-1.0",
"region:us"
] | 2022-11-13T22:11:03+00:00 | {"language": ["en"], "license": "cc0-1.0", "multilinguality": "monolingual", "pretty_name": "NLM-Chem", "bigbio_language": ["English"], "bigbio_license_shortname": "CC0_1p0", "homepage": "https://biocreative.bioinformatics.udel.edu/tasks/biocreative-vii/track-2", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "NAMED_ENTITY_DISAMBIGUATION", "TEXT_CLASSIFICATION"]} | 2022-12-22T15:46:07+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-cc0-1.0 #region-us
|
# Dataset Card for NLM-Chem
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NER,NED,TXTCLASS
NLM-Chem corpus consists of 150 full-text articles from the PubMed Central Open Access dataset,
comprising 67 different chemical journals, aiming to cover a general distribution of usage of chemical
names in the biomedical literature.
Articles were selected so that human annotation was most valuable (meaning that they were rich in bio-entities,
and current state-of-the-art named entity recognition systems disagreed on bio-entity recognition).
| [
"# Dataset Card for NLM-Chem",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,NED,TXTCLASS\n\n\nNLM-Chem corpus consists of 150 full-text articles from the PubMed Central Open Access dataset,\ncomprising 67 different chemical journals, aiming to cover a general distribution of usage of chemical\nnames in the biomedical literature.\nArticles were selected so that human annotation was most valuable (meaning that they were rich in bio-entities,\nand current state-of-the-art named entity recognition systems disagreed on bio-entity recognition)."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-cc0-1.0 #region-us \n",
"# Dataset Card for NLM-Chem",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,NED,TXTCLASS\n\n\nNLM-Chem corpus consists of 150 full-text articles from the PubMed Central Open Access dataset,\ncomprising 67 different chemical journals, aiming to cover a general distribution of usage of chemical\nnames in the biomedical literature.\nArticles were selected so that human annotation was most valuable (meaning that they were rich in bio-entities,\nand current state-of-the-art named entity recognition systems disagreed on bio-entity recognition)."
] |
cb4244c51ecfd387150463d1ad2e1c7dbaad83f4 |
# Dataset Card for NTCIR-13 MedWeb
## Dataset Description
- **Homepage:** http://research.nii.ac.jp/ntcir/permission/ntcir-13/perm-en-MedWeb.html
- **Pubmed:** False
- **Public:** False
- **Tasks:** TRANSL,TXTCLASS
The NTCIR-13 MedWeb (Medical Natural Language Processing for Web Document) task requires
performing multi-label classification in which labels for eight diseases/symptoms must
be assigned to each tweet. Given pseudo-tweets, the outputs are Positive:p or Negative:n
labels for eight diseases/symptoms. The achievements of this task can almost be
directly applied to a fundamental engine for actual applications.
This task provides pseudo-Twitter messages in a cross-language and multi-label corpus,
covering three languages (Japanese, English, and Chinese), and annotated with eight
labels: influenza, diarrhea/stomachache, hay fever, cough/sore throat, headache,
fever, runny nose, and cold.
For more information, see:
http://research.nii.ac.jp/ntcir/permission/ntcir-13/perm-en-MedWeb.html
As this dataset also provides a parallel corpus of pseudo-tweets for English,
Japanese, and Chinese, it can also be used to train translation models between
these three languages.
## Citation Information
```
@article{wakamiya2017overview,
author = {Shoko Wakamiya and Mizuki Morita and Yoshinobu Kano and Tomoko Ohkuma and Eiji Aramaki},
title = {Overview of the NTCIR-13 MedWeb Task},
journal = {Proceedings of the 13th NTCIR Conference on Evaluation of Information Access Technologies (NTCIR-13)},
year = {2017},
url = {
http://research.nii.ac.jp/ntcir/workshop/OnlineProceedings13/pdf/ntcir/01-NTCIR13-OV-MEDWEB-WakamiyaS.pdf
},
}
```
| bigbio/ntcir_13_medweb | [
"multilinguality:multilingual",
"language:en",
"language:zh",
"language:ja",
"license:cc-by-4.0",
"region:us"
] | 2022-11-13T22:11:06+00:00 | {"language": ["en", "zh", "ja"], "license": "cc-by-4.0", "multilinguality": "multilingual", "pretty_name": "NTCIR-13 MedWeb", "bigbio_language": ["English", "Chinese", "Japanese"], "bigbio_license_shortname": "CC_BY_4p0", "homepage": "http://research.nii.ac.jp/ntcir/permission/ntcir-13/perm-en-MedWeb.html", "bigbio_pubmed": false, "bigbio_public": false, "bigbio_tasks": ["TRANSLATION", "TEXT_CLASSIFICATION"]} | 2022-12-22T15:46:09+00:00 | [] | [
"en",
"zh",
"ja"
] | TAGS
#multilinguality-multilingual #language-English #language-Chinese #language-Japanese #license-cc-by-4.0 #region-us
|
# Dataset Card for NTCIR-13 MedWeb
## Dataset Description
- Homepage: URL
- Pubmed: False
- Public: False
- Tasks: TRANSL,TXTCLASS
The NTCIR-13 MedWeb (Medical Natural Language Processing for Web Document) task requires
performing multi-label classification in which labels for eight diseases/symptoms must
be assigned to each tweet. Given pseudo-tweets, the outputs are Positive:p or Negative:n
labels for eight diseases/symptoms. The achievements of this task can almost be
directly applied to a fundamental engine for actual applications.
This task provides pseudo-Twitter messages in a cross-language and multi-label corpus,
covering three languages (Japanese, English, and Chinese), and annotated with eight
labels: influenza, diarrhea/stomachache, hay fever, cough/sore throat, headache,
fever, runny nose, and cold.
For more information, see:
URL
As this dataset also provides a parallel corpus of pseudo-tweets for English,
Japanese, and Chinese, it can also be used to train translation models between
these three languages.
| [
"# Dataset Card for NTCIR-13 MedWeb",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: False\n- Tasks: TRANSL,TXTCLASS\n\n\nThe NTCIR-13 MedWeb (Medical Natural Language Processing for Web Document) task requires\nperforming multi-label classification in which labels for eight diseases/symptoms must\nbe assigned to each tweet. Given pseudo-tweets, the outputs are Positive:p or Negative:n\nlabels for eight diseases/symptoms. The achievements of this task can almost be\ndirectly applied to a fundamental engine for actual applications.\n\nThis task provides pseudo-Twitter messages in a cross-language and multi-label corpus,\ncovering three languages (Japanese, English, and Chinese), and annotated with eight\nlabels: influenza, diarrhea/stomachache, hay fever, cough/sore throat, headache,\nfever, runny nose, and cold.\n\nFor more information, see:\nURL\n\nAs this dataset also provides a parallel corpus of pseudo-tweets for English,\nJapanese, and Chinese, it can also be used to train translation models between\nthese three languages."
] | [
"TAGS\n#multilinguality-multilingual #language-English #language-Chinese #language-Japanese #license-cc-by-4.0 #region-us \n",
"# Dataset Card for NTCIR-13 MedWeb",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: False\n- Tasks: TRANSL,TXTCLASS\n\n\nThe NTCIR-13 MedWeb (Medical Natural Language Processing for Web Document) task requires\nperforming multi-label classification in which labels for eight diseases/symptoms must\nbe assigned to each tweet. Given pseudo-tweets, the outputs are Positive:p or Negative:n\nlabels for eight diseases/symptoms. The achievements of this task can almost be\ndirectly applied to a fundamental engine for actual applications.\n\nThis task provides pseudo-Twitter messages in a cross-language and multi-label corpus,\ncovering three languages (Japanese, English, and Chinese), and annotated with eight\nlabels: influenza, diarrhea/stomachache, hay fever, cough/sore throat, headache,\nfever, runny nose, and cold.\n\nFor more information, see:\nURL\n\nAs this dataset also provides a parallel corpus of pseudo-tweets for English,\nJapanese, and Chinese, it can also be used to train translation models between\nthese three languages."
] |
91785db603fcfebc28a3bb9b52df62968528ad8e |
# Dataset Card for OSIRIS
## Dataset Description
- **Homepage:** https://sites.google.com/site/laurafurlongweb/databases-and-tools/corpora/
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,NED
The OSIRIS corpus is a set of MEDLINE abstracts manually annotated
with human variation mentions. The corpus is distributed under the terms
of the Creative Commons Attribution 3.0 Unported License,
which permits unrestricted use, distribution, and reproduction in any medium,
provided the original work is properly cited (Furlong et al, BMC Bioinformatics 2008, 9:84).
## Citation Information
```
@ARTICLE{Furlong2008,
author = {Laura I Furlong and Holger Dach and Martin Hofmann-Apitius and Ferran Sanz},
title = {OSIRISv1.2: a named entity recognition system for sequence variants
of genes in biomedical literature.},
journal = {BMC Bioinformatics},
year = {2008},
volume = {9},
pages = {84},
doi = {10.1186/1471-2105-9-84},
pii = {1471-2105-9-84},
pmid = {18251998},
timestamp = {2013.01.15},
url = {http://dx.doi.org/10.1186/1471-2105-9-84}
}
```
| bigbio/osiris | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-3.0",
"region:us"
] | 2022-11-13T22:11:10+00:00 | {"language": ["en"], "license": "cc-by-3.0", "multilinguality": "monolingual", "pretty_name": "OSIRIS", "bigbio_language": ["English"], "bigbio_license_shortname": "CC_BY_3p0", "homepage": "https://sites.google.com/site/laurafurlongweb/databases-and-tools/corpora/", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "NAMED_ENTITY_DISAMBIGUATION"]} | 2022-12-22T15:46:10+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-cc-by-3.0 #region-us
|
# Dataset Card for OSIRIS
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NER,NED
The OSIRIS corpus is a set of MEDLINE abstracts manually annotated
with human variation mentions. The corpus is distributed under the terms
of the Creative Commons Attribution 3.0 Unported License,
which permits unrestricted use, distribution, and reproduction in any medium,
provided the original work is properly cited (Furlong et al, BMC Bioinformatics 2008, 9:84).
| [
"# Dataset Card for OSIRIS",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,NED\n\n\nThe OSIRIS corpus is a set of MEDLINE abstracts manually annotated\nwith human variation mentions. The corpus is distributed under the terms\nof the Creative Commons Attribution 3.0 Unported License,\nwhich permits unrestricted use, distribution, and reproduction in any medium,\nprovided the original work is properly cited (Furlong et al, BMC Bioinformatics 2008, 9:84)."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-cc-by-3.0 #region-us \n",
"# Dataset Card for OSIRIS",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,NED\n\n\nThe OSIRIS corpus is a set of MEDLINE abstracts manually annotated\nwith human variation mentions. The corpus is distributed under the terms\nof the Creative Commons Attribution 3.0 Unported License,\nwhich permits unrestricted use, distribution, and reproduction in any medium,\nprovided the original work is properly cited (Furlong et al, BMC Bioinformatics 2008, 9:84)."
] |
02d51cfe956add9301e00b2fe894d21ad961aaec |
# Dataset Card for ParaMed
## Dataset Description
- **Homepage:** https://github.com/boxiangliu/ParaMed
- **Pubmed:** False
- **Public:** True
- **Tasks:** TRANSL
NEJM is a Chinese-English parallel corpus crawled from the New England Journal of Medicine website.
English articles are distributed through https://www.nejm.org/ and Chinese articles are distributed through
http://nejmqianyan.cn/. The corpus contains all article pairs (around 2000 pairs) since 2011.
## Citation Information
```
@article{liu2021paramed,
author = {Liu, Boxiang and Huang, Liang},
title = {ParaMed: a parallel corpus for English–Chinese translation in the biomedical domain},
journal = {BMC Medical Informatics and Decision Making},
volume = {21},
year = {2021},
url = {https://bmcmedinformdecismak.biomedcentral.com/articles/10.1186/s12911-021-01621-8},
doi = {10.1186/s12911-021-01621-8}
}
```
| bigbio/paramed | [
"multilinguality:multilingual",
"language:en",
"language:zh",
"license:cc-by-4.0",
"region:us"
] | 2022-11-13T22:11:13+00:00 | {"language": ["en", "zh"], "license": "cc-by-4.0", "multilinguality": "multilingual", "pretty_name": "ParaMed", "bigbio_language": ["English", "Chinese"], "bigbio_license_shortname": "CC_BY_4p0", "homepage": "https://github.com/boxiangliu/ParaMed", "bigbio_pubmed": false, "bigbio_public": true, "bigbio_tasks": ["TRANSLATION"]} | 2022-12-22T15:46:11+00:00 | [] | [
"en",
"zh"
] | TAGS
#multilinguality-multilingual #language-English #language-Chinese #license-cc-by-4.0 #region-us
|
# Dataset Card for ParaMed
## Dataset Description
- Homepage: URL
- Pubmed: False
- Public: True
- Tasks: TRANSL
NEJM is a Chinese-English parallel corpus crawled from the New England Journal of Medicine website.
English articles are distributed through URL and Chinese articles are distributed through
URL The corpus contains all article pairs (around 2000 pairs) since 2011.
| [
"# Dataset Card for ParaMed",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: TRANSL\n\n\nNEJM is a Chinese-English parallel corpus crawled from the New England Journal of Medicine website. \nEnglish articles are distributed through URL and Chinese articles are distributed through \nURL The corpus contains all article pairs (around 2000 pairs) since 2011."
] | [
"TAGS\n#multilinguality-multilingual #language-English #language-Chinese #license-cc-by-4.0 #region-us \n",
"# Dataset Card for ParaMed",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: TRANSL\n\n\nNEJM is a Chinese-English parallel corpus crawled from the New England Journal of Medicine website. \nEnglish articles are distributed through URL and Chinese articles are distributed through \nURL The corpus contains all article pairs (around 2000 pairs) since 2011."
] |
a4363872c0cb74f0674761536e3d7e87db529d55 |
# Dataset Card for PCR
## Dataset Description
- **Homepage:** http://210.107.182.73/plantchemcorpus.htm
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,EE
A corpus for plant/herb and chemical entities and for the relationships between them. The corpus contains 2218 plant and chemical entities and 600 plant-chemical relationships which are drawn from 1109 sentences in 245 PubMed abstracts.
## Citation Information
```
@article{choi2016corpus,
title = {A corpus for plant-chemical relationships in the biomedical domain},
author = {
Choi, Wonjun and Kim, Baeksoo and Cho, Hyejin and Lee, Doheon and Lee,
Hyunju
},
year = 2016,
journal = {BMC bioinformatics},
publisher = {Springer},
volume = 17,
number = 1,
pages = {1--15}
}
```
| bigbio/pcr | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-11-13T22:11:17+00:00 | {"language": ["en"], "license": "unknown", "multilinguality": "monolingual", "pretty_name": "PCR", "bigbio_language": ["English"], "bigbio_license_shortname": "UNKNOWN", "homepage": "http://210.107.182.73/plantchemcorpus.htm", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "EVENT_EXTRACTION"]} | 2022-12-22T15:46:13+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-unknown #region-us
|
# Dataset Card for PCR
## Dataset Description
- Homepage: http://210.107.182.73/URL
- Pubmed: True
- Public: True
- Tasks: NER,EE
A corpus for plant/herb and chemical entities and for the relationships between them. The corpus contains 2218 plant and chemical entities and 600 plant-chemical relationships which are drawn from 1109 sentences in 245 PubMed abstracts.
| [
"# Dataset Card for PCR",
"## Dataset Description\n\n- Homepage: http://210.107.182.73/URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,EE\n\n\n\nA corpus for plant/herb and chemical entities and for the relationships between them. The corpus contains 2218 plant and chemical entities and 600 plant-chemical relationships which are drawn from 1109 sentences in 245 PubMed abstracts."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-unknown #region-us \n",
"# Dataset Card for PCR",
"## Dataset Description\n\n- Homepage: http://210.107.182.73/URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,EE\n\n\n\nA corpus for plant/herb and chemical entities and for the relationships between them. The corpus contains 2218 plant and chemical entities and 600 plant-chemical relationships which are drawn from 1109 sentences in 245 PubMed abstracts."
] |
c06541eebe286b1f4dd4b967be7ab5598857757c |
# Dataset Card for PDR
## Dataset Description
- **Homepage:** http://gcancer.org/pdr/
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,EE,COREF
The plant-disease relation corpus consists of plant and disease entities and the relations between them, annotated in PubMed abstracts.
The corpus consists of about 2400 plant and disease entities and 300 annotated relations from 179 abstracts.
## Citation Information
```
@article{kim2019corpus,
title={A corpus of plant--disease relations in the biomedical domain},
author={Kim, Baeksoo and Choi, Wonjun and Lee, Hyunju},
journal={PLoS One},
volume={14},
number={8},
pages={e0221582},
year={2019},
publisher={Public Library of Science San Francisco, CA USA}
}
```
| bigbio/pdr | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-11-13T22:11:20+00:00 | {"language": ["en"], "license": "unknown", "multilinguality": "monolingual", "pretty_name": "PDR", "bigbio_language": ["English"], "bigbio_license_shortname": "UNKNOWN", "homepage": "http://gcancer.org/pdr/", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "EVENT_EXTRACTION", "COREFERENCE_RESOLUTION"]} | 2022-12-22T15:46:14+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-unknown #region-us
|
# Dataset Card for PDR
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NER,EE,COREF
The plant-disease relation corpus consists of plant and disease entities and the relations between them, annotated in PubMed abstracts.
The corpus consists of about 2400 plant and disease entities and 300 annotated relations from 179 abstracts.
| [
"# Dataset Card for PDR",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,EE,COREF\n\n\n\nThe plant-disease relation corpus consists of plant and disease entities and the relations between them, annotated in PubMed abstracts.\nThe corpus consists of about 2400 plant and disease entities and 300 annotated relations from 179 abstracts."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-unknown #region-us \n",
"# Dataset Card for PDR",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,EE,COREF\n\n\n\nThe plant-disease relation corpus consists of plant and disease entities and the relations between them, annotated in PubMed abstracts.\nThe corpus consists of about 2400 plant and disease entities and 300 annotated relations from 179 abstracts."
] |
bcfd7ed4cefb5cc157e2495d558ca7f01920963c |
# Dataset Card for PharmaCoNER
## Dataset Description
- **Homepage:** https://temu.bsc.es/pharmaconer/index.php/datasets/
- **Pubmed:** False
- **Public:** True
- **Tasks:** NER,TXTCLASS
### Subtrack 1
PharmaCoNER: Pharmacological Substances, Compounds and Proteins Named Entity Recognition track
This dataset is designed for the PharmaCoNER task, sponsored by Plan de Impulso de las Tecnologías del Lenguaje.
It is a manually classified collection of clinical case studies derived from the Spanish Clinical Case Corpus (SPACCC), an open access electronic library that gathers Spanish medical publications from SciELO (Scientific Electronic Library Online).
The annotation of the entire set of entity mentions was carried out by medicinal chemistry experts and it includes the following 4 entity types: NORMALIZABLES, NO_NORMALIZABLES, PROTEINAS and UNCLEAR.
The PharmaCoNER corpus contains a total of 396,988 words and 1,000 clinical cases that have been randomly sampled into 3 subsets. The training set contains 500 clinical cases, while the development and test sets contain 250 clinical cases each.
For further information, please visit https://temu.bsc.es/pharmaconer/ or send an email to [email protected]
SUBTRACK 1: NER offset and entity type classification
The first subtrack consists of the classical entity-based or instance-based evaluation, which requires that system outputs exactly match the beginning and end locations of each entity tag, as well as the entity annotation type of the gold standard annotations.
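In rough terms, this exact-match criterion amounts to comparing (start, end, type) triples as sets. The sketch below is illustrative only — the function name and toy offsets are invented, and it is not the official evaluation script:

```python
def exact_match_prf(gold, pred):
    """Entity-level scores where a predicted entity counts as correct only
    if its (start, end, type) triple exactly matches a gold annotation."""
    gold_set, pred_set = set(gold), set(pred)
    tp = len(gold_set & pred_set)
    precision = tp / len(pred_set) if pred_set else 0.0
    recall = tp / len(gold_set) if gold_set else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example: the second prediction overlaps the gold span but ends one
# character early, so under exact matching it does not count.
gold = [(0, 9, "PROTEINAS"), (15, 24, "NORMALIZABLES")]
pred = [(0, 9, "PROTEINAS"), (15, 23, "NORMALIZABLES")]
print(exact_match_prf(gold, pred))  # (0.5, 0.5, 0.5)
```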
### Subtrack 2
PharmaCoNER: Pharmacological Substances, Compounds and Proteins Named Entity Recognition track
This dataset is designed for the PharmaCoNER task, sponsored by Plan de Impulso de las Tecnologías del Lenguaje.
It is a manually classified collection of clinical case studies derived from the Spanish Clinical Case Corpus (SPACCC), an open access electronic library that gathers Spanish medical publications from SciELO (Scientific Electronic Library Online).
The annotation of the entire set of entity mentions was carried out by medicinal chemistry experts and it includes the following 4 entity types: NORMALIZABLES, NO_NORMALIZABLES, PROTEINAS and UNCLEAR.
The PharmaCoNER corpus contains a total of 396,988 words and 1,000 clinical cases that have been randomly sampled into 3 subsets. The training set contains 500 clinical cases, while the development and test sets contain 250 clinical cases each.
For further information, please visit https://temu.bsc.es/pharmaconer/ or send an email to [email protected]
SUBTRACK 2: CONCEPT INDEXING
In the second subtask, a list of unique SNOMED concept identifiers has to be generated for each document. The predictions are compared to the manually annotated concept ids corresponding to chemical compounds and pharmacological substances.
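Concept indexing can likewise be viewed as a per-document set comparison between predicted and gold concept identifiers. The sketch below (with made-up identifiers and an invented function name, micro-averaging over a tiny corpus) only illustrates the idea, not the official scorer:

```python
def concept_indexing_micro_pr(gold_by_doc, pred_by_doc):
    """Micro-averaged precision/recall over per-document sets of concept ids."""
    tp = fp = fn = 0
    for doc_id, gold in gold_by_doc.items():
        pred = pred_by_doc.get(doc_id, set())
        tp += len(gold & pred)
        fp += len(pred - gold)
        fn += len(gold - pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Two hypothetical documents with invented SNOMED-style identifiers.
gold = {"doc1": {"387517004", "372687004"}, "doc2": {"108774000"}}
pred = {"doc1": {"387517004", "999999999"}, "doc2": {"108774000"}}
print(concept_indexing_micro_pr(gold, pred))  # ≈ (0.667, 0.667)
```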
### Full Task
PharmaCoNER: Pharmacological Substances, Compounds and Proteins Named Entity Recognition track
This dataset is designed for the PharmaCoNER task, sponsored by Plan de Impulso de las Tecnologías del Lenguaje.
It is a manually classified collection of clinical case studies derived from the Spanish Clinical Case Corpus (SPACCC), an open access electronic library that gathers Spanish medical publications from SciELO (Scientific Electronic Library Online).
The annotation of the entire set of entity mentions was carried out by medicinal chemistry experts and it includes the following 4 entity types: NORMALIZABLES, NO_NORMALIZABLES, PROTEINAS and UNCLEAR.
The PharmaCoNER corpus contains a total of 396,988 words and 1,000 clinical cases that have been randomly sampled into 3 subsets. The training set contains 500 clinical cases, while the development and test sets contain 250 clinical cases each.
For further information, please visit https://temu.bsc.es/pharmaconer/ or send an email to [email protected]
SUBTRACK 1: NER offset and entity type classification
The first subtrack consists of the classical entity-based or instance-based evaluation, which requires that system outputs exactly match the beginning and end locations of each entity tag, as well as the entity annotation type of the gold standard annotations.
SUBTRACK 2: CONCEPT INDEXING
In the second subtask, a list of unique SNOMED concept identifiers has to be generated for each document. The predictions are compared to the manually annotated concept ids corresponding to chemical compounds and pharmacological substances.
## Citation Information
```
@inproceedings{gonzalez2019pharmaconer,
title = "PharmaCoNER: Pharmacological Substances, Compounds and proteins Named Entity Recognition track",
author = "Gonzalez-Agirre, Aitor and
Marimon, Montserrat and
Intxaurrondo, Ander and
Rabal, Obdulia and
Villegas, Marta and
Krallinger, Martin",
booktitle = "Proceedings of The 5th Workshop on BioNLP Open Shared Tasks",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D19-5701",
doi = "10.18653/v1/D19-5701",
pages = "1--10",
}
```
| bigbio/pharmaconer | [
"multilinguality:monolingual",
"language:es",
"license:cc-by-4.0",
"region:us"
] | 2022-11-13T22:11:24+00:00 | {"language": ["es"], "license": "cc-by-4.0", "multilinguality": "monolingual", "pretty_name": "PharmaCoNER", "bigbio_language": ["Spanish"], "bigbio_license_shortname": "CC_BY_4p0", "homepage": "https://temu.bsc.es/pharmaconer/index.php/datasets/", "bigbio_pubmed": false, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "TEXT_CLASSIFICATION"]} | 2022-12-22T15:46:15+00:00 | [] | [
"es"
] | TAGS
#multilinguality-monolingual #language-Spanish #license-cc-by-4.0 #region-us
|
# Dataset Card for PharmaCoNER
## Dataset Description
- Homepage: URL
- Pubmed: False
- Public: True
- Tasks: NER,TXTCLASS
### Subtrack 1
PharmaCoNER: Pharmacological Substances, Compounds and Proteins Named Entity Recognition track
This dataset is designed for the PharmaCoNER task, sponsored by Plan de Impulso de las Tecnologías del Lenguaje.
It is a manually classified collection of clinical case studies derived from the Spanish Clinical Case Corpus (SPACCC), an open access electronic library that gathers Spanish medical publications from SciELO (Scientific Electronic Library Online).
The annotation of the entire set of entity mentions was carried out by medicinal chemistry experts and it includes the following 4 entity types: NORMALIZABLES, NO_NORMALIZABLES, PROTEINAS and UNCLEAR.
The PharmaCoNER corpus contains a total of 396,988 words and 1,000 clinical cases that have been randomly sampled into 3 subsets. The training set contains 500 clinical cases, while the development and test sets contain 250 clinical cases each.
For further information, please visit URL or send an email to encargo-pln-life@URL
SUBTRACK 1: NER offset and entity type classification
The first subtrack consists of the classical entity-based or instance-based evaluation, which requires that system outputs exactly match the beginning and end locations of each entity tag, as well as the entity annotation type of the gold standard annotations.
### Subtrack 2
PharmaCoNER: Pharmacological Substances, Compounds and Proteins Named Entity Recognition track
This dataset is designed for the PharmaCoNER task, sponsored by Plan de Impulso de las Tecnologías del Lenguaje.
It is a manually classified collection of clinical case studies derived from the Spanish Clinical Case Corpus (SPACCC), an open access electronic library that gathers Spanish medical publications from SciELO (Scientific Electronic Library Online).
The annotation of the entire set of entity mentions was carried out by medicinal chemistry experts and it includes the following 4 entity types: NORMALIZABLES, NO_NORMALIZABLES, PROTEINAS and UNCLEAR.
The PharmaCoNER corpus contains a total of 396,988 words and 1,000 clinical cases that have been randomly sampled into 3 subsets. The training set contains 500 clinical cases, while the development and test sets contain 250 clinical cases each.
For further information, please visit URL or send an email to encargo-pln-life@URL
SUBTRACK 2: CONCEPT INDEXING
In the second subtask, a list of unique SNOMED concept identifiers has to be generated for each document. The predictions are compared to the manually annotated concept ids corresponding to chemical compounds and pharmacological substances.
### Full Task
PharmaCoNER: Pharmacological Substances, Compounds and Proteins Named Entity Recognition track
This dataset is designed for the PharmaCoNER task, sponsored by Plan de Impulso de las Tecnologías del Lenguaje.
It is a manually classified collection of clinical case studies derived from the Spanish Clinical Case Corpus (SPACCC), an open access electronic library that gathers Spanish medical publications from SciELO (Scientific Electronic Library Online).
The annotation of the entire set of entity mentions was carried out by medicinal chemistry experts and it includes the following 4 entity types: NORMALIZABLES, NO_NORMALIZABLES, PROTEINAS and UNCLEAR.
The PharmaCoNER corpus contains a total of 396,988 words and 1,000 clinical cases that have been randomly sampled into 3 subsets. The training set contains 500 clinical cases, while the development and test sets contain 250 clinical cases each.
For further information, please visit URL or send an email to encargo-pln-life@URL
SUBTRACK 1: NER offset and entity type classification
The first subtrack consists of the classical entity-based or instance-based evaluation, which requires that system outputs exactly match the beginning and end locations of each entity tag, as well as the entity annotation type of the gold standard annotations.
SUBTRACK 2: CONCEPT INDEXING
In the second subtask, a list of unique SNOMED concept identifiers has to be generated for each document. The predictions are compared to the manually annotated concept ids corresponding to chemical compounds and pharmacological substances.
| [
"# Dataset Card for PharmaCoNER",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: NER,TXTCLASS",
"### Subtrack 1\n\nPharmaCoNER: Pharmacological Substances, Compounds and Proteins Named Entity Recognition track\n\nThis dataset is designed for the PharmaCoNER task, sponsored by Plan de Impulso de las Tecnologías del Lenguaje.\n\nIt is a manually classified collection of clinical case studies derived from the Spanish Clinical Case Corpus (SPACCC), an open access electronic library that gathers Spanish medical publications from SciELO (Scientific Electronic Library Online).\n\nThe annotation of the entire set of entity mentions was carried out by medicinal chemistry experts and it includes the following 4 entity types: NORMALIZABLES, NO_NORMALIZABLES, PROTEINAS and UNCLEAR.\n\nThe PharmaCoNER corpus contains a total of 396,988 words and 1,000 clinical cases that have been randomly sampled into 3 subsets. The training set contains 500 clinical cases, while the development and test sets contain 250 clinical cases each.\n\nFor further information, please visit URL or send an email to encargo-pln-life@URL\n\n\nSUBTRACK 1: NER offset and entity type classification\n\nThe first subtrack consists in the classical entity-based or instanced-based evaluation that requires that system outputs match exactly the beginning and end locations of each entity tag, as well as match the entity annotation type of the gold standard annotations.",
"### Subtrack 2\n\nPharmaCoNER: Pharmacological Substances, Compounds and Proteins Named Entity Recognition track\n\nThis dataset is designed for the PharmaCoNER task, sponsored by Plan de Impulso de las Tecnologías del Lenguaje.\n\nIt is a manually classified collection of clinical case studies derived from the Spanish Clinical Case Corpus (SPACCC), an open access electronic library that gathers Spanish medical publications from SciELO (Scientific Electronic Library Online).\n\nThe annotation of the entire set of entity mentions was carried out by medicinal chemistry experts and it includes the following 4 entity types: NORMALIZABLES, NO_NORMALIZABLES, PROTEINAS and UNCLEAR.\n\nThe PharmaCoNER corpus contains a total of 396,988 words and 1,000 clinical cases that have been randomly sampled into 3 subsets. The training set contains 500 clinical cases, while the development and test sets contain 250 clinical cases each.\n\nFor further information, please visit URL or send an email to encargo-pln-life@URL\n\n\nSUBTRACK 2: CONCEPT INDEXING\n\nIn the second subtask, a list of unique SNOMED concept identifiers have to be generated for each document. The predictions are compared to the manually annotated concept ids corresponding to chemical compounds and pharmacological substances.",
"### Full Task\n\nPharmaCoNER: Pharmacological Substances, Compounds and Proteins Named Entity Recognition track\n\nThis dataset is designed for the PharmaCoNER task, sponsored by Plan de Impulso de las Tecnologías del Lenguaje.\n\nIt is a manually classified collection of clinical case studies derived from the Spanish Clinical Case Corpus (SPACCC), an open access electronic library that gathers Spanish medical publications from SciELO (Scientific Electronic Library Online).\n\nThe annotation of the entire set of entity mentions was carried out by medicinal chemistry experts and it includes the following 4 entity types: NORMALIZABLES, NO_NORMALIZABLES, PROTEINAS and UNCLEAR.\n\nThe PharmaCoNER corpus contains a total of 396,988 words and 1,000 clinical cases that have been randomly sampled into 3 subsets. The training set contains 500 clinical cases, while the development and test sets contain 250 clinical cases each.\n\nFor further information, please visit URL or send an email to encargo-pln-life@URL\n\n\nSUBTRACK 1: NER offset and entity type classification\n\nThe first subtrack consists in the classical entity-based or instanced-based evaluation that requires that system outputs match exactly the beginning and end locations of each entity tag, as well as match the entity annotation type of the gold standard annotations.\n\n\nSUBTRACK 2: CONCEPT INDEXING\n\nIn the second subtask, a list of unique SNOMED concept identifiers have to be generated for each document. The predictions are compared to the manually annotated concept ids corresponding to chemical compounds and pharmacological substances."
] | [
"TAGS\n#multilinguality-monolingual #language-Spanish #license-cc-by-4.0 #region-us \n",
"# Dataset Card for PharmaCoNER",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: NER,TXTCLASS",
"### Subtrack 1\n\nPharmaCoNER: Pharmacological Substances, Compounds and Proteins Named Entity Recognition track\n\nThis dataset is designed for the PharmaCoNER task, sponsored by Plan de Impulso de las Tecnologías del Lenguaje.\n\nIt is a manually classified collection of clinical case studies derived from the Spanish Clinical Case Corpus (SPACCC), an open access electronic library that gathers Spanish medical publications from SciELO (Scientific Electronic Library Online).\n\nThe annotation of the entire set of entity mentions was carried out by medicinal chemistry experts and it includes the following 4 entity types: NORMALIZABLES, NO_NORMALIZABLES, PROTEINAS and UNCLEAR.\n\nThe PharmaCoNER corpus contains a total of 396,988 words and 1,000 clinical cases that have been randomly sampled into 3 subsets. The training set contains 500 clinical cases, while the development and test sets contain 250 clinical cases each.\n\nFor further information, please visit URL or send an email to encargo-pln-life@URL\n\n\nSUBTRACK 1: NER offset and entity type classification\n\nThe first subtrack consists in the classical entity-based or instanced-based evaluation that requires that system outputs match exactly the beginning and end locations of each entity tag, as well as match the entity annotation type of the gold standard annotations.",
"### Subtrack 2\n\nPharmaCoNER: Pharmacological Substances, Compounds and Proteins Named Entity Recognition track\n\nThis dataset is designed for the PharmaCoNER task, sponsored by Plan de Impulso de las Tecnologías del Lenguaje.\n\nIt is a manually classified collection of clinical case studies derived from the Spanish Clinical Case Corpus (SPACCC), an open access electronic library that gathers Spanish medical publications from SciELO (Scientific Electronic Library Online).\n\nThe annotation of the entire set of entity mentions was carried out by medicinal chemistry experts and it includes the following 4 entity types: NORMALIZABLES, NO_NORMALIZABLES, PROTEINAS and UNCLEAR.\n\nThe PharmaCoNER corpus contains a total of 396,988 words and 1,000 clinical cases that have been randomly sampled into 3 subsets. The training set contains 500 clinical cases, while the development and test sets contain 250 clinical cases each.\n\nFor further information, please visit URL or send an email to encargo-pln-life@URL\n\n\nSUBTRACK 2: CONCEPT INDEXING\n\nIn the second subtask, a list of unique SNOMED concept identifiers have to be generated for each document. The predictions are compared to the manually annotated concept ids corresponding to chemical compounds and pharmacological substances.",
"### Full Task\n\nPharmaCoNER: Pharmacological Substances, Compounds and Proteins Named Entity Recognition track\n\nThis dataset is designed for the PharmaCoNER task, sponsored by Plan de Impulso de las Tecnologías del Lenguaje.\n\nIt is a manually classified collection of clinical case studies derived from the Spanish Clinical Case Corpus (SPACCC), an open access electronic library that gathers Spanish medical publications from SciELO (Scientific Electronic Library Online).\n\nThe annotation of the entire set of entity mentions was carried out by medicinal chemistry experts and it includes the following 4 entity types: NORMALIZABLES, NO_NORMALIZABLES, PROTEINAS and UNCLEAR.\n\nThe PharmaCoNER corpus contains a total of 396,988 words and 1,000 clinical cases that have been randomly sampled into 3 subsets. The training set contains 500 clinical cases, while the development and test sets contain 250 clinical cases each.\n\nFor further information, please visit URL or send an email to encargo-pln-life@URL\n\n\nSUBTRACK 1: NER offset and entity type classification\n\nThe first subtrack consists in the classical entity-based or instanced-based evaluation that requires that system outputs match exactly the beginning and end locations of each entity tag, as well as match the entity annotation type of the gold standard annotations.\n\n\nSUBTRACK 2: CONCEPT INDEXING\n\nIn the second subtask, a list of unique SNOMED concept identifiers have to be generated for each document. The predictions are compared to the manually annotated concept ids corresponding to chemical compounds and pharmacological substances."
] |
5490315822791dbbda39931348a2d16a7c360c60 |
# Dataset Card for PICO Annotation
## Dataset Description
- **Homepage:** https://github.com/Markus-Zlabinger/pico-annotation
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER
This dataset contains annotations for Participants, Interventions, and Outcomes (referred to as the PICO task).
For 423 sentences, annotations collected by 3 medical experts are available.
To get the final annotations, we perform majority voting.
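For token-level labels, that majority vote can be sketched as follows (the label scheme and example sentence are invented for illustration):

```python
from collections import Counter

def majority_vote(annotations):
    """Merge equal-length label sequences from several annotators by
    taking the most frequent label at each token position."""
    merged = []
    for token_labels in zip(*annotations):
        label, _count = Counter(token_labels).most_common(1)[0]
        merged.append(label)
    return merged

# Three hypothetical expert annotations over a 4-token sentence,
# with "P" marking Participant tokens and "O" marking everything else.
expert_1 = ["O", "P", "P", "O"]
expert_2 = ["O", "P", "O", "O"]
expert_3 = ["P", "P", "P", "O"]
print(majority_vote([expert_1, expert_2, expert_3]))  # ['O', 'P', 'P', 'O']
```

With three annotators and a binary decision per token there can be no ties; with more label classes a tie-breaking rule would also have to be chosen.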
## Citation Information
```
@inproceedings{zlabinger-etal-2020-effective,
title = "Effective Crowd-Annotation of Participants, Interventions, and Outcomes in the Text of Clinical Trial Reports",
author = {Zlabinger, Markus and
Sabou, Marta and
          Hofst{\"a}tter, Sebastian and
Hanbury, Allan},
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.findings-emnlp.274",
doi = "10.18653/v1/2020.findings-emnlp.274",
pages = "3064--3074",
}
```
| bigbio/pico_extraction | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-11-13T22:11:27+00:00 | {"language": ["en"], "license": "unknown", "multilinguality": "monolingual", "pretty_name": "PICO Annotation", "bigbio_language": ["English"], "bigbio_license_shortname": "UNKNOWN", "homepage": "https://github.com/Markus-Zlabinger/pico-annotation", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION"]} | 2022-12-22T15:46:16+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-unknown #region-us
|
# Dataset Card for PICO Annotation
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NER
This dataset contains annotations for Participants, Interventions, and Outcomes (referred to as the PICO task).
For 423 sentences, annotations collected by 3 medical experts are available.
To get the final annotations, we perform majority voting.
| [
"# Dataset Card for PICO Annotation",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER\n\n\nThis dataset contains annotations for Participants, Interventions, and Outcomes (referred to as PICO task).\nFor 423 sentences, annotations collected by 3 medical experts are available.\nTo get the final annotations, we perform the majority voting."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-unknown #region-us \n",
"# Dataset Card for PICO Annotation",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER\n\n\nThis dataset contains annotations for Participants, Interventions, and Outcomes (referred to as PICO task).\nFor 423 sentences, annotations collected by 3 medical experts are available.\nTo get the final annotations, we perform the majority voting."
] |
24e866707cd9ba5dbfeb0355be8cb9a91bc85190 |
# Dataset Card for PMC-Patients
## Dataset Description
- **Homepage:** https://github.com/zhao-zy15/PMC-Patients
- **Pubmed:** True
- **Public:** True
- **Tasks:** STS
This dataset is used for calculating the similarity between two patient descriptions.
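As a minimal illustration of what a pairwise similarity function over patient notes looks like, here is a bag-of-words cosine similarity. This is not the notion of similarity defined by the dataset authors, and the two notes are invented:

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Bag-of-words cosine similarity between two patient descriptions."""
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

note_1 = "65 year old male with chest pain and shortness of breath"
note_2 = "72 year old female with chest pain radiating to the left arm"
print(round(cosine_similarity(note_1, note_2), 3))  # ≈ 0.435
```

A real system would use a learned text encoder rather than raw token counts; the sketch only shows the shape of the scoring interface.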
## Citation Information
```
@misc{zhao2022pmcpatients,
title={PMC-Patients: A Large-scale Dataset of Patient Notes and Relations Extracted from Case
Reports in PubMed Central},
author={Zhengyun Zhao and Qiao Jin and Sheng Yu},
year={2022},
eprint={2202.13876},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| bigbio/pmc_patients | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-nc-sa-4.0",
"arxiv:2202.13876",
"region:us"
] | 2022-11-13T22:11:31+00:00 | {"language": ["en"], "license": "cc-by-nc-sa-4.0", "multilinguality": "monolingual", "pretty_name": "PMC-Patients", "bigbio_language": ["English"], "bigbio_license_shortname": "CC_BY_NC_SA_4p0", "homepage": "https://github.com/zhao-zy15/PMC-Patients", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["SEMANTIC_SIMILARITY"]} | 2022-12-22T15:46:17+00:00 | [
"2202.13876"
] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-cc-by-nc-sa-4.0 #arxiv-2202.13876 #region-us
|
# Dataset Card for PMC-Patients
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: STS
This dataset is used for calculating the similarity between two patient descriptions.
| [
"# Dataset Card for PMC-Patients",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: STS\n\n\nThis dataset is used for calculating the similarity between two patient descriptions."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-cc-by-nc-sa-4.0 #arxiv-2202.13876 #region-us \n",
"# Dataset Card for PMC-Patients",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: STS\n\n\nThis dataset is used for calculating the similarity between two patient descriptions."
] |
fff51281b7936536de98ff763003ab8b7f700f7a |
# Dataset Card for ProGene
## Dataset Description
- **Homepage:** https://zenodo.org/record/3698568#.YlVHqdNBxeg
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER
The Protein/Gene corpus was developed at the JULIE Lab Jena under supervision of Prof. Udo Hahn.
The executing scientist was Dr. Joachim Wermter.
The main annotator was Dr. Rico Pusch who is an expert in biology.
The corpus was developed in the context of the StemNet project (http://www.stemnet.de/).
## Citation Information
```
@inproceedings{faessler-etal-2020-progene,
title = "{P}ro{G}ene - A Large-scale, High-Quality Protein-Gene Annotated Benchmark Corpus",
author = "Faessler, Erik and
Modersohn, Luise and
Lohr, Christina and
Hahn, Udo",
booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference",
month = may,
year = "2020",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2020.lrec-1.564",
pages = "4585--4596",
abstract = "Genes and proteins constitute the fundamental entities of molecular genetics. We here introduce ProGene (formerly called FSU-PRGE), a corpus that reflects our efforts to cope with this important class of named entities within the framework of a long-lasting large-scale annotation campaign at the Jena University Language {\&} Information Engineering (JULIE) Lab. We assembled the entire corpus from 11 subcorpora covering various biological domains to achieve an overall subdomain-independent corpus. It consists of 3,308 MEDLINE abstracts with over 36k sentences and more than 960k tokens annotated with nearly 60k named entity mentions. Two annotators strove for carefully assigning entity mentions to classes of genes/proteins as well as families/groups, complexes, variants and enumerations of those where genes and proteins are represented by a single class. The main purpose of the corpus is to provide a large body of consistent and reliable annotations for supervised training and evaluation of machine learning algorithms in this relevant domain. Furthermore, we provide an evaluation of two state-of-the-art baseline systems {---} BioBert and flair {---} on the ProGene corpus. We make the evaluation datasets and the trained models available to encourage comparable evaluations of new methods in the future.",
language = "English",
ISBN = "979-10-95546-34-4",
}
```
| bigbio/progene | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-11-13T22:11:35+00:00 | {"language": ["en"], "license": "cc-by-4.0", "multilinguality": "monolingual", "pretty_name": "ProGene", "bigbio_language": ["English"], "bigbio_license_shortname": "CC_BY_4p0", "homepage": "https://zenodo.org/record/3698568#.YlVHqdNBxeg", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION"]} | 2022-12-22T15:46:19+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-cc-by-4.0 #region-us
|
# Dataset Card for ProGene
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NER
The Protein/Gene corpus was developed at the JULIE Lab Jena under supervision of Prof. Udo Hahn.
The executing scientist was Dr. Joachim Wermter.
The main annotator was Dr. Rico Pusch who is an expert in biology.
The corpus was developed in the context of the StemNet project (URL
| [
"# Dataset Card for ProGene",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER\n\n\nThe Protein/Gene corpus was developed at the JULIE Lab Jena under supervision of Prof. Udo Hahn.\nThe executing scientist was Dr. Joachim Wermter.\nThe main annotator was Dr. Rico Pusch who is an expert in biology.\nThe corpus was developed in the context of the StemNet project (URL"
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-cc-by-4.0 #region-us \n",
"# Dataset Card for ProGene",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER\n\n\nThe Protein/Gene corpus was developed at the JULIE Lab Jena under supervision of Prof. Udo Hahn.\nThe executing scientist was Dr. Joachim Wermter.\nThe main annotator was Dr. Rico Pusch who is an expert in biology.\nThe corpus was developed in the context of the StemNet project (URL"
] |
6dad2a16b2aa5d83677440fa8b9f0c9764a58a6d |
# Dataset Card for PsyTAR
## Dataset Description
- **Homepage:** https://www.askapatient.com/research/pharmacovigilance/corpus-ades-psychiatric-medications.asp
- **Pubmed:** False
- **Public:** False
- **Tasks:** NER,TXTCLASS
The "Psychiatric Treatment Adverse Reactions" (PsyTAR) dataset contains 891 drug
reviews posted by patients on "askapatient.com", about the effectiveness and adverse
drug events associated with Zoloft, Lexapro, Cymbalta, and Effexor XR.
This dataset can be used for (multi-label) sentence classification of Adverse Drug
Reaction (ADR), Withdrawal Symptoms (WDs), Sign/Symptoms/Illness (SSIs), Drug
Indications (DIs), Drug Effectiveness (EF), Drug Ineffectiveness (INF) and Others, as well
as for recognition of 5 different types of named entity (in the categories ADRs, WDs,
SSIs and DIs)
## Citation Information
```
@article{Zolnoori2019,
author = {Maryam Zolnoori and
Kin Wah Fung and
Timothy B. Patrick and
Paul Fontelo and
Hadi Kharrazi and
Anthony Faiola and
Yi Shuan Shirley Wu and
Christina E. Eldredge and
Jake Luo and
Mike Conway and
Jiaxi Zhu and
Soo Kyung Park and
Kelly Xu and
Hamideh Moayyed and
Somaieh Goudarzvand},
title = {A systematic approach for developing a corpus of patient reported adverse drug events: A case study for {SSRI} and {SNRI} medications},
journal = {Journal of Biomedical Informatics},
volume = {90},
year = {2019},
url = {https://doi.org/10.1016/j.jbi.2018.12.005},
doi = {10.1016/j.jbi.2018.12.005},
}
```
| bigbio/psytar | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-11-13T22:11:38+00:00 | {"language": ["en"], "license": "cc-by-4.0", "multilinguality": "monolingual", "pretty_name": "PsyTAR", "bigbio_language": ["English"], "bigbio_license_shortname": "CC_BY_4p0", "homepage": "https://www.askapatient.com/research/pharmacovigilance/corpus-ades-psychiatric-medications.asp", "bigbio_pubmed": false, "bigbio_public": false, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "TEXT_CLASSIFICATION"]} | 2022-12-22T15:46:20+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-cc-by-4.0 #region-us
|
# Dataset Card for PsyTAR
## Dataset Description
- Homepage: URL
- Pubmed: False
- Public: False
- Tasks: NER,TXTCLASS
The "Psychiatric Treatment Adverse Reactions" (PsyTAR) dataset contains 891 drug
reviews posted by patients on "URL", about the effectiveness and adverse
drug events associated with Zoloft, Lexapro, Cymbalta, and Effexor XR.
This dataset can be used for (multi-label) sentence classification of Adverse Drug
Reaction (ADR), Withdrawal Symptoms (WDs), Sign/Symptoms/Illness (SSIs), Drug
Indications (DIs), Drug Effectiveness (EF), Drug Ineffectiveness (INF) and Others, as well
as for recognition of 5 different types of named entity (in the categories ADRs, WDs,
SSIs and DIs)
| [
"# Dataset Card for PsyTAR",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: False\n- Tasks: NER,TXTCLASS\n\n\nThe \"Psychiatric Treatment Adverse Reactions\" (PsyTAR) dataset contains 891 drugs\nreviews posted by patients on \"URL\", about the effectiveness and adverse\ndrug events associated with Zoloft, Lexapro, Cymbalta, and Effexor XR.\n\nThis dataset can be used for (multi-label) sentence classification of Adverse Drug\nReaction (ADR), Withdrawal Symptoms (WDs), Sign/Symptoms/Illness (SSIs), Drug\nIndications (DIs), Drug Effectiveness (EF), Drug Infectiveness (INF) and Others, as well\nas for recognition of 5 different types of named entity (in the categories ADRs, WDs,\nSSIs and DIs)"
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-cc-by-4.0 #region-us \n",
"# Dataset Card for PsyTAR",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: False\n- Tasks: NER,TXTCLASS\n\n\nThe \"Psychiatric Treatment Adverse Reactions\" (PsyTAR) dataset contains 891 drugs\nreviews posted by patients on \"URL\", about the effectiveness and adverse\ndrug events associated with Zoloft, Lexapro, Cymbalta, and Effexor XR.\n\nThis dataset can be used for (multi-label) sentence classification of Adverse Drug\nReaction (ADR), Withdrawal Symptoms (WDs), Sign/Symptoms/Illness (SSIs), Drug\nIndications (DIs), Drug Effectiveness (EF), Drug Infectiveness (INF) and Others, as well\nas for recognition of 5 different types of named entity (in the categories ADRs, WDs,\nSSIs and DIs)"
] |
5da57e875c04da612ea91aac26429be382068203 |
# Dataset Card for PUBHEALTH
## Dataset Description
- **Homepage:** https://github.com/neemakot/Health-Fact-Checking/tree/master/data
- **Pubmed:** False
- **Public:** True
- **Tasks:** TXTCLASS
A dataset of 11,832 claims for fact-checking, which are related to a range of health topics
including biomedical subjects (e.g., infectious diseases, stem cell research), government healthcare policy
(e.g., abortion, mental health, women’s health), and other public health-related stories
## Citation Information
```
@article{kotonya2020explainable,
title={Explainable automated fact-checking for public health claims},
author={Kotonya, Neema and Toni, Francesca},
journal={arXiv preprint arXiv:2010.09926},
year={2020}
}
```
| bigbio/pubhealth | [
"multilinguality:monolingual",
"language:en",
"license:mit",
"region:us"
] | 2022-11-13T22:11:42+00:00 | {"language": ["en"], "license": "mit", "multilinguality": "monolingual", "pretty_name": "PUBHEALTH", "bigbio_language": ["English"], "bigbio_license_shortname": "MIT", "homepage": "https://github.com/neemakot/Health-Fact-Checking/tree/master/data", "bigbio_pubmed": false, "bigbio_public": true, "bigbio_tasks": ["TEXT_CLASSIFICATION"]} | 2022-12-22T15:46:21+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-mit #region-us
|
# Dataset Card for PUBHEALTH
## Dataset Description
- Homepage: URL
- Pubmed: False
- Public: True
- Tasks: TXTCLASS
A dataset of 11,832 claims for fact-checking, which are related to a range of health topics
including biomedical subjects (e.g., infectious diseases, stem cell research), government healthcare policy
(e.g., abortion, mental health, women’s health), and other public health-related stories
| [
"# Dataset Card for PUBHEALTH",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: TXTCLASS\n\n\nA dataset of 11,832 claims for fact-checking, which are related to a range of health topics\nincluding biomedical subjects (e.g., infectious diseases, stem cell research), government healthcare policy\n(e.g., abortion, mental health, women’s health), and other public health-related stories"
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-mit #region-us \n",
"# Dataset Card for PUBHEALTH",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: TXTCLASS\n\n\nA dataset of 11,832 claims for fact-checking, which are related to a range of health topics\nincluding biomedical subjects (e.g., infectious diseases, stem cell research), government healthcare policy\n(e.g., abortion, mental health, women’s health), and other public health-related stories"
] |
21aa9affd1564464684c73b8476aa3f9e0ccbeb7 |
# Dataset Card for PubMedQA
## Dataset Description
- **Homepage:** https://github.com/pubmedqa/pubmedqa
- **Pubmed:** True
- **Public:** True
- **Tasks:** QA
PubMedQA is a novel biomedical question answering (QA) dataset collected from PubMed abstracts.
The task of PubMedQA is to answer research biomedical questions with yes/no/maybe using the corresponding abstracts.
PubMedQA has 1k expert-annotated (PQA-L), 61.2k unlabeled (PQA-U) and 211.3k artificially generated QA instances (PQA-A).
Each PubMedQA instance is composed of:
(1) a question which is either an existing research article title or derived from one,
(2) a context which is the corresponding PubMed abstract without its conclusion,
(3) a long answer, which is the conclusion of the abstract and, presumably, answers the research question, and
(4) a yes/no/maybe answer which summarizes the conclusion.
PubMedQA is the first QA dataset where reasoning over biomedical research texts,
especially their quantitative contents, is required to answer the questions.
The PubMedQA dataset comprises 3 different subsets:
 (1) PubMedQA Labeled (PQA-L): A labeled PubMedQA subset comprising 1k manually annotated yes/no/maybe QA instances collected from PubMed articles.
 (2) PubMedQA Artificial (PQA-A): An artificially labelled PubMedQA subset comprising 211.3k PubMed articles with automatically generated questions from the statement titles and yes/no answer labels generated using a simple heuristic.
 (3) PubMedQA Unlabeled (PQA-U): An unlabeled PubMedQA subset comprising 61.2k context-question pairs collected from PubMed articles.
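Since the final answer reduces to three-way classification, a minimal evaluation sketch could look like the following (the label lists are made-up examples; this is not the official evaluation script):

```python
from collections import Counter

# The yes/no/maybe label set follows the task description above;
# the gold/predicted lists below are hypothetical examples.
LABELS = ("yes", "no", "maybe")

def accuracy(gold, pred):
    """Fraction of questions whose predicted label matches the gold label."""
    assert len(gold) == len(pred) and all(p in LABELS for p in pred)
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

gold = ["yes", "no", "maybe", "yes"]
pred = ["yes", "no", "yes", "yes"]
print(accuracy(gold, pred))   # 0.75
print(Counter(gold))          # class balance of the gold labels
```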
## Citation Information
```
@inproceedings{jin2019pubmedqa,
title={PubMedQA: A Dataset for Biomedical Research Question Answering},
author={Jin, Qiao and Dhingra, Bhuwan and Liu, Zhengping and Cohen, William and Lu, Xinghua},
booktitle={Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)},
pages={2567--2577},
year={2019}
}
```
| bigbio/pubmed_qa | [
"multilinguality:monolingual",
"language:en",
"license:mit",
"region:us"
] | 2022-11-13T22:11:45+00:00 | {"language": ["en"], "license": "mit", "multilinguality": "monolingual", "pretty_name": "PubMedQA", "bigbio_language": ["English"], "bigbio_license_shortname": "MIT", "homepage": "https://github.com/pubmedqa/pubmedqa", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["QUESTION_ANSWERING"]} | 2022-12-22T15:46:24+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-mit #region-us
|
# Dataset Card for PubMedQA
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: QA
PubMedQA is a novel biomedical question answering (QA) dataset collected from PubMed abstracts.
The task of PubMedQA is to answer research biomedical questions with yes/no/maybe using the corresponding abstracts.
PubMedQA has 1k expert-annotated (PQA-L), 61.2k unlabeled (PQA-U) and 211.3k artificially generated QA instances (PQA-A).
Each PubMedQA instance is composed of:
(1) a question which is either an existing research article title or derived from one,
(2) a context which is the corresponding PubMed abstract without its conclusion,
(3) a long answer, which is the conclusion of the abstract and, presumably, answers the research question, and
(4) a yes/no/maybe answer which summarizes the conclusion.
PubMedQA is the first QA dataset where reasoning over biomedical research texts,
especially their quantitative contents, is required to answer the questions.
The PubMedQA dataset comprises 3 different subsets:
 (1) PubMedQA Labeled (PQA-L): A labeled PubMedQA subset comprising 1k manually annotated yes/no/maybe QA instances collected from PubMed articles.
 (2) PubMedQA Artificial (PQA-A): An artificially labelled PubMedQA subset comprising 211.3k PubMed articles with automatically generated questions from the statement titles and yes/no answer labels generated using a simple heuristic.
 (3) PubMedQA Unlabeled (PQA-U): An unlabeled PubMedQA subset comprising 61.2k context-question pairs collected from PubMed articles.
| [
"# Dataset Card for PubMedQA",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: QA\n\n\nPubMedQA is a novel biomedical question answering (QA) dataset collected from PubMed abstracts.\nThe task of PubMedQA is to answer research biomedical questions with yes/no/maybe using the corresponding abstracts.\nPubMedQA has 1k expert-annotated (PQA-L), 61.2k unlabeled (PQA-U) and 211.3k artificially generated QA instances (PQA-A).\n\nEach PubMedQA instance is composed of:\n (1) a question which is either an existing research article title or derived from one,\n (2) a context which is the corresponding PubMed abstract without its conclusion,\n (3) a long answer, which is the conclusion of the abstract and, presumably, answers the research question, and\n (4) a yes/no/maybe answer which summarizes the conclusion.\n\nPubMedQA is the first QA dataset where reasoning over biomedical research texts,\nespecially their quantitative contents, is required to answer the questions.\n\nThe PubMedQA dataset comprises 3 different subsets:\n (1) PubMedQA Labeled (PQA-L): A labeled PubMedQA subset comprising 1k manually annotated yes/no/maybe QA instances collected from PubMed articles.\n (2) PubMedQA Artificial (PQA-A): An artificially labelled PubMedQA subset comprising 211.3k PubMed articles with automatically generated questions from the statement titles and yes/no answer labels generated using a simple heuristic.\n (3) PubMedQA Unlabeled (PQA-U): An unlabeled PubMedQA subset comprising 61.2k context-question pairs collected from PubMed articles."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-mit #region-us \n",
"# Dataset Card for PubMedQA",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: QA\n\n\nPubMedQA is a novel biomedical question answering (QA) dataset collected from PubMed abstracts.\nThe task of PubMedQA is to answer research biomedical questions with yes/no/maybe using the corresponding abstracts.\nPubMedQA has 1k expert-annotated (PQA-L), 61.2k unlabeled (PQA-U) and 211.3k artificially generated QA instances (PQA-A).\n\nEach PubMedQA instance is composed of:\n (1) a question which is either an existing research article title or derived from one,\n (2) a context which is the corresponding PubMed abstract without its conclusion,\n (3) a long answer, which is the conclusion of the abstract and, presumably, answers the research question, and\n (4) a yes/no/maybe answer which summarizes the conclusion.\n\nPubMedQA is the first QA dataset where reasoning over biomedical research texts,\nespecially their quantitative contents, is required to answer the questions.\n\nThe PubMedQA dataset comprises 3 different subsets:\n (1) PubMedQA Labeled (PQA-L): A labeled PubMedQA subset comprising 1k manually annotated yes/no/maybe QA instances collected from PubMed articles.\n (2) PubMedQA Artificial (PQA-A): An artificially labelled PubMedQA subset comprising 211.3k PubMed articles with automatically generated questions from the statement titles and yes/no answer labels generated using a simple heuristic.\n (3) PubMedQA Unlabeled (PQA-U): An unlabeled PubMedQA subset comprising 61.2k context-question pairs collected from PubMed articles."
] |
b05b96997452726ffecd0405ee33f92237f49845 |
# Dataset Card for PubTator Central
## Dataset Description
- **Homepage:** https://www.ncbi.nlm.nih.gov/research/pubtator/
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,NED
PubTator Central (PTC, https://www.ncbi.nlm.nih.gov/research/pubtator/) is a web service for
exploring and retrieving bioconcept annotations in full text biomedical articles. PTC provides
automated annotations from state-of-the-art text mining systems for genes/proteins, genetic
variants, diseases, chemicals, species and cell lines, all available for immediate download. PTC
annotates PubMed (30 million abstracts), the PMC Open Access Subset and the Author Manuscript
Collection (3 million full text articles). Updated entity identification methods and a
disambiguation module based on cutting-edge deep learning techniques provide increased accuracy.
## Citation Information
```
@article{10.1093/nar/gkz389,
title = {{PubTator central: automated concept annotation for biomedical full text articles}},
author = {Wei, Chih-Hsuan and Allot, Alexis and Leaman, Robert and Lu, Zhiyong},
year = 2019,
month = {05},
journal = {Nucleic Acids Research},
volume = 47,
number = {W1},
pages = {W587-W593},
doi = {10.1093/nar/gkz389},
issn = {0305-1048},
url = {https://doi.org/10.1093/nar/gkz389},
eprint = {https://academic.oup.com/nar/article-pdf/47/W1/W587/28880193/gkz389.pdf}
}
```
| bigbio/pubtator_central | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-11-13T22:11:49+00:00 | {"language": ["en"], "license": "other", "multilinguality": "monolingual", "pretty_name": "PubTator Central", "bigbio_language": ["English"], "bigbio_license_shortname": "NCBI_LICENSE", "homepage": "https://www.ncbi.nlm.nih.gov/research/pubtator/", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "NAMED_ENTITY_DISAMBIGUATION"]} | 2022-12-22T15:46:26+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-other #region-us
|
# Dataset Card for PubTator Central
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NER,NED
PubTator Central (PTC, URL) is a web service for
exploring and retrieving bioconcept annotations in full text biomedical articles. PTC provides
automated annotations from state-of-the-art text mining systems for genes/proteins, genetic
variants, diseases, chemicals, species and cell lines, all available for immediate download. PTC
annotates PubMed (30 million abstracts), the PMC Open Access Subset and the Author Manuscript
Collection (3 million full text articles). Updated entity identification methods and a
disambiguation module based on cutting-edge deep learning techniques provide increased accuracy.
| [
"# Dataset Card for PubTator Central",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,NED\n\n\nPubTator Central (PTC, URL) is a web service for\nexploring and retrieving bioconcept annotations in full text biomedical articles. PTC provides\nautomated annotations from state-of-the-art text mining systems for genes/proteins, genetic\nvariants, diseases, chemicals, species and cell lines, all available for immediate download. PTC\nannotates PubMed (30 million abstracts), the PMC Open Access Subset and the Author Manuscript\nCollection (3 million full text articles). Updated entity identification methods and a\ndisambiguation module based on cutting-edge deep learning techniques provide increased accuracy."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-other #region-us \n",
"# Dataset Card for PubTator Central",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,NED\n\n\nPubTator Central (PTC, URL) is a web service for\nexploring and retrieving bioconcept annotations in full text biomedical articles. PTC provides\nautomated annotations from state-of-the-art text mining systems for genes/proteins, genetic\nvariants, diseases, chemicals, species and cell lines, all available for immediate download. PTC\nannotates PubMed (30 million abstracts), the PMC Open Access Subset and the Author Manuscript\nCollection (3 million full text articles). Updated entity identification methods and a\ndisambiguation module based on cutting-edge deep learning techniques provide increased accuracy."
] |
cd40ac30e08078e50aba76901b53d0080395919b |
# Dataset Card for QUAERO
## Dataset Description
- **Homepage:** https://quaerofrenchmed.limsi.fr/
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,NED
The QUAERO French Medical Corpus was initially developed as a resource for named entity recognition and normalization [1]. It was then improved with the purpose of creating a gold standard set of normalized entities for French biomedical text, which was used in the CLEF eHealth evaluation lab [2][3].
A selection of MEDLINE titles and EMEA documents were manually annotated. The annotation process was guided by concepts in the Unified Medical Language System (UMLS):
1. Ten types of clinical entities, as defined by the following UMLS Semantic Groups (Bodenreider and McCray 2003) were annotated: Anatomy, Chemical and Drugs, Devices, Disorders, Geographic Areas, Living Beings, Objects, Phenomena, Physiology, Procedures.
2. The annotations were made in a comprehensive fashion, so that nested entities were marked, and entities could be mapped to more than one UMLS concept. In particular: (a) If a mention can refer to more than one Semantic Group, all the relevant Semantic Groups should be annotated. For instance, the mention “récidive” (recurrence) in the phrase “prévention des récidives” (recurrence prevention) should be annotated with the category “DISORDER” (CUI C2825055) and the category “PHENOMENON” (CUI C0034897); (b) If a mention can refer to more than one UMLS concept within the same Semantic Group, all the relevant concepts should be annotated. For instance, the mention “maniaques” (obsessive) in the phrase “patients maniaques” (obsessive patients) should be annotated with CUIs C0564408 and C0338831 (category “DISORDER”); (c) Entities which span overlaps with that of another entity should still be annotated. For instance, in the phrase “infarctus du myocarde” (myocardial infarction), the mention “myocarde” (myocardium) should be annotated with category “ANATOMY” (CUI C0027061) and the mention “infarctus du myocarde” should be annotated with category “DISORDER” (CUI C0027051)
The QUAERO French Medical Corpus BioC release comprises a subset of the QUAERO French Medical corpus, as follows:
Training data (BRAT version used in CLEF eHealth 2015 task 1b as training data):
- MEDLINE_train_bioc file: 833 MEDLINE titles, annotated with normalized entities in the BioC format
- EMEA_train_bioc file: 3 EMEA documents, segmented into 11 sub-documents, annotated with normalized entities in the BioC format
Development data (BRAT version used in CLEF eHealth 2015 task 1b as test data and in CLEF eHealth 2016 task 2 as development data):
- MEDLINE_dev_bioc file: 832 MEDLINE titles, annotated with normalized entities in the BioC format
- EMEA_dev_bioc file: 3 EMEA documents, segmented into 12 sub-documents, annotated with normalized entities in the BioC format
Test data (BRAT version used in CLEF eHealth 2016 task 2 as test data):
- MEDLINE_test_bioc folder: 833 MEDLINE titles, annotated with normalized entities in the BioC format
- EMEA folder_test_bioc: 4 EMEA documents, segmented into 15 sub-documents, annotated with normalized entities in the BioC format
This release of the QUAERO French medical corpus, BioC version, comes in the BioC format, through automatic conversion from the original BRAT format obtained with the Brat2BioC tool https://bitbucket.org/nicta_biomed/brat2bioc developed by Jimeno Yepes et al.
Antonio Jimeno Yepes, Mariana Neves, Karin Verspoor
Brat2BioC: conversion tool between brat and BioC
BioCreative IV track 1 - BioC: The BioCreative Interoperability Initiative, 2013
Please note that the original version of the QUAERO corpus distributed in the CLEF eHealth challenge 2015 and 2016 came in the BRAT stand-alone format. It was distributed with the CLEF eHealth evaluation tool. This original distribution of the QUAERO French Medical corpus is available separately from https://quaerofrenchmed.limsi.fr
All questions regarding the task or data should be addressed to [email protected]
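The annotation conventions above (multiple CUIs per mention, nested spans) can be illustrated with a small sketch; the character offsets and the `Entity` helper are illustrative, not part of the corpus distribution:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Entity:
    start: int        # character offset, inclusive
    end: int          # character offset, exclusive
    group: str        # UMLS Semantic Group, e.g. "DISORDER"
    cuis: List[str]   # one or more UMLS CUIs for this mention

def is_nested(inner: Entity, outer: Entity) -> bool:
    """True if `inner` lies within `outer`'s span (the overlap QUAERO allows)."""
    return (outer.start <= inner.start <= inner.end <= outer.end
            and (inner.start, inner.end) != (outer.start, outer.end))

# The "infarctus du myocarde" example from the guidelines; offsets are made up.
myocarde = Entity(13, 21, "ANATOMY", ["C0027061"])
infarctus = Entity(0, 21, "DISORDER", ["C0027051"])
print(is_nested(myocarde, infarctus))   # True
```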
## Citation Information
```
@InProceedings{neveol14quaero,
author = {Névéol, Aurélie and Grouin, Cyril and Leixa, Jeremy
and Rosset, Sophie and Zweigenbaum, Pierre},
title = {The {QUAERO} {French} Medical Corpus: A Ressource for
Medical Entity Recognition and Normalization},
OPTbooktitle = {Proceedings of the Fourth Workshop on Building
and Evaluating Ressources for Health and Biomedical
Text Processing},
booktitle = {Proc of BioTextMining Work},
OPTseries = {BioTxtM 2014},
year = {2014},
pages = {24--30},
}
```
| bigbio/quaero | [
"multilinguality:monolingual",
"language:fr",
"license:other",
"region:us"
] | 2022-11-13T22:11:53+00:00 | {"language": ["fr"], "license": "other", "multilinguality": "monolingual", "pretty_name": "QUAERO", "bigbio_language": ["French"], "bigbio_license_shortname": "GFDL_1p3", "homepage": "https://quaerofrenchmed.limsi.fr/", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "NAMED_ENTITY_DISAMBIGUATION"]} | 2022-12-22T15:46:29+00:00 | [] | [
"fr"
] | TAGS
#multilinguality-monolingual #language-French #license-other #region-us
|
# Dataset Card for QUAERO
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NER,NED
The QUAERO French Medical Corpus was initially developed as a resource for named entity recognition and normalization [1]. It was then improved with the purpose of creating a gold standard set of normalized entities for French biomedical text, which was used in the CLEF eHealth evaluation lab [2][3].
A selection of MEDLINE titles and EMEA documents were manually annotated. The annotation process was guided by concepts in the Unified Medical Language System (UMLS):
1. Ten types of clinical entities, as defined by the following UMLS Semantic Groups (Bodenreider and McCray 2003) were annotated: Anatomy, Chemical and Drugs, Devices, Disorders, Geographic Areas, Living Beings, Objects, Phenomena, Physiology, Procedures.
2. The annotations were made in a comprehensive fashion, so that nested entities were marked, and entities could be mapped to more than one UMLS concept. In particular: (a) If a mention can refer to more than one Semantic Group, all the relevant Semantic Groups should be annotated. For instance, the mention “récidive” (recurrence) in the phrase “prévention des récidives” (recurrence prevention) should be annotated with the category “DISORDER” (CUI C2825055) and the category “PHENOMENON” (CUI C0034897); (b) If a mention can refer to more than one UMLS concept within the same Semantic Group, all the relevant concepts should be annotated. For instance, the mention “maniaques” (obsessive) in the phrase “patients maniaques” (obsessive patients) should be annotated with CUIs C0564408 and C0338831 (category “DISORDER”); (c) Entities which span overlaps with that of another entity should still be annotated. For instance, in the phrase “infarctus du myocarde” (myocardial infarction), the mention “myocarde” (myocardium) should be annotated with category “ANATOMY” (CUI C0027061) and the mention “infarctus du myocarde” should be annotated with category “DISORDER” (CUI C0027051)
The QUAERO French Medical Corpus BioC release comprises a subset of the QUAERO French Medical corpus, as follows:
Training data (BRAT version used in CLEF eHealth 2015 task 1b as training data):
- MEDLINE_train_bioc file: 833 MEDLINE titles, annotated with normalized entities in the BioC format
- EMEA_train_bioc file: 3 EMEA documents, segmented into 11 sub-documents, annotated with normalized entities in the BioC format
Development data (BRAT version used in CLEF eHealth 2015 task 1b as test data and in CLEF eHealth 2016 task 2 as development data):
- MEDLINE_dev_bioc file: 832 MEDLINE titles, annotated with normalized entities in the BioC format
- EMEA_dev_bioc file: 3 EMEA documents, segmented into 12 sub-documents, annotated with normalized entities in the BioC format
Test data (BRAT version used in CLEF eHealth 2016 task 2 as test data):
- MEDLINE_test_bioc folder: 833 MEDLINE titles, annotated with normalized entities in the BioC format
- EMEA folder_test_bioc: 4 EMEA documents, segmented into 15 sub-documents, annotated with normalized entities in the BioC format
This release of the QUAERO French medical corpus, BioC version, comes in the BioC format, through automatic conversion from the original BRAT format obtained with the Brat2BioC tool URL developed by Jimeno Yepes et al.
Antonio Jimeno Yepes, Mariana Neves, Karin Verspoor
Brat2BioC: conversion tool between brat and BioC
BioCreative IV track 1 - BioC: The BioCreative Interoperability Initiative, 2013
Please note that the original version of the QUAERO corpus distributed in the CLEF eHealth challenge 2015 and 2016 came in the BRAT stand-alone format. It was distributed with the CLEF eHealth evaluation tool. This original distribution of the QUAERO French Medical corpus is available separately from URL
All questions regarding the task or data should be addressed to URL@URL
| [
"# Dataset Card for QUAERO",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,NED\n\n\nThe QUAERO French Medical Corpus has been initially developed as a resource for named entity recognition and normalization [1]. It was then improved with the purpose of creating a gold standard set of normalized entities for French biomedical text, that was used in the CLEF eHealth evaluation lab [2][3].\n\nA selection of MEDLINE titles and EMEA documents were manually annotated. The annotation process was guided by concepts in the Unified Medical Language System (UMLS):\n\n1. Ten types of clinical entities, as defined by the following UMLS Semantic Groups (Bodenreider and McCray 2003) were annotated: Anatomy, Chemical and Drugs, Devices, Disorders, Geographic Areas, Living Beings, Objects, Phenomena, Physiology, Procedures.\n\n2. The annotations were made in a comprehensive fashion, so that nested entities were marked, and entities could be mapped to more than one UMLS concept. In particular: (a) If a mention can refer to more than one Semantic Group, all the relevant Semantic Groups should be annotated. For instance, the mention “récidive” (recurrence) in the phrase “prévention des récidives” (recurrence prevention) should be annotated with the category “DISORDER” (CUI C2825055) and the category “PHENOMENON” (CUI C0034897); (b) If a mention can refer to more than one UMLS concept within the same Semantic Group, all the relevant concepts should be annotated. For instance, the mention “maniaques” (obsessive) in the phrase “patients maniaques” (obsessive patients) should be annotated with CUIs C0564408 and C0338831 (category “DISORDER”); (c) Entities which span overlaps with that of another entity should still be annotated. 
For instance, in the phrase “infarctus du myocarde” (myocardial infarction), the mention “myocarde” (myocardium) should be annotated with category “ANATOMY” (CUI C0027061) and the mention “infarctus du myocarde” should be annotated with category “DISORDER” (CUI C0027051)\n\nThe QUAERO French Medical Corpus BioC release comprises a subset of the QUAERO French Medical corpus, as follows:\n\nTraining data (BRAT version used in CLEF eHealth 2015 task 1b as training data): \n- MEDLINE_train_bioc file: 833 MEDLINE titles, annotated with normalized entities in the BioC format \n- EMEA_train_bioc file: 3 EMEA documents, segmented into 11 sub-documents, annotated with normalized entities in the BioC format \n\nDevelopment data (BRAT version used in CLEF eHealth 2015 task 1b as test data and in CLEF eHealth 2016 task 2 as development data): \n- MEDLINE_dev_bioc file: 832 MEDLINE titles, annotated with normalized entities in the BioC format\n- EMEA_dev_bioc file: 3 EMEA documents, segmented into 12 sub-documents, annotated with normalized entities in the BioC format \n\nTest data (BRAT version used in CLEF eHealth 2016 task 2 as test data): \n- MEDLINE_test_bioc folder: 833 MEDLINE titles, annotated with normalized entities in the BioC format \n- EMEA folder_test_bioc: 4 EMEA documents, segmented into 15 sub-documents, annotated with normalized entities in the BioC format \n\n\n\nThis release of the QUAERO French medical corpus, BioC version, comes in the BioC format, through automatic conversion from the original BRAT format obtained with the Brat2BioC tool URL developped by Jimeno Yepes et al.\n\nAntonio Jimeno Yepes, Mariana Neves, Karin Verspoor \nBrat2BioC: conversion tool between brat and BioC\nBioCreative IV track 1 - BioC: The BioCreative Interoperability Initiative, 2013\n\n\nPlease note that the original version of the QUAERO corpus distributed in the CLEF eHealth challenge 2015 and 2016 came in the BRAT stand alone format. 
It was distributed with the CLEF eHealth evaluation tool. This original distribution of the QUAERO French Medical corpus is available separately from URL \n\nAll questions regarding the task or data should be addressed to URL@URL"
] | [
"TAGS\n#multilinguality-monolingual #language-French #license-other #region-us \n",
"# Dataset Card for QUAERO",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,NED\n\n\nThe QUAERO French Medical Corpus has been initially developed as a resource for named entity recognition and normalization [1]. It was then improved with the purpose of creating a gold standard set of normalized entities for French biomedical text, that was used in the CLEF eHealth evaluation lab [2][3].\n\nA selection of MEDLINE titles and EMEA documents were manually annotated. The annotation process was guided by concepts in the Unified Medical Language System (UMLS):\n\n1. Ten types of clinical entities, as defined by the following UMLS Semantic Groups (Bodenreider and McCray 2003) were annotated: Anatomy, Chemical and Drugs, Devices, Disorders, Geographic Areas, Living Beings, Objects, Phenomena, Physiology, Procedures.\n\n2. The annotations were made in a comprehensive fashion, so that nested entities were marked, and entities could be mapped to more than one UMLS concept. In particular: (a) If a mention can refer to more than one Semantic Group, all the relevant Semantic Groups should be annotated. For instance, the mention “récidive” (recurrence) in the phrase “prévention des récidives” (recurrence prevention) should be annotated with the category “DISORDER” (CUI C2825055) and the category “PHENOMENON” (CUI C0034897); (b) If a mention can refer to more than one UMLS concept within the same Semantic Group, all the relevant concepts should be annotated. For instance, the mention “maniaques” (obsessive) in the phrase “patients maniaques” (obsessive patients) should be annotated with CUIs C0564408 and C0338831 (category “DISORDER”); (c) Entities which span overlaps with that of another entity should still be annotated. 
For instance, in the phrase “infarctus du myocarde” (myocardial infarction), the mention “myocarde” (myocardium) should be annotated with category “ANATOMY” (CUI C0027061) and the mention “infarctus du myocarde” should be annotated with category “DISORDER” (CUI C0027051)\n\nThe QUAERO French Medical Corpus BioC release comprises a subset of the QUAERO French Medical corpus, as follows:\n\nTraining data (BRAT version used in CLEF eHealth 2015 task 1b as training data): \n- MEDLINE_train_bioc file: 833 MEDLINE titles, annotated with normalized entities in the BioC format \n- EMEA_train_bioc file: 3 EMEA documents, segmented into 11 sub-documents, annotated with normalized entities in the BioC format \n\nDevelopment data (BRAT version used in CLEF eHealth 2015 task 1b as test data and in CLEF eHealth 2016 task 2 as development data): \n- MEDLINE_dev_bioc file: 832 MEDLINE titles, annotated with normalized entities in the BioC format\n- EMEA_dev_bioc file: 3 EMEA documents, segmented into 12 sub-documents, annotated with normalized entities in the BioC format \n\nTest data (BRAT version used in CLEF eHealth 2016 task 2 as test data): \n- MEDLINE_test_bioc folder: 833 MEDLINE titles, annotated with normalized entities in the BioC format \n- EMEA folder_test_bioc: 4 EMEA documents, segmented into 15 sub-documents, annotated with normalized entities in the BioC format \n\n\n\nThis release of the QUAERO French medical corpus, BioC version, comes in the BioC format, through automatic conversion from the original BRAT format obtained with the Brat2BioC tool URL developped by Jimeno Yepes et al.\n\nAntonio Jimeno Yepes, Mariana Neves, Karin Verspoor \nBrat2BioC: conversion tool between brat and BioC\nBioCreative IV track 1 - BioC: The BioCreative Interoperability Initiative, 2013\n\n\nPlease note that the original version of the QUAERO corpus distributed in the CLEF eHealth challenge 2015 and 2016 came in the BRAT stand alone format. 
It was distributed with the CLEF eHealth evaluation tool. This original distribution of the QUAERO French Medical corpus is available separately from URL \n\nAll questions regarding the task or data should be addressed to URL@URL"
] |
03fa8bd94ef57fd1f6411d17b15c264ddabd0c16 |
# Dataset Card for SCAI Chemical
## Dataset Description
- **Homepage:** https://www.scai.fraunhofer.de/en/business-research-areas/bioinformatics/downloads/corpora-for-chemical-entity-recognition.html
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER
SCAI Chemical is a corpus of MEDLINE abstracts that has been annotated
to give an overview of the different chemical name classes
found in MEDLINE text.
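For sequence-labelling use, gold entity spans in corpora like this one are typically converted to token-level IOB2 tags. A minimal converter is sketched below; the `CHEMICAL` label is an illustrative placeholder, not one of the corpus's actual fine-grained annotation classes:

```python
def spans_to_iob2(tokens, spans):
    """Convert (start, end, label) token spans to IOB2 tags.

    `spans` use token indices with an exclusive end; overlapping spans
    are not handled in this sketch.
    """
    tags = ["O"] * len(tokens)
    for start, end, label in spans:
        tags[start] = f"B-{label}"
        for i in range(start + 1, end):
            tags[i] = f"I-{label}"
    return tags

tokens = ["Aspirin", "inhibits", "cyclooxygenase", "."]
print(spans_to_iob2(tokens, [(0, 1, "CHEMICAL"), (2, 3, "CHEMICAL")]))
# → ['B-CHEMICAL', 'O', 'B-CHEMICAL', 'O']
```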
## Citation Information
```
@inproceedings{kolarik:lrec-ws08,
author = {Kol{\'a}{\v{r}}ik, Corinna and Klinger, Roman and Friedrich, Christoph M and Hofmann-Apitius, Martin and Fluck, Juliane},
title = {Chemical Names: {T}erminological Resources and Corpora Annotation},
booktitle = {LREC Workshop on Building and Evaluating Resources for Biomedical Text Mining},
year = {2008},
}
```
| bigbio/scai_chemical | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-11-13T22:11:56+00:00 | {"language": ["en"], "license": "unknown", "multilinguality": "monolingual", "pretty_name": "SCAI Chemical", "bigbio_language": ["English"], "bigbio_license_shortname": "UNKNOWN", "homepage": "https://www.scai.fraunhofer.de/en/business-research-areas/bioinformatics/downloads/corpora-for-chemical-entity-recognition.html", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION"]} | 2022-12-22T15:46:32+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-unknown #region-us
|
# Dataset Card for SCAI Chemical
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NER
SCAI Chemical is a corpus of MEDLINE abstracts that has been annotated
to give an overview of the different chemical name classes
found in MEDLINE text.
| [
"# Dataset Card for SCAI Chemical",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER\n\n\nSCAI Chemical is a corpus of MEDLINE abstracts that has been annotated\nto give an overview of the different chemical name classes\nfound in MEDLINE text."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-unknown #region-us \n",
"# Dataset Card for SCAI Chemical",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER\n\n\nSCAI Chemical is a corpus of MEDLINE abstracts that has been annotated\nto give an overview of the different chemical name classes\nfound in MEDLINE text."
] |
f6c472901388e52ccf5d5a7a24da403951eff5bb |
# Dataset Card for SCAI Disease
## Dataset Description
- **Homepage:** https://www.scai.fraunhofer.de/en/business-research-areas/bioinformatics/downloads/corpus-for-disease-names-and-adverse-effects.html
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER
SCAI Disease is a dataset annotated in 2010 with mentions of diseases and
adverse effects. It is a corpus containing 400 randomly selected MEDLINE
abstracts generated using ‘Disease OR Adverse effect’ as a PubMed query. This
evaluation corpus was annotated by two individuals who hold a Master’s degree
in life sciences.
## Citation Information
```
@inproceedings{gurulingappa:lrec-ws10,
author = {Harsha Gurulingappa and Roman Klinger and Martin Hofmann-Apitius and Juliane Fluck},
title = {An Empirical Evaluation of Resources for the Identification of Diseases and Adverse Effects in Biomedical Literature},
booktitle = {LREC Workshop on Building and Evaluating Resources for Biomedical Text Mining},
year = {2010},
}
```
| bigbio/scai_disease | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-11-13T22:12:00+00:00 | {"language": ["en"], "license": "unknown", "multilinguality": "monolingual", "pretty_name": "SCAI Disease", "bigbio_language": ["English"], "bigbio_license_shortname": "UNKNOWN", "homepage": "https://www.scai.fraunhofer.de/en/business-research-areas/bioinformatics/downloads/corpus-for-disease-names-and-adverse-effects.html", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION"]} | 2022-12-22T15:46:35+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-unknown #region-us
|
# Dataset Card for SCAI Disease
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NER
SCAI Disease is a dataset annotated in 2010 with mentions of diseases and
adverse effects. It is a corpus containing 400 randomly selected MEDLINE
abstracts generated using ‘Disease OR Adverse effect’ as a PubMed query. This
evaluation corpus was annotated by two individuals who hold a Master’s degree
in life sciences.
| [
"# Dataset Card for SCAI Disease",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER\n\n\nSCAI Disease is a dataset annotated in 2010 with mentions of diseases and\nadverse effects. It is a corpus containing 400 randomly selected MEDLINE\nabstracts generated using ‘Disease OR Adverse effect’ as a PubMed query. This\nevaluation corpus was annotated by two individuals who hold a Master’s degree\nin life sciences."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-unknown #region-us \n",
"# Dataset Card for SCAI Disease",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER\n\n\nSCAI Disease is a dataset annotated in 2010 with mentions of diseases and\nadverse effects. It is a corpus containing 400 randomly selected MEDLINE\nabstracts generated using ‘Disease OR Adverse effect’ as a PubMed query. This\nevaluation corpus was annotated by two individuals who hold a Master’s degree\nin life sciences."
] |
292fbffb65a93962d99730d80fbd777a9a244d09 |
# Dataset Card for SciCite
## Dataset Description
- **Homepage:** https://allenai.org/data/scicite
- **Pubmed:** False
- **Public:** True
- **Tasks:** TXTCLASS
SciCite is a dataset of 11K manually annotated citation intents based on
citation context in the computer science and biomedical domains.
## Citation Information
```
@inproceedings{cohan:naacl19,
author = {Arman Cohan and Waleed Ammar and Madeleine van Zuylen and Field Cady},
title = {Structural Scaffolds for Citation Intent Classification in Scientific Publications},
booktitle = {Conference of the North American Chapter of the Association for Computational Linguistics},
year = {2019},
url = {https://aclanthology.org/N19-1361/},
doi = {10.18653/v1/N19-1361},
}
```
| bigbio/scicite | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-11-13T22:12:03+00:00 | {"language": ["en"], "license": "unknown", "multilinguality": "monolingual", "pretty_name": "SciCite", "bigbio_language": ["English"], "bigbio_license_shortname": "UNKNOWN", "homepage": "https://allenai.org/data/scicite", "bigbio_pubmed": false, "bigbio_public": true, "bigbio_tasks": ["TEXT_CLASSIFICATION"]} | 2022-12-22T15:46:37+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-unknown #region-us
|
# Dataset Card for SciCite
## Dataset Description
- Homepage: URL
- Pubmed: False
- Public: True
- Tasks: TXTCLASS
SciCite is a dataset of 11K manually annotated citation intents based on
citation context in the computer science and biomedical domains.
| [
"# Dataset Card for SciCite",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: TXTCLASS\n\n\nSciCite is a dataset of 11K manually annotated citation intents based on\ncitation context in the computer science and biomedical domains."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-unknown #region-us \n",
"# Dataset Card for SciCite",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: TXTCLASS\n\n\nSciCite is a dataset of 11K manually annotated citation intents based on\ncitation context in the computer science and biomedical domains."
] |
7cdf4fc4cad236980814e3468f2d7fb8236fe261 |
# Dataset Card for SciELO
## Dataset Description
- **Homepage:** https://sites.google.com/view/felipe-soares/datasets#h.p_92uSCyAjWSRB
- **Pubmed:** False
- **Public:** True
- **Tasks:** TRANSL
A parallel corpus of full-text scientific articles collected from Scielo database in the following languages: English, Portuguese and Spanish. The corpus is sentence aligned for all language pairs, as well as trilingual aligned for a small subset of sentences. Alignment was carried out using the Hunalign algorithm.
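For intuition about length-based sentence alignment: Hunalign combines a length signal with dictionary cues, while the sketch below uses only a crude character-length ratio with a greedy 1–1 pairing, and the function names are our own:

```python
def length_score(src, tgt):
    """Crude alignment score in (0, 1]: ratio of character lengths."""
    a, b = len(src), len(tgt)
    return min(a, b) / max(a, b)

def greedy_align(src_sents, tgt_sents):
    """Pair each source sentence with its best-scoring unused target.

    Assumes len(tgt_sents) >= len(src_sents); real aligners also handle
    1-2/2-1 merges via dynamic programming.
    """
    pairs, used = [], set()
    for src in src_sents:
        best = max((i for i in range(len(tgt_sents)) if i not in used),
                   key=lambda i: length_score(src, tgt_sents[i]))
        used.add(best)
        pairs.append((src, tgt_sents[best]))
    return pairs
```

This greedy pairing is only meant to convey the length heuristic, not to reproduce Hunalign's output.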
## Citation Information
```
@inproceedings{soares2018large,
title = {A Large Parallel Corpus of Full-Text Scientific Articles},
author = {Soares, Felipe and Moreira, Viviane and Becker, Karin},
year = 2018,
booktitle = {
Proceedings of the Eleventh International Conference on Language Resources
and Evaluation (LREC-2018)
}
}
```
| bigbio/scielo | [
"multilinguality:multilingual",
"language:en",
"language:es",
"language:pt",
"license:cc-by-4.0",
"region:us"
] | 2022-11-13T22:12:07+00:00 | {"language": ["en", "es", "pt"], "license": "cc-by-4.0", "multilinguality": "multilingual", "pretty_name": "SciELO", "bigbio_language": ["English", "Spanish", "Portuguese"], "bigbio_license_shortname": "CC_BY_4p0", "homepage": "https://sites.google.com/view/felipe-soares/datasets#h.p_92uSCyAjWSRB", "bigbio_pubmed": false, "bigbio_public": true, "bigbio_tasks": ["TRANSLATION"]} | 2022-12-22T15:46:40+00:00 | [] | [
"en",
"es",
"pt"
] | TAGS
#multilinguality-multilingual #language-English #language-Spanish #language-Portuguese #license-cc-by-4.0 #region-us
|
# Dataset Card for SciELO
## Dataset Description
- Homepage: URL
- Pubmed: False
- Public: True
- Tasks: TRANSL
A parallel corpus of full-text scientific articles collected from Scielo database in the following languages: English, Portuguese and Spanish. The corpus is sentence aligned for all language pairs, as well as trilingual aligned for a small subset of sentences. Alignment was carried out using the Hunalign algorithm.
| [
"# Dataset Card for SciELO",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: TRANSL\n\n\nA parallel corpus of full-text scientific articles collected from Scielo database in the following languages: English, Portuguese and Spanish. The corpus is sentence aligned for all language pairs, as well as trilingual aligned for a small subset of sentences. Alignment was carried out using the Hunalign algorithm."
] | [
"TAGS\n#multilinguality-multilingual #language-English #language-Spanish #language-Portuguese #license-cc-by-4.0 #region-us \n",
"# Dataset Card for SciELO",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: TRANSL\n\n\nA parallel corpus of full-text scientific articles collected from Scielo database in the following languages: English, Portuguese and Spanish. The corpus is sentence aligned for all language pairs, as well as trilingual aligned for a small subset of sentences. Alignment was carried out using the Hunalign algorithm."
] |
f75ae356668b896f3816d41d90622775c85c9d1a |
# Dataset Card for SciFact
## Dataset Description
- **Homepage:** https://scifact.apps.allenai.org/
- **Pubmed:** False
- **Public:** True
- **Tasks:** TXT2CLASS
### Scifact Corpus Source
SciFact is a dataset of 1.4K expert-written scientific claims paired with evidence-containing abstracts, and annotated with labels and rationales.
This config has abstracts and document ids.
### Scifact Claims Source
{_DESCRIPTION_BASE} This config connects the claims to the evidence and doc ids.
### Scifact Rationale Bigbio Pairs
{_DESCRIPTION_BASE} This task is the following: given a claim and a text span composed of one or more sentences from an abstract, predict a label from ("rationale", "not_rationale") indicating if the span is evidence (can be supporting or refuting) for the claim. This roughly corresponds to the second task outlined in Section 5 of the paper.
### Scifact Labelprediction Bigbio Pairs
{_DESCRIPTION_BASE} This task is the following: given a claim and a text span composed of one or more sentences from an abstract, predict a label from ("SUPPORT", "NOINFO", "CONTRADICT") indicating if the span supports, provides no info, or contradicts the claim. This roughly corresponds to the third task outlined in Section 5 of the paper.
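The label-prediction configuration can be thought of as simple (claim, span) → label records. The sketch below is our own framing of that record shape, not the dataset's exact field names:

```python
from dataclasses import dataclass

LABELS = ("SUPPORT", "NOINFO", "CONTRADICT")

@dataclass
class ClaimEvidencePair:
    claim: str   # expert-written scientific claim
    span: str    # one or more sentences from an abstract
    label: str   # one of LABELS

    def __post_init__(self):
        # Reject anything outside the three-way label set up front.
        if self.label not in LABELS:
            raise ValueError(f"unknown label: {self.label!r}")

pair = ClaimEvidencePair(
    claim="Drug X lowers blood pressure.",
    span="In our trial, drug X reduced systolic pressure by 12 mmHg.",
    label="SUPPORT",
)
```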
## Citation Information
```
@article{wadden2020fact,
author = {David Wadden and Shanchuan Lin and Kyle Lo and Lucy Lu Wang and Madeleine van Zuylen and Arman Cohan and Hannaneh Hajishirzi},
title = {Fact or Fiction: Verifying Scientific Claims},
year = {2020},
address = {Online},
publisher = {Association for Computational Linguistics},
url = {https://aclanthology.org/2020.emnlp-main.609},
doi = {10.18653/v1/2020.emnlp-main.609},
pages = {7534--7550},
biburl = {},
bibsource = {}
}
```
| bigbio/scifact | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-nc-2.0",
"region:us"
] | 2022-11-13T22:12:10+00:00 | {"language": ["en"], "license": "cc-by-nc-2.0", "multilinguality": "monolingual", "pretty_name": "SciFact", "bigbio_language": ["English"], "bigbio_license_shortname": "CC_BY_NC_2p0", "homepage": "https://scifact.apps.allenai.org/", "bigbio_pubmed": false, "bigbio_public": true, "bigbio_tasks": ["TEXT_PAIRS_CLASSIFICATION"]} | 2022-12-22T15:46:44+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-cc-by-nc-2.0 #region-us
|
# Dataset Card for SciFact
## Dataset Description
- Homepage: URL
- Pubmed: False
- Public: True
- Tasks: TXT2CLASS
### Scifact Corpus Source
SciFact is a dataset of 1.4K expert-written scientific claims paired with evidence-containing abstracts, and annotated with labels and rationales.
This config has abstracts and document ids.
### Scifact Claims Source
{_DESCRIPTION_BASE} This config connects the claims to the evidence and doc ids.
### Scifact Rationale Bigbio Pairs
{_DESCRIPTION_BASE} This task is the following: given a claim and a text span composed of one or more sentences from an abstract, predict a label from ("rationale", "not_rationale") indicating if the span is evidence (can be supporting or refuting) for the claim. This roughly corresponds to the second task outlined in Section 5 of the paper.
### Scifact Labelprediction Bigbio Pairs
{_DESCRIPTION_BASE} This task is the following: given a claim and a text span composed of one or more sentences from an abstract, predict a label from ("SUPPORT", "NOINFO", "CONTRADICT") indicating if the span supports, provides no info, or contradicts the claim. This roughly corresponds to the third task outlined in Section 5 of the paper.
| [
"# Dataset Card for SciFact",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: TXT2CLASS",
"### Scifact Corpus Source\n\n SciFact is a dataset of 1.4K expert-written scientific claims paired with evidence-containing abstracts, and annotated with labels and rationales.\n This config has abstracts and document ids.",
"### Scifact Claims Source\n\n {_DESCRIPTION_BASE} This config connects the claims to the evidence and doc ids.",
"### Scifact Rationale Bigbio Pairs\n\n {_DESCRIPTION_BASE} This task is the following: given a claim and a text span composed of one or more sentences from an abstract, predict a label from (\"rationale\", \"not_rationale\") indicating if the span is evidence (can be supporting or refuting) for the claim. This roughly corresponds to the second task outlined in Section 5 of the paper.\"",
"### Scifact Labelprediction Bigbio Pairs\n\n {_DESCRIPTION_BASE} This task is the following: given a claim and a text span composed of one or more sentences from an abstract, predict a label from (\"SUPPORT\", \"NOINFO\", \"CONTRADICT\") indicating if the span supports, provides no info, or contradicts the claim. This roughly corresponds to the thrid task outlined in Section 5 of the paper."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-cc-by-nc-2.0 #region-us \n",
"# Dataset Card for SciFact",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: TXT2CLASS",
"### Scifact Corpus Source\n\n SciFact is a dataset of 1.4K expert-written scientific claims paired with evidence-containing abstracts, and annotated with labels and rationales.\n This config has abstracts and document ids.",
"### Scifact Claims Source\n\n {_DESCRIPTION_BASE} This config connects the claims to the evidence and doc ids.",
"### Scifact Rationale Bigbio Pairs\n\n {_DESCRIPTION_BASE} This task is the following: given a claim and a text span composed of one or more sentences from an abstract, predict a label from (\"rationale\", \"not_rationale\") indicating if the span is evidence (can be supporting or refuting) for the claim. This roughly corresponds to the second task outlined in Section 5 of the paper.\"",
"### Scifact Labelprediction Bigbio Pairs\n\n {_DESCRIPTION_BASE} This task is the following: given a claim and a text span composed of one or more sentences from an abstract, predict a label from (\"SUPPORT\", \"NOINFO\", \"CONTRADICT\") indicating if the span supports, provides no info, or contradicts the claim. This roughly corresponds to the thrid task outlined in Section 5 of the paper."
] |
866ceacf2a904cb03a78f1bf2a6c6fe1c28ada63 |
# Dataset Card for SciQ
## Dataset Description
- **Homepage:** https://allenai.org/data/sciq
- **Pubmed:** False
- **Public:** True
- **Tasks:** QA
The SciQ dataset contains 13,679 crowdsourced science exam questions about Physics, Chemistry and Biology, among others. The questions are in multiple-choice format with 4 answer options each. For most questions, an additional paragraph with supporting evidence for the correct answer is provided.
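Schematically, each item is a four-option multiple-choice record with an optional evidence paragraph. The field names below are our own simplification (the released files keep the three distractors in separate columns rather than a single list):

```python
from dataclasses import dataclass

@dataclass
class SciQItem:
    question: str
    choices: list        # the 4 answer options
    answer: int          # index of the correct option in `choices`
    support: str = ""    # evidence paragraph, when available

def accuracy(items, predictions):
    """Fraction of predicted option indices that match the gold answer."""
    correct = sum(pred == item.answer for item, pred in zip(items, predictions))
    return correct / len(items)
```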
## Citation Information
```
@inproceedings{welbl-etal-2017-crowdsourcing,
title = "Crowdsourcing Multiple Choice Science Questions",
author = "Welbl, Johannes and
Liu, Nelson F. and
Gardner, Matt",
booktitle = "Proceedings of the 3rd Workshop on Noisy User-generated Text",
month = sep,
year = "2017",
address = "Copenhagen, Denmark",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W17-4413",
doi = "10.18653/v1/W17-4413",
pages = "94--106",
}
```
| bigbio/sciq | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-nc-3.0",
"region:us"
] | 2022-11-13T22:12:14+00:00 | {"language": ["en"], "license": "cc-by-nc-3.0", "multilinguality": "monolingual", "pretty_name": "SciQ", "bigbio_language": ["English"], "bigbio_license_shortname": "CC_BY_NC_3p0", "homepage": "https://allenai.org/data/sciq", "bigbio_pubmed": false, "bigbio_public": true, "bigbio_tasks": ["QUESTION_ANSWERING"]} | 2022-12-22T15:46:48+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-cc-by-nc-3.0 #region-us
|
# Dataset Card for SciQ
## Dataset Description
- Homepage: URL
- Pubmed: False
- Public: True
- Tasks: QA
The SciQ dataset contains 13,679 crowdsourced science exam questions about Physics, Chemistry and Biology, among others. The questions are in multiple-choice format with 4 answer options each. For most questions, an additional paragraph with supporting evidence for the correct answer is provided.
| [
"# Dataset Card for SciQ",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: QA\n\n\nThe SciQ dataset contains 13,679 crowdsourced science exam questions about Physics, Chemistry and Biology, among others. The questions are in multiple-choice format with 4 answer options each. For most questions, an additional paragraph with supporting evidence for the correct answer is provided."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-cc-by-nc-3.0 #region-us \n",
"# Dataset Card for SciQ",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: QA\n\n\nThe SciQ dataset contains 13,679 crowdsourced science exam questions about Physics, Chemistry and Biology, among others. The questions are in multiple-choice format with 4 answer options each. For most questions, an additional paragraph with supporting evidence for the correct answer is provided."
] |
7a61469b351b38a5e4307b3700f98f99869eeb71 |
# Dataset Card for SETH Corpus
## Dataset Description
- **Homepage:** https://github.com/rockt/SETH
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,RE
SNP named entity recognition corpus consisting of 630 PubMed citations.
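For intuition, variant mentions of this kind can be matched with surface patterns. SETH itself implements a grammar for the HGVS nomenclature, so the two toy regexes below are illustrative only and cover just two common surface forms:

```python
import re

# Illustrative patterns only -- not SETH's actual recognizer.
RS_ID = re.compile(r"\brs\d+\b")                       # dbSNP identifiers, e.g. rs1800562
SUBSTITUTION = re.compile(r"\bc\.\d+[ACGT]>[ACGT]\b")  # coding substitutions, e.g. c.76A>T

def find_variant_mentions(text):
    """Return sorted (start, end, mention) triples for every match in `text`."""
    hits = []
    for pattern in (RS_ID, SUBSTITUTION):
        for m in pattern.finditer(text):
            hits.append((m.start(), m.end(), m.group()))
    return sorted(hits)

print(find_variant_mentions("The variant c.76A>T (rs1800562) was associated with disease."))
# → [(12, 19, 'c.76A>T'), (21, 30, 'rs1800562')]
```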
## Citation Information
```
@Article{SETH2016,
Title = {SETH detects and normalizes genetic variants in text.},
Author = {Thomas, Philippe and Rockt{\"{a}}schel, Tim and Hakenberg, J{\"{o}}rg and Lichtblau, Yvonne and Leser, Ulf},
Journal = {Bioinformatics},
Year = {2016},
Month = {Jun},
Doi = {10.1093/bioinformatics/btw234},
Language = {eng},
Medline-pst = {aheadofprint},
Pmid = {27256315},
Url = {http://dx.doi.org/10.1093/bioinformatics/btw234}
}
```
| bigbio/seth_corpus | [
"multilinguality:monolingual",
"language:en",
"license:apache-2.0",
"region:us"
] | 2022-11-13T22:12:17+00:00 | {"language": ["en"], "license": "apache-2.0", "multilinguality": "monolingual", "pretty_name": "SETH Corpus", "bigbio_language": ["English"], "bigbio_license_shortname": "APACHE_2p0", "homepage": "https://github.com/rockt/SETH", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "RELATION_EXTRACTION"]} | 2022-12-22T15:46:51+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-apache-2.0 #region-us
|
# Dataset Card for SETH Corpus
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NER,RE
SNP named entity recognition corpus consisting of 630 PubMed citations.
| [
"# Dataset Card for SETH Corpus",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,RE\n\n\nSNP named entity recognition corpus consisting of 630 PubMed citations."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-apache-2.0 #region-us \n",
"# Dataset Card for SETH Corpus",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,RE\n\n\nSNP named entity recognition corpus consisting of 630 PubMed citations."
] |
91d8887f64d873801aa78653690d9d2592374356 |
# Dataset Card for SPL ADR
## Dataset Description
- **Homepage:** https://bionlp.nlm.nih.gov/tac2017adversereactions/
- **Pubmed:** False
- **Public:** True
- **Tasks:** NER,NED,RE
The United States Food and Drug Administration (FDA) partnered with the National Library
of Medicine to create a pilot dataset containing standardised information about known
adverse reactions for 200 FDA-approved drugs. The Structured Product Labels (SPLs),
the documents FDA uses to exchange information about drugs and other products, were
manually annotated for adverse reactions at the mention level to facilitate development
and evaluation of text mining tools for extraction of ADRs from all SPLs. The ADRs were
then normalised to the Unified Medical Language System (UMLS) and to the Medical
Dictionary for Regulatory Activities (MedDRA).
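The normalisation step maps each extracted mention to a terminology entry. A minimal dictionary-lookup sketch might look like the following; a real normaliser would query the full UMLS Metathesaurus / MedDRA, and the single table entry below is purely illustrative:

```python
# Toy mention-to-CUI table; C0027497 is the UMLS concept "Nausea".
NORMALIZATION = {"nausea": "C0027497"}

def normalize(mention):
    """Return the CUI for a mention, or None if it is out of vocabulary."""
    return NORMALIZATION.get(mention.lower().strip())

print(normalize("Nausea"))   # → C0027497
```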
## Citation Information
```
@article{demner2018dataset,
author = {Demner-Fushman, Dina and Shooshan, Sonya and Rodriguez, Laritza and Aronson,
Alan and Lang, Francois and Rogers, Willie and Roberts, Kirk and Tonning, Joseph},
title = {A dataset of 200 structured product labels annotated for adverse drug reactions},
journal = {Scientific Data},
volume = {5},
year = {2018},
month = {01},
pages = {180001},
url = {
https://www.researchgate.net/publication/322810855_A_dataset_of_200_structured_product_labels_annotated_for_adverse_drug_reactions
},
doi = {10.1038/sdata.2018.1}
}
```
| bigbio/spl_adr_200db | [
"multilinguality:monolingual",
"language:en",
"license:cc0-1.0",
"region:us"
] | 2022-11-13T22:12:21+00:00 | {"language": ["en"], "license": "cc0-1.0", "multilinguality": "monolingual", "pretty_name": "SPL ADR", "bigbio_language": ["English"], "bigbio_license_shortname": "CC0_1p0", "homepage": "https://bionlp.nlm.nih.gov/tac2017adversereactions/", "bigbio_pubmed": false, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "NAMED_ENTITY_DISAMBIGUATION", "RELATION_EXTRACTION"]} | 2022-12-22T15:46:56+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-cc0-1.0 #region-us
|
# Dataset Card for SPL ADR
## Dataset Description
- Homepage: URL
- Pubmed: False
- Public: True
- Tasks: NER,NED,RE
The United States Food and Drug Administration (FDA) partnered with the National Library
of Medicine to create a pilot dataset containing standardised information about known
adverse reactions for 200 FDA-approved drugs. The Structured Product Labels (SPLs),
the documents FDA uses to exchange information about drugs and other products, were
manually annotated for adverse reactions at the mention level to facilitate development
and evaluation of text mining tools for extraction of ADRs from all SPLs. The ADRs were
then normalised to the Unified Medical Language System (UMLS) and to the Medical
Dictionary for Regulatory Activities (MedDRA).
| [
"# Dataset Card for SPL ADR",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: NER,NED,RE\n\n\nThe United States Food and Drug Administration (FDA) partnered with the National Library\nof Medicine to create a pilot dataset containing standardised information about known\nadverse reactions for 200 FDA-approved drugs. The Structured Product Labels (SPLs),\nthe documents FDA uses to exchange information about drugs and other products, were\nmanually annotated for adverse reactions at the mention level to facilitate development\nand evaluation of text mining tools for extraction of ADRs from all SPLs. The ADRs were\nthen normalised to the Unified Medical Language System (UMLS) and to the Medical\nDictionary for Regulatory Activities (MedDRA)."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-cc0-1.0 #region-us \n",
"# Dataset Card for SPL ADR",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: NER,NED,RE\n\n\nThe United States Food and Drug Administration (FDA) partnered with the National Library\nof Medicine to create a pilot dataset containing standardised information about known\nadverse reactions for 200 FDA-approved drugs. The Structured Product Labels (SPLs),\nthe documents FDA uses to exchange information about drugs and other products, were\nmanually annotated for adverse reactions at the mention level to facilitate development\nand evaluation of text mining tools for extraction of ADRs from all SPLs. The ADRs were\nthen normalised to the Unified Medical Language System (UMLS) and to the Medical\nDictionary for Regulatory Activities (MedDRA)."
] |
6f92006c68f5f4d38a00efebf5abc5edd51c51cd |
# Dataset Card for Swedish Medical NER
## Dataset Description
- **Homepage:** https://github.com/olofmogren/biomedical-ner-data-swedish/
- **Pubmed:** False
- **Public:** True
- **Tasks:** NER
swedish_medical_ner is a Named Entity Recognition dataset on medical text in Swedish. 
It consists of three subsets which are in turn derived from three different sources 
respectively: the Swedish Wikipedia (a.k.a. wiki), Läkartidningen (a.k.a. lt),
and 1177 Vårdguiden (a.k.a. 1177). While the Swedish Wikipedia and Läkartidningen
subsets in total contain over 790000 sequences with 60 characters each, 
the 1177 Vårdguiden subset is manually annotated and contains 927 sentences,
2740 annotations, out of which 1574 are disorder and findings, 546 are
pharmaceutical drug, and 620 are body structure.
Texts from both Swedish Wikipedia and Läkartidningen were automatically annotated
using a list of medical seed terms. Sentences from 1177 Vårdguiden were manually 
annotated.
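In the raw automatically annotated files, entity mentions are delimited inline with bracket markers. The marker-to-class mapping below follows common descriptions of the corpus (parentheses for disorder and finding, square brackets for pharmaceutical drug, curly braces for body structure) but should be double-checked against the data:

```python
import re

# Assumed marker-to-class mapping; verify against the released files.
MARKERS = {
    r"\(([^)]+)\)": "disorder_finding",
    r"\[([^\]]+)\]": "pharmaceutical_drug",
    r"\{([^}]+)\}": "body_structure",
}

def extract_entities(sentence):
    """Return (mention, class) pairs for every bracket-marked span."""
    entities = []
    for pattern, label in MARKERS.items():
        for m in re.finditer(pattern, sentence):
            entities.append((m.group(1).strip(), label))
    return entities

print(extract_entities("Patienten fick [ibuprofen] mot ( huvudvärk )"))
# → [('huvudvärk', 'disorder_finding'), ('ibuprofen', 'pharmaceutical_drug')]
```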
## Citation Information
```
@inproceedings{almgren-etal-2016-named,
author = {
Almgren, Simon and
Pavlov, Sean and
Mogren, Olof
},
title = {Named Entity Recognition in Swedish Medical Journals with Deep Bidirectional Character-Based LSTMs},
booktitle = {Proceedings of the Fifth Workshop on Building and Evaluating Resources for Biomedical Text Mining (BioTxtM 2016)},
publisher = {The COLING 2016 Organizing Committee},
pages = {30-39},
year = {2016},
month = {12},
url = {https://aclanthology.org/W16-5104},
eprint = {https://aclanthology.org/W16-5104.pdf}
}
```
| bigbio/swedish_medical_ner | [
"multilinguality:monolingual",
"language:sv",
"license:cc-by-sa-4.0",
"region:us"
] | 2022-11-13T22:12:24+00:00 | {"language": ["sv"], "license": "cc-by-sa-4.0", "multilinguality": "monolingual", "pretty_name": "Swedish Medical NER", "bigbio_language": ["Swedish"], "bigbio_license_shortname": "CC_BY_SA_4p0", "homepage": "https://github.com/olofmogren/biomedical-ner-data-swedish/", "bigbio_pubmed": false, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION"]} | 2022-12-22T15:46:59+00:00 | [] | [
"sv"
] | TAGS
#multilinguality-monolingual #language-Swedish #license-cc-by-sa-4.0 #region-us
|
# Dataset Card for Swedish Medical NER
## Dataset Description
- Homepage: URL
- Pubmed: False
- Public: True
- Tasks: NER
swedish_medical_ner is a Named Entity Recognition dataset on medical text in Swedish. 
It consists of three subsets which are in turn derived from three different sources 
respectively: the Swedish Wikipedia (a.k.a. wiki), Läkartidningen (a.k.a. lt),
and 1177 Vårdguiden (a.k.a. 1177). While the Swedish Wikipedia and Läkartidningen
subsets in total contain over 790000 sequences with 60 characters each, 
the 1177 Vårdguiden subset is manually annotated and contains 927 sentences,
2740 annotations, out of which 1574 are disorder and findings, 546 are
pharmaceutical drug, and 620 are body structure.
Texts from both Swedish Wikipedia and Läkartidningen were automatically annotated
using a list of medical seed terms. Sentences from 1177 Vårdguiden were manually 
annotated.
| [
"# Dataset Card for Swedish Medical NER",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: NER\n\n\nswedish_medical_ner is Named Entity Recognition dataset on medical text in Swedish. \nIt consists three subsets which are in turn derived from three different sources \nrespectively: the Swedish Wikipedia (a.k.a. wiki), Läkartidningen (a.k.a. lt), \nand 1177 Vårdguiden (a.k.a. 1177). While the Swedish Wikipedia and Läkartidningen \nsubsets in total contains over 790000 sequences with 60 characters each, \nthe 1177 Vårdguiden subset is manually annotated and contains 927 sentences, \n2740 annotations, out of which 1574 are disorder and findings, 546 are \npharmaceutical drug, and 620 are body structure.\n\nTexts from both Swedish Wikipedia and Läkartidningen were automatically annotated \nusing a list of medical seed terms. Sentences from 1177 Vårdguiden were manuually \nannotated."
] | [
"TAGS\n#multilinguality-monolingual #language-Swedish #license-cc-by-sa-4.0 #region-us \n",
"# Dataset Card for Swedish Medical NER",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: NER\n\n\nswedish_medical_ner is Named Entity Recognition dataset on medical text in Swedish. \nIt consists three subsets which are in turn derived from three different sources \nrespectively: the Swedish Wikipedia (a.k.a. wiki), Läkartidningen (a.k.a. lt), \nand 1177 Vårdguiden (a.k.a. 1177). While the Swedish Wikipedia and Läkartidningen \nsubsets in total contains over 790000 sequences with 60 characters each, \nthe 1177 Vårdguiden subset is manually annotated and contains 927 sentences, \n2740 annotations, out of which 1574 are disorder and findings, 546 are \npharmaceutical drug, and 620 are body structure.\n\nTexts from both Swedish Wikipedia and Läkartidningen were automatically annotated \nusing a list of medical seed terms. Sentences from 1177 Vårdguiden were manuually \nannotated."
] |
0f77d5d705ae911c4f35944deea346287095e6f0 |
# Dataset Card for tmVar v1
## Dataset Description
- **Homepage:** https://www.ncbi.nlm.nih.gov/research/bionlp/Tools/tmvar/
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER
This dataset contains 500 PubMed articles manually annotated with mutation mentions of various kinds. It can be used for NER tasks only.
The dataset is split into train (334) and test (166) splits.
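Since the corpus is offset-annotated, a useful sanity check is that every entity's surface string matches its character offsets. A minimal check (with made-up field shapes, not the dataset's exact schema) might be:

```python
def offsets_consistent(text, entities):
    """True iff every entity's surface string equals text[start:end]."""
    return all(text[start:end] == mention for (start, end), mention in entities)

# Hypothetical annotation for illustration, not taken from the corpus.
passage = "We identified the mutation c.76A>T in exon 2."
entities = [((27, 34), "c.76A>T")]
print(offsets_consistent(passage, entities))   # → True
```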
## Citation Information
```
@article{wei2013tmvar,
title={tmVar: a text mining approach for extracting sequence variants in biomedical literature},
author={Wei, Chih-Hsuan and Harris, Bethany R and Kao, Hung-Yu and Lu, Zhiyong},
journal={Bioinformatics},
volume={29},
number={11},
pages={1433--1439},
year={2013},
publisher={Oxford University Press}
}
```
| bigbio/tmvar_v1 | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-11-13T22:12:28+00:00 | {"language": ["en"], "license": "unknown", "multilinguality": "monolingual", "pretty_name": "tmVar v1", "bigbio_language": ["English"], "bigbio_license_shortname": "UNKNOWN", "homepage": "https://www.ncbi.nlm.nih.gov/research/bionlp/Tools/tmvar/", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION"]} | 2022-12-22T15:47:01+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-unknown #region-us
|
# Dataset Card for tmVar v1
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NER
This dataset contains 500 PubMed articles manually annotated with mutation mentions of various kinds. It can be used for NER tasks only.
The dataset is split into train (334) and test (166) splits.
| [
"# Dataset Card for tmVar v1",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER\n\n\nThis dataset contains 500 PubMed articles manually annotated with mutation mentions of various kinds. It can be used for NER tasks only.\nThe dataset is split into train(334) and test(166) splits"
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-unknown #region-us \n",
"# Dataset Card for tmVar v1",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER\n\n\nThis dataset contains 500 PubMed articles manually annotated with mutation mentions of various kinds. It can be used for NER tasks only.\nThe dataset is split into train(334) and test(166) splits"
] |
fc667429f43a45b6d762c5857c5b63001045ea7d |
# Dataset Card for tmVar v2
## Dataset Description
- **Homepage:** https://www.ncbi.nlm.nih.gov/research/bionlp/Tools/tmvar/
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,NED
This dataset contains 158 PubMed articles manually annotated with mutation mentions of various kinds and dbSNP normalizations for each of them.
It can be used for NER and NED tasks. This dataset has a single split.
## Citation Information
```
@article{wei2018tmvar,
title={tmVar 2.0: integrating genomic variant information from literature with dbSNP and ClinVar for precision medicine},
author={Wei, Chih-Hsuan and Phan, Lon and Feltz, Juliana and Maiti, Rama and Hefferon, Tim and Lu, Zhiyong},
journal={Bioinformatics},
volume={34},
number={1},
pages={80--87},
year={2018},
publisher={Oxford University Press}
}
```
| bigbio/tmvar_v2 | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-11-13T22:12:31+00:00 | {"language": ["en"], "license": "unknown", "multilinguality": "monolingual", "pretty_name": "tmVar v2", "bigbio_language": ["English"], "bigbio_license_shortname": "UNKNOWN", "homepage": "https://www.ncbi.nlm.nih.gov/research/bionlp/Tools/tmvar/", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "NAMED_ENTITY_DISAMBIGUATION"]} | 2022-12-22T15:47:06+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-unknown #region-us
|
# Dataset Card for tmVar v2
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NER,NED
This dataset contains 158 PubMed articles manually annotated with mutation mentions of various kinds and dbSNP normalizations for each of them.
It can be used for NER and NED tasks. This dataset has a single split.
| [
"# Dataset Card for tmVar v2",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,NED\n\n\nThis dataset contains 158 PubMed articles manually annotated with mutation mentions of various kinds and dbSNP normalizations for each of them.\nIt can be used for NER and NED tasks. This dataset has a single split."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-unknown #region-us \n",
"# Dataset Card for tmVar v2",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,NED\n\n\nThis dataset contains 158 PubMed articles manually annotated with mutation mentions of various kinds and dbSNP normalizations for each of them.\nIt can be used for NER and NED tasks. This dataset has a single split."
] |
02004b5303fe6eafeae600662b3cbcc274723a58 |
# Dataset Card for tmVar v3
## Dataset Description
- **Homepage:** https://www.ncbi.nlm.nih.gov/research/bionlp/Tools/tmvar/
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,NED
This dataset contains 500 PubMed articles manually annotated with mutation mentions of various kinds and dbSNP normalizations for each of them. In addition, it contains variant normalization options such as allele-specific identifiers from the ClinGen Allele Registry. It can be used for NER and NED tasks. This dataset does NOT have splits.
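To make the NED side concrete, here is a minimal sketch of how a normalized mention is typically consumed as a (surface form, database identifier) pair. The mention, entity type, and rsID below are illustrative values chosen for this example, and the dict layout is an assumption, not the dataset's actual schema.

```python
# Illustrative sketch of an entity-normalization (NED) record in the
# spirit of tmVar's dbSNP links. The values below are example data,
# not actual dataset contents.
mention = {
    "text": "V600E",            # surface form as written in the article
    "type": "ProteinMutation",
    "db_name": "dbSNP",
    "db_id": "rs113488022",     # normalization target identifier
}

# NED output is usually consumed as (mention, identifier) pairs:
pair = (mention["text"], f'{mention["db_name"]}:{mention["db_id"]}')
assert pair == ("V600E", "dbSNP:rs113488022")
```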
## Citation Information
```
@misc{https://doi.org/10.48550/arxiv.2204.03637,
title = {tmVar 3.0: an improved variant concept recognition and normalization tool},
author = {
Wei, Chih-Hsuan and Allot, Alexis and Riehle, Kevin and Milosavljevic,
Aleksandar and Lu, Zhiyong
},
year = 2022,
publisher = {arXiv},
doi = {10.48550/ARXIV.2204.03637},
url = {https://arxiv.org/abs/2204.03637},
copyright = {Creative Commons Attribution 4.0 International},
keywords = {
Computation and Language (cs.CL), FOS: Computer and information sciences,
FOS: Computer and information sciences
}
}
```
| bigbio/tmvar_v3 | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"arxiv:2204.03637",
"region:us"
] | 2022-11-13T22:12:35+00:00 | {"language": ["en"], "license": "unknown", "multilinguality": "monolingual", "pretty_name": "tmVar v3", "bigbio_language": ["English"], "bigbio_license_shortname": "UNKNOWN", "homepage": "https://www.ncbi.nlm.nih.gov/research/bionlp/Tools/tmvar/", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "NAMED_ENTITY_DISAMBIGUATION"]} | 2023-02-17T14:55:58+00:00 | [
"2204.03637"
] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-unknown #arxiv-2204.03637 #region-us
|
# Dataset Card for tmVar v3
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NER,NED
This dataset contains 500 PubMed articles manually annotated with mutation mentions of various kinds and dbSNP normalizations for each of them. In addition, it contains variant normalization options such as allele-specific identifiers from the ClinGen Allele Registry. It can be used for NER and NED tasks. This dataset does NOT have splits.
| [
"# Dataset Card for tmVar v3",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,NED\n\n\nThis dataset contains 500 PubMed articles manually annotated with mutation mentions of various kinds and dbSNP normalizations for each of them. In addition, it contains variant normalization options such as allele-specific identifiers from the ClinGen Allele Registry. It can be used for NER and NED tasks. This dataset does NOT have splits."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-unknown #arxiv-2204.03637 #region-us \n",
"# Dataset Card for tmVar v3",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,NED\n\n\nThis dataset contains 500 PubMed articles manually annotated with mutation mentions of various kinds and dbSNP normalizations for each of them. In addition, it contains variant normalization options such as allele-specific identifiers from the ClinGen Allele Registry. It can be used for NER and NED tasks. This dataset does NOT have splits."
] |
a87224f51261b436e67f7b0888508431a71177cc |
# Dataset Card for TwADR-L
## Dataset Description
- **Homepage:** https://zenodo.org/record/55013
- **Pubmed:** False
- **Public:** True
- **Tasks:** NER,NED
The TwADR-L dataset contains medical concepts written on social media (Twitter) mapped to how they are formally written in medical ontologies (SIDER 4).
## Citation Information
```
@inproceedings{limsopatham-collier-2016-normalising,
title = "Normalising Medical Concepts in Social Media Texts by Learning Semantic Representation",
author = "Limsopatham, Nut and
Collier, Nigel",
booktitle = "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2016",
address = "Berlin, Germany",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P16-1096",
doi = "10.18653/v1/P16-1096",
pages = "1014--1023",
}
```
| bigbio/twadrl | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-11-13T22:12:38+00:00 | {"language": ["en"], "license": "cc-by-4.0", "multilinguality": "monolingual", "pretty_name": "TwADR-L", "bigbio_language": ["English"], "bigbio_license_shortname": "CC_BY_4p0", "homepage": "https://zenodo.org/record/55013", "bigbio_pubmed": false, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "NAMED_ENTITY_DISAMBIGUATION"]} | 2022-12-22T15:47:15+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-cc-by-4.0 #region-us
|
# Dataset Card for TwADR-L
## Dataset Description
- Homepage: URL
- Pubmed: False
- Public: True
- Tasks: NER,NED
The TwADR-L dataset contains medical concepts written on social media (Twitter) mapped to how they are formally written in medical ontologies (SIDER 4).
| [
"# Dataset Card for TwADR-L",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: NER,NED\n\n\n\nThe TwADR-L dataset contains medical concepts written on social media (Twitter) mapped to how they are formally written in medical ontologies (SIDER 4)."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-cc-by-4.0 #region-us \n",
"# Dataset Card for TwADR-L",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: NER,NED\n\n\n\nThe TwADR-L dataset contains medical concepts written on social media (Twitter) mapped to how they are formally written in medical ontologies (SIDER 4)."
] |
5a7a75c8969a3afcc8bd6c9422532ac02ec3fd09 |
# Dataset Card for UMNSRS
## Dataset Description
- **Homepage:** https://conservancy.umn.edu/handle/11299/196265/
- **Pubmed:** False
- **Public:** True
- **Tasks:** STS
UMNSRS, developed by Pakhomov et al., consists of 725 clinical term pairs rated for semantic similarity and relatedness.
The similarity and relatedness of each term pair was annotated on a continuous scale by having the resident touch
a bar on a touch-sensitive computer screen to indicate the degree of similarity or relatedness.
The following subsets are available:
- similarity: A set of 566 UMLS concept pairs manually rated for semantic similarity (e.g. whale-dolphin) using a
continuous response scale.
- relatedness: A set of 588 UMLS concept pairs manually rated for semantic relatedness (e.g. needle-thread) using a
continuous response scale.
- similarity_mod: Modification of the UMNSRS-Similarity dataset to exclude control samples and those pairs that did not
match text in clinical, biomedical and general English corpora. Exact modifications are detailed in the paper (Corpus
Domain Effects on Distributional Semantic Modeling of Medical Terms. Serguei V.S. Pakhomov, Greg Finley, Reed McEwan,
Yan Wang, and Genevieve B. Melton. Bioinformatics. 2016; 32(23):3635-3644). The resulting dataset contains 449 pairs.
- relatedness_mod: Modification of the UMNSRS-Relatedness dataset to exclude control samples and those pairs that did
not match text in clinical, biomedical and general English corpora. Exact modifications are detailed in the paper
(Corpus Domain Effects on Distributional Semantic Modeling of Medical Terms. Serguei V.S. Pakhomov, Greg Finley,
Reed McEwan, Yan Wang, and Genevieve B. Melton. Bioinformatics. 2016; 32(23):3635-3644).
The resulting dataset contains 458 pairs.
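Benchmarks of this kind are commonly used by correlating a model's similarity scores with the human ratings (Spearman rank correlation is a typical choice). The sketch below is a self-contained, dependency-free version of that evaluation; the score pairs are invented toy values, not actual UMNSRS ratings, and assume no ties in the scores.

```python
# Minimal Spearman rank-correlation sketch for comparing model similarity
# scores against continuous human ratings. Toy values only.

def ranks(values):
    """1-based ranks; ties broken by position (fine for this sketch)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(xs, ys):
    """Spearman rho computed as the Pearson correlation of the ranks."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

human = [9.1, 2.3, 6.7, 4.0]      # toy continuous-scale ratings
model = [0.92, 0.10, 0.55, 0.41]  # toy cosine similarities
rho = spearman(human, model)
assert abs(rho - 1.0) < 1e-9  # identical rank order in this toy example
```

For real evaluations, a library routine such as `scipy.stats.spearmanr` (which also handles ties properly) would normally replace this hand-rolled version.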
## Citation Information
```
@inproceedings{pakhomov2010semantic,
title={Semantic similarity and relatedness between clinical terms: an experimental study},
author={Pakhomov, Serguei and McInnes, Bridget and Adam, Terrence and Liu, Ying and Pedersen, Ted and Melton, Genevieve B},
booktitle={AMIA annual symposium proceedings},
volume={2010},
pages={572},
year={2010},
organization={American Medical Informatics Association}
}
```
| bigbio/umnsrs | [
"multilinguality:monolingual",
"language:en",
"license:cc0-1.0",
"region:us"
] | 2022-11-13T22:12:42+00:00 | {"language": ["en"], "license": "cc0-1.0", "multilinguality": "monolingual", "pretty_name": "UMNSRS", "bigbio_language": ["English"], "bigbio_license_shortname": "CC0_1p0", "homepage": "https://conservancy.umn.edu/handle/11299/196265/", "bigbio_pubmed": false, "bigbio_public": true, "bigbio_tasks": ["SEMANTIC_SIMILARITY"]} | 2022-12-22T15:47:36+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-cc0-1.0 #region-us
|
# Dataset Card for UMNSRS
## Dataset Description
- Homepage: URL
- Pubmed: False
- Public: True
- Tasks: STS
UMNSRS, developed by Pakhomov et al., consists of 725 clinical term pairs rated for semantic similarity and relatedness.
The similarity and relatedness of each term pair was annotated on a continuous scale by having the resident touch
a bar on a touch-sensitive computer screen to indicate the degree of similarity or relatedness.
The following subsets are available:
- similarity: A set of 566 UMLS concept pairs manually rated for semantic similarity (e.g. whale-dolphin) using a
continuous response scale.
- relatedness: A set of 588 UMLS concept pairs manually rated for semantic relatedness (e.g. needle-thread) using a
continuous response scale.
- similarity_mod: Modification of the UMNSRS-Similarity dataset to exclude control samples and those pairs that did not
match text in clinical, biomedical and general English corpora. Exact modifications are detailed in the paper (Corpus
Domain Effects on Distributional Semantic Modeling of Medical Terms. Serguei V.S. Pakhomov, Greg Finley, Reed McEwan,
Yan Wang, and Genevieve B. Melton. Bioinformatics. 2016; 32(23):3635-3644). The resulting dataset contains 449 pairs.
- relatedness_mod: Modification of the UMNSRS-Relatedness dataset to exclude control samples and those pairs that did
not match text in clinical, biomedical and general English corpora. Exact modifications are detailed in the paper
(Corpus Domain Effects on Distributional Semantic Modeling of Medical Terms. Serguei V.S. Pakhomov, Greg Finley,
Reed McEwan, Yan Wang, and Genevieve B. Melton. Bioinformatics. 2016; 32(23):3635-3644).
The resulting dataset contains 458 pairs.
| [
"# Dataset Card for UMNSRS",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: STS\n\n\nUMNSRS, developed by Pakhomov, et al., consists of 725 clinical term pairs whose semantic similarity and relatedness.\nThe similarity and relatedness of each term pair was annotated based on a continuous scale by having the resident touch\na bar on a touch sensitive computer screen to indicate the degree of similarity or relatedness.\nThe following subsets are available:\n- similarity: A set of 566 UMLS concept pairs manually rated for semantic similarity (e.g. whale-dolphin) using a\n continuous response scale.\n- relatedness: A set of 588 UMLS concept pairs manually rated for semantic relatedness (e.g. needle-thread) using a\n continuous response scale.\n- similarity_mod: Modification of the UMNSRS-Similarity dataset to exclude control samples and those pairs that did not\n match text in clinical, biomedical and general English corpora. Exact modifications are detailed in the paper (Corpus\n Domain Effects on Distributional Semantic Modeling of Medical Terms. Serguei V.S. Pakhomov, Greg Finley, Reed McEwan,\n Yan Wang, and Genevieve B. Melton. Bioinformatics. 2016; 32(23):3635-3644). The resulting dataset contains 449 pairs.\n- relatedness_mod: Modification of the UMNSRS-Relatedness dataset to exclude control samples and those pairs that did\n not match text in clinical, biomedical and general English corpora. Exact modifications are detailed in the paper\n (Corpus Domain Effects on Distributional Semantic Modeling of Medical Terms. Serguei V.S. Pakhomov, Greg Finley,\n Reed McEwan, Yan Wang, and Genevieve B. Melton. Bioinformatics. 2016; 32(23):3635-3644).\n The resulting dataset contains 458 pairs."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-cc0-1.0 #region-us \n",
"# Dataset Card for UMNSRS",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: STS\n\n\nUMNSRS, developed by Pakhomov, et al., consists of 725 clinical term pairs whose semantic similarity and relatedness.\nThe similarity and relatedness of each term pair was annotated based on a continuous scale by having the resident touch\na bar on a touch sensitive computer screen to indicate the degree of similarity or relatedness.\nThe following subsets are available:\n- similarity: A set of 566 UMLS concept pairs manually rated for semantic similarity (e.g. whale-dolphin) using a\n continuous response scale.\n- relatedness: A set of 588 UMLS concept pairs manually rated for semantic relatedness (e.g. needle-thread) using a\n continuous response scale.\n- similarity_mod: Modification of the UMNSRS-Similarity dataset to exclude control samples and those pairs that did not\n match text in clinical, biomedical and general English corpora. Exact modifications are detailed in the paper (Corpus\n Domain Effects on Distributional Semantic Modeling of Medical Terms. Serguei V.S. Pakhomov, Greg Finley, Reed McEwan,\n Yan Wang, and Genevieve B. Melton. Bioinformatics. 2016; 32(23):3635-3644). The resulting dataset contains 449 pairs.\n- relatedness_mod: Modification of the UMNSRS-Relatedness dataset to exclude control samples and those pairs that did\n not match text in clinical, biomedical and general English corpora. Exact modifications are detailed in the paper\n (Corpus Domain Effects on Distributional Semantic Modeling of Medical Terms. Serguei V.S. Pakhomov, Greg Finley,\n Reed McEwan, Yan Wang, and Genevieve B. Melton. Bioinformatics. 2016; 32(23):3635-3644).\n The resulting dataset contains 458 pairs."
] |
174cfc42c5a5316b67904ba1ad7fce5fe00205cc |
# Dataset Card for Verspoor 2013
## Dataset Description
- **Homepage:** NA
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,RE
This dataset contains annotations for a small corpus of full text journal publications on the subject of inherited colorectal cancer. It is suitable for Named Entity Recognition and Relation Extraction tasks. It uses the Variome Annotation Schema, a schema that aims to capture the core concepts and relations relevant to cataloguing and interpreting human genetic variation and its relationship to disease, as described in the published literature. The schema was inspired by the needs of the database curators of the International Society for Gastrointestinal Hereditary Tumours (InSiGHT) database, but is intended to have application to genetic variation information in a range of diseases.
## Citation Information
```
@article{verspoor2013annotating,
title = {Annotating the biomedical literature for the human variome},
author = {
Verspoor, Karin and Jimeno Yepes, Antonio and Cavedon, Lawrence and
McIntosh, Tara and Herten-Crabb, Asha and Thomas, Zo{\"e} and Plazzer,
John-Paul
},
year = 2013,
journal = {Database},
publisher = {Oxford Academic},
volume = 2013
}
```
| bigbio/verspoor_2013 | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-11-13T22:12:45+00:00 | {"language": ["en"], "license": "unknown", "multilinguality": "monolingual", "pretty_name": "Verspoor 2013", "bigbio_language": ["English"], "bigbio_license_shortname": "UNKNOWN", "homepage": "NA", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "RELATION_EXTRACTION"]} | 2022-12-22T15:47:37+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-unknown #region-us
|
# Dataset Card for Verspoor 2013
## Dataset Description
- Homepage: NA
- Pubmed: True
- Public: True
- Tasks: NER,RE
This dataset contains annotations for a small corpus of full text journal publications on the subject of inherited colorectal cancer. It is suitable for Named Entity Recognition and Relation Extraction tasks. It uses the Variome Annotation Schema, a schema that aims to capture the core concepts and relations relevant to cataloguing and interpreting human genetic variation and its relationship to disease, as described in the published literature. The schema was inspired by the needs of the database curators of the International Society for Gastrointestinal Hereditary Tumours (InSiGHT) database, but is intended to have application to genetic variation information in a range of diseases.
| [
"# Dataset Card for Verspoor 2013",
"## Dataset Description\n\n- Homepage: NA\n- Pubmed: True\n- Public: True\n- Tasks: NER,RE\n\n\nThis dataset contains annotations for a small corpus of full text journal publications on the subject of inherited colorectal cancer. It is suitable for Named Entity Recognition and Relation Extraction tasks. It uses the Variome Annotation Schema, a schema that aims to capture the core concepts and relations relevant to cataloguing and interpreting human genetic variation and its relationship to disease, as described in the published literature. The schema was inspired by the needs of the database curators of the International Society for Gastrointestinal Hereditary Tumours (InSiGHT) database, but is intended to have application to genetic variation information in a range of diseases."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-unknown #region-us \n",
"# Dataset Card for Verspoor 2013",
"## Dataset Description\n\n- Homepage: NA\n- Pubmed: True\n- Public: True\n- Tasks: NER,RE\n\n\nThis dataset contains annotations for a small corpus of full text journal publications on the subject of inherited colorectal cancer. It is suitable for Named Entity Recognition and Relation Extraction tasks. It uses the Variome Annotation Schema, a schema that aims to capture the core concepts and relations relevant to cataloguing and interpreting human genetic variation and its relationship to disease, as described in the published literature. The schema was inspired by the needs of the database curators of the International Society for Gastrointestinal Hereditary Tumours (InSiGHT) database, but is intended to have application to genetic variation information in a range of diseases."
] |
16e1f83ef950f8265fc0a9e6628d43096ab6a22f |
# Dataset Card for Danish WIT
## Dataset Description
- **Repository:** <https://gist.github.com/saattrupdan/bb6c9c52d9f4b35258db2b2456d31224>
- **Point of Contact:** [Dan Saattrup Nielsen](mailto:[email protected])
- **Size of downloaded dataset files:** 7.5 GB
- **Size of the generated dataset:** 7.8 GB
- **Total amount of disk used:** 15.3 GB
### Dataset Summary
Google presented the Wikipedia Image Text (WIT) dataset in [July
2021](https://dl.acm.org/doi/abs/10.1145/3404835.3463257), a dataset which contains
scraped images from Wikipedia along with their descriptions. WikiMedia released
WIT-Base in [September
2021](https://techblog.wikimedia.org/2021/09/09/the-wikipedia-image-caption-matching-challenge-and-a-huge-release-of-image-data-for-research/),
being a modified version of WIT where they have removed the images with empty
"reference descriptions", as well as removing images where a person's face covers more
than 10% of the image surface, along with inappropriate images that are candidates for
deletion. This dataset is the Danish portion of the WIT-Base dataset, consisting of
roughly 160,000 images with associated Danish descriptions. We release the dataset
under the [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/), in
accordance with WIT-Base's [identical
license](https://huggingface.co/datasets/wikimedia/wit_base#licensing-information).
### Supported Tasks and Leaderboards
Training machine learning models for caption generation, zero-shot image classification
and text-image search are the intended tasks for this dataset. No leaderboard is active
at this point.
### Languages
The dataset is available in Danish (`da`).
## Dataset Structure
### Data Instances
- **Size of downloaded dataset files:** 7.5 GB
- **Size of the generated dataset:** 7.8 GB
- **Total amount of disk used:** 15.3 GB
An example from the `train` split looks as follows.
```
{
"image": [PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=300x409 at 0x7FE4384E2190],
"image_url": "https://upload.wikimedia.org/wikipedia/commons/4/45/Bispen_-_inside.jpg",
"embedding": [2.8568285, 2.9562542, 0.33794892, 8.753725, ...],
"metadata_url": "http://commons.wikimedia.org/wiki/File:Bispen_-_inside.jpg",
"original_height": 3161,
"original_width": 2316,
"mime_type": "image/jpeg",
"caption_attribution_description": "Kulturhuset Bispen set indefra. Biblioteket er til venstre",
"page_url": "https://da.wikipedia.org/wiki/Bispen",
"attribution_passes_lang_id": True,
"caption_alt_text_description": None,
"caption_reference_description": "Bispen set indefra fra 1. sal, hvor ....",
"caption_title_and_reference_description": "Bispen [SEP] Bispen set indefra ...",
"context_page_description": "Bispen er navnet på det offentlige kulturhus i ...",
"context_section_description": "Bispen er navnet på det offentlige kulturhus i ...",
"hierarchical_section_title": "Bispen",
"is_main_image": True,
"page_changed_recently": True,
"page_title": "Bispen",
"section_title": None
}
```
### Data Fields
The data fields are the same among all splits.
- `image`: an `Image` feature.
- `image_url`: a `str` feature.
- `embedding`: a `list` feature.
- `metadata_url`: a `str` feature.
- `original_height`: an `int` or `NaN` feature.
- `original_width`: an `int` or `NaN` feature.
- `mime_type`: a `str` or `None` feature.
- `caption_attribution_description`: a `str` or `None` feature.
- `page_url`: a `str` feature.
- `attribution_passes_lang_id`: a `bool` or `None` feature.
- `caption_alt_text_description`: a `str` or `None` feature.
- `caption_reference_description`: a `str` or `None` feature.
- `caption_title_and_reference_description`: a `str` or `None` feature.
- `context_page_description`: a `str` or `None` feature.
- `context_section_description`: a `str` or `None` feature.
- `hierarchical_section_title`: a `str` feature.
- `is_main_image`: a `bool` or `None` feature.
- `page_changed_recently`: a `bool` or `None` feature.
- `page_title`: a `str` feature.
- `section_title`: a `str` or `None` feature.
### Data Splits
Roughly 2.60% of the WIT-Base dataset comes from the Danish Wikipedia. We have split
the resulting 168,740 samples into a training set, validation set and testing set of
the following sizes:
| split | samples |
|---------|--------:|
| train | 167,460 |
| val | 256 |
| test | 1,024 |
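The table above can be cross-checked against the stated total with a trivial arithmetic sanity check (all numbers are taken directly from this card):

```python
# Split sizes quoted in the table above; they should sum to the
# 168,740 Danish samples extracted from WIT-Base.
splits = {"train": 167_460, "val": 256, "test": 1_024}
assert sum(splits.values()) == 168_740
```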
## Dataset Creation
### Curation Rationale
It is quite cumbersome to extract the Danish portion of the WIT-Base dataset,
especially as the dataset takes up 333 GB of disk space, so the curation of Danish-WIT
is purely to make it easier to work with the Danish portion of it.
### Source Data
The original data was collected from WikiMedia's
[WIT-Base](https://huggingface.co/datasets/wikimedia/wit_base) dataset, which in turn
comes from Google's [WIT](https://huggingface.co/datasets/google/wit) dataset.
## Additional Information
### Dataset Curators
[Dan Saattrup Nielsen](https://saattrupdan.github.io/) from the [The Alexandra
Institute](https://alexandra.dk/) curated this dataset.
### Licensing Information
The dataset is licensed under the [CC BY-SA 4.0
license](https://creativecommons.org/licenses/by-sa/4.0/).
| alexandrainst/da-wit | [
"task_categories:image-to-text",
"task_categories:zero-shot-image-classification",
"task_categories:feature-extraction",
"task_ids:image-captioning",
"size_categories:100K<n<1M",
"source_datasets:wikimedia/wit_base",
"language:da",
"license:cc-by-sa-4.0",
"region:us"
] | 2022-11-13T22:14:51+00:00 | {"language": ["da"], "license": ["cc-by-sa-4.0"], "size_categories": ["100K<n<1M"], "source_datasets": ["wikimedia/wit_base"], "task_categories": ["image-to-text", "zero-shot-image-classification", "feature-extraction"], "task_ids": ["image-captioning"], "pretty_name": "Danish WIT"} | 2022-11-18T15:48:44+00:00 | [] | [
"da"
] | TAGS
#task_categories-image-to-text #task_categories-zero-shot-image-classification #task_categories-feature-extraction #task_ids-image-captioning #size_categories-100K<n<1M #source_datasets-wikimedia/wit_base #language-Danish #license-cc-by-sa-4.0 #region-us
| Dataset Card for Danish WIT
===========================
Dataset Description
-------------------
* Repository: <URL
* Point of Contact: Dan Saattrup Nielsen
* Size of downloaded dataset files: 7.5 GB
* Size of the generated dataset: 7.8 GB
* Total amount of disk used: 15.3 GB
### Dataset Summary
Google presented the Wikipedia Image Text (WIT) dataset in July
2021, a dataset which contains
scraped images from Wikipedia along with their descriptions. WikiMedia released
WIT-Base in September
2021,
being a modified version of WIT where they have removed the images with empty
"reference descriptions", as well as removing images where a person's face covers more
than 10% of the image surface, along with inappropriate images that are candidates for
deletion. This dataset is the Danish portion of the WIT-Base dataset, consisting of
roughly 160,000 images with associated Danish descriptions. We release the dataset
under the CC BY-SA 4.0 license, in
accordance with WIT-Base's identical
license.
### Supported Tasks and Leaderboards
Training machine learning models for caption generation, zero-shot image classification
and text-image search are the intended tasks for this dataset. No leaderboard is active
at this point.
### Languages
The dataset is available in Danish ('da').
Dataset Structure
-----------------
### Data Instances
* Size of downloaded dataset files: 7.5 GB
* Size of the generated dataset: 7.8 GB
* Total amount of disk used: 15.3 GB
An example from the 'train' split looks as follows.
### Data Fields
The data fields are the same among all splits.
* 'image': an 'Image' feature.
* 'image\_url': a 'str' feature.
* 'embedding': a 'list' feature.
* 'metadata\_url': a 'str' feature.
* 'original\_height': an 'int' or 'NaN' feature.
* 'original\_width': an 'int' or 'NaN' feature.
* 'mime\_type': a 'str' or 'None' feature.
* 'caption\_attribution\_description': a 'str' or 'None' feature.
* 'page\_url': a 'str' feature.
* 'attribution\_passes\_lang\_id': a 'bool' or 'None' feature.
* 'caption\_alt\_text\_description': a 'str' or 'None' feature.
* 'caption\_reference\_description': a 'str' or 'None' feature.
* 'caption\_title\_and\_reference\_description': a 'str' or 'None' feature.
* 'context\_page\_description': a 'str' or 'None' feature.
* 'context\_section\_description': a 'str' or 'None' feature.
* 'hierarchical\_section\_title': a 'str' feature.
* 'is\_main\_image': a 'bool' or 'None' feature.
* 'page\_changed\_recently': a 'bool' or 'None' feature.
* 'page\_title': a 'str' feature.
* 'section\_title': a 'str' or 'None' feature.
### Data Splits
Roughly 2.60% of the WIT-Base dataset comes from the Danish Wikipedia. We have split
the resulting 168,740 samples into a training set, validation set and testing set of
the following sizes:
Dataset Creation
----------------
### Curation Rationale
It is quite cumbersome to extract the Danish portion of the WIT-Base dataset,
especially as the dataset takes up 333 GB of disk space, so the curation of Danish-WIT
is purely to make it easier to work with the Danish portion of it.
### Source Data
The original data was collected from WikiMedia's
WIT-Base dataset, which in turn
comes from Google's WIT dataset.
Additional Information
----------------------
### Dataset Curators
Dan Saattrup Nielsen from the Alexandra
Institute curated this dataset.
### Licensing Information
The dataset is licensed under the CC BY-SA 4.0
license.
| [
"### Dataset Summary\n\n\nGoogle presented the Wikipedia Image Text (WIT) dataset in July\n2021, a dataset which contains\nscraped images from Wikipedia along with their descriptions. WikiMedia released\nWIT-Base in September\n2021,\nbeing a modified version of WIT where they have removed the images with empty\n\"reference descriptions\", as well as removing images where a person's face covers more\nthan 10% of the image surface, along with inappropriate images that are candidate for\ndeletion. This dataset is the Danish portion of the WIT-Base dataset, consisting of\nroughly 160,000 images with associated Danish descriptions. We release the dataset\nunder the CC BY-SA 4.0 license, in\naccordance with WIT-Base's identical\nlicense.",
"### Supported Tasks and Leaderboards\n\n\nTraining machine learning models for caption generation, zero-shot image classification\nand text-image search are the intended tasks for this dataset. No leaderboard is active\nat this point.",
"### Languages\n\n\nThe dataset is available in Danish ('da').\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\n* Size of downloaded dataset files: 7.5 GB\n* Size of the generated dataset: 7.8 GB\n* Total amount of disk used: 15.3 GB\n\n\nAn example from the 'train' split looks as follows.",
"### Data Fields\n\n\nThe data fields are the same among all splits.\n\n\n* 'image': an 'Image' feature.\n* 'image\\_url': a 'str' feature.\n* 'embedding': a 'list' feature.\n* 'metadata\\_url': a 'str' feature.\n* 'original\\_height': an 'int' or 'NaN' feature.\n* 'original\\_width': an 'int' or 'NaN' feature.\n* 'mime\\_type': a 'str' or 'None' feature.\n* 'caption\\_attribution\\_description': a 'str' or 'None' feature.\n* 'page\\_url': a 'str' feature.\n* 'attribution\\_passes\\_lang\\_id': a 'bool' or 'None' feature.\n* 'caption\\_alt\\_text\\_description': a 'str' or 'None' feature.\n* 'caption\\_reference\\_description': a 'str' or 'None' feature.\n* 'caption\\_title\\_and\\_reference\\_description': a 'str' or 'None' feature.\n* 'context\\_page\\_description': a 'str' or 'None' feature.\n* 'context\\_section\\_description': a 'str' or 'None' feature.\n* 'hierarchical\\_section\\_title': a 'str' feature.\n* 'is\\_main\\_image': a 'bool' or 'None' feature.\n* 'page\\_changed\\_recently': a 'bool' or 'None' feature.\n* 'page\\_title': a 'str' feature.\n* 'section\\_title': a 'str' or 'None' feature.",
"### Data Splits\n\n\nRoughly 2.60% of the WIT-Base dataset comes from the Danish Wikipedia. We have split\nthe resulting 168,740 samples into a training set, validation set and testing set of\nthe following sizes:\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nIt is quite cumbersome to extract the Danish portion of the WIT-Base dataset,\nespecially as the dataset takes up 333 GB of disk space, so the curation of Danish-WIT\nis purely to make it easier to work with the Danish portion of it.",
"### Source Data\n\n\nThe original data was collected from WikiMedia's\nWIT-Base dataset, which in turn\ncomes from Google's WIT dataset.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nDan Saattrup Nielsen from the The Alexandra\nInstitute curated this dataset.",
"### Licensing Information\n\n\nThe dataset is licensed under the CC BY-SA 4.0\nlicense."
] | [
"TAGS\n#task_categories-image-to-text #task_categories-zero-shot-image-classification #task_categories-feature-extraction #task_ids-image-captioning #size_categories-100K<n<1M #source_datasets-wikimedia/wit_base #language-Danish #license-cc-by-sa-4.0 #region-us \n",
"### Dataset Summary\n\n\nGoogle presented the Wikipedia Image Text (WIT) dataset in July\n2021, a dataset which contains\nscraped images from Wikipedia along with their descriptions. WikiMedia released\nWIT-Base in September\n2021,\nbeing a modified version of WIT where they have removed the images with empty\n\"reference descriptions\", as well as removing images where a person's face covers more\nthan 10% of the image surface, along with inappropriate images that are candidate for\ndeletion. This dataset is the Danish portion of the WIT-Base dataset, consisting of\nroughly 160,000 images with associated Danish descriptions. We release the dataset\nunder the CC BY-SA 4.0 license, in\naccordance with WIT-Base's identical\nlicense.",
"### Supported Tasks and Leaderboards\n\n\nTraining machine learning models for caption generation, zero-shot image classification\nand text-image search are the intended tasks for this dataset. No leaderboard is active\nat this point.",
"### Languages\n\n\nThe dataset is available in Danish ('da').\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\n* Size of downloaded dataset files: 7.5 GB\n* Size of the generated dataset: 7.8 GB\n* Total amount of disk used: 15.3 GB\n\n\nAn example from the 'train' split looks as follows.",
"### Data Fields\n\n\nThe data fields are the same among all splits.\n\n\n* 'image': an 'Image' feature.\n* 'image\\_url': a 'str' feature.\n* 'embedding': a 'list' feature.\n* 'metadata\\_url': a 'str' feature.\n* 'original\\_height': an 'int' or 'NaN' feature.\n* 'original\\_width': an 'int' or 'NaN' feature.\n* 'mime\\_type': a 'str' or 'None' feature.\n* 'caption\\_attribution\\_description': a 'str' or 'None' feature.\n* 'page\\_url': a 'str' feature.\n* 'attribution\\_passes\\_lang\\_id': a 'bool' or 'None' feature.\n* 'caption\\_alt\\_text\\_description': a 'str' or 'None' feature.\n* 'caption\\_reference\\_description': a 'str' or 'None' feature.\n* 'caption\\_title\\_and\\_reference\\_description': a 'str' or 'None' feature.\n* 'context\\_page\\_description': a 'str' or 'None' feature.\n* 'context\\_section\\_description': a 'str' or 'None' feature.\n* 'hierarchical\\_section\\_title': a 'str' feature.\n* 'is\\_main\\_image': a 'bool' or 'None' feature.\n* 'page\\_changed\\_recently': a 'bool' or 'None' feature.\n* 'page\\_title': a 'str' feature.\n* 'section\\_title': a 'str' or 'None' feature.",
"### Data Splits\n\n\nRoughly 2.60% of the WIT-Base dataset comes from the Danish Wikipedia. We have split\nthe resulting 168,740 samples into a training set, validation set and testing set of\nthe following sizes:\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nIt is quite cumbersome to extract the Danish portion of the WIT-Base dataset,\nespecially as the dataset takes up 333 GB of disk space, so the curation of Danish-WIT\nis purely to make it easier to work with the Danish portion of it.",
"### Source Data\n\n\nThe original data was collected from WikiMedia's\nWIT-Base dataset, which in turn\ncomes from Google's WIT dataset.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nDan Saattrup Nielsen from the The Alexandra\nInstitute curated this dataset.",
"### Licensing Information\n\n\nThe dataset is licensed under the CC BY-SA 4.0\nlicense."
] |
2190d35937c1ecb7b1f293d45165b2eb4f8dbe1b | # Dataset Card for "code-tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | NeelNanda/code-tokenized | [
"region:us"
] | 2022-11-14T00:04:10+00:00 | {"dataset_info": {"features": [{"name": "tokens", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 2436318372, "num_examples": 297257}], "download_size": 501062424, "dataset_size": 2436318372}} | 2022-11-14T00:05:01+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "code-tokenized"
More Information needed | [
"# Dataset Card for \"code-tokenized\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"code-tokenized\"\n\nMore Information needed"
] |
b214d7f3b750f4d8051d3c3d2e1f09f01dd251e7 | # Dataset Card for "c4-tokenized-2b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | NeelNanda/c4-tokenized-2b | [
"region:us"
] | 2022-11-14T00:15:38+00:00 | {"dataset_info": {"features": [{"name": "tokens", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 11145289620, "num_examples": 1359845}], "download_size": 2530851147, "dataset_size": 11145289620}} | 2022-11-14T00:26:59+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "c4-tokenized-2b"
More Information needed | [
"# Dataset Card for \"c4-tokenized-2b\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"c4-tokenized-2b\"\n\nMore Information needed"
] |
9699b134759636ee820acba74887bf165e49f8ef | Flickr30k Images Data | Ziyang/F30k | [
"region:us"
] | 2022-11-14T01:16:39+00:00 | {} | 2022-11-14T01:47:01+00:00 | [] | [] | TAGS
#region-us
| Flickr30k Images Data | [] | [
"TAGS\n#region-us \n"
] |
7f8f2478e374f161fede00a6ea1d7997201fb82c |
# Dataset Card for KsponSpeech
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [AIHub](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=realm&dataSetSn=123)
- **Repository:**
- **Paper:** [KsponSpeech](https://www.mdpi.com/2076-3417/10/19/6936)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This corpus contains 969 h of general open-domain dialog utterances, spoken by about 2000 native Korean speakers in a clean environment. All data were constructed by recording the dialogue of two people freely conversing on a variety of topics and manually transcribing the utterances. The transcription provides a dual transcription consisting of orthography and pronunciation, and disfluency tags for spontaneity of speech, such as filler words, repeated words, and word fragments. This paper also presents the baseline performance of an end-to-end speech recognition model trained with KsponSpeech. In addition, we investigated the performance of standard end-to-end architectures and the number of sub-word units suitable for Korean. We investigated issues that should be considered in spontaneous speech recognition in Korean. KsponSpeech is publicly available on an open data hub site of the Korea government.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
Korean
## Dataset Structure
### Data Instances
```json
{
'id': 'KsponSpeech_E00001',
'audio': {'path': None,
'array': array([0.0010376 , 0.00085449, 0.00097656, ..., 0.00250244, 0.0022583 ,
0.00253296]),
'sampling_rate': 16000},
'text': '어 일단은 억지로 과장해서 이렇게 하는 것보다 진실된 마음으로 이걸 어떻게 전달할 수 있을까 공감을 시킬 수 있을까 해서 좀'
}
```
### Data Fields
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- text: the transcription of the audio file.
- id: unique id of the data sample.
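The access-order advice above can be made concrete with a toy stand-in for lazy audio decoding (this mock only imitates the behaviour of the `datasets` library described in the field list; it is not the real API):

```python
class LazyAudioDataset:
    """Toy stand-in: decoding happens only when a row's audio is accessed."""

    def __init__(self, paths):
        self.paths = paths
        self.decode_count = 0  # how many files we "decoded"

    def _decode(self, path):
        self.decode_count += 1
        return {"path": path, "array": [0.0], "sampling_rate": 16000}

    def __getitem__(self, key):
        if isinstance(key, int):            # dataset[0] -> one row
            return {"audio": self._decode(self.paths[key])}
        if key == "audio":                  # dataset["audio"] -> whole column
            return [self._decode(p) for p in self.paths]
        raise KeyError(key)

ds = LazyAudioDataset([f"KsponSpeech_E{i:05d}.wav" for i in range(1000)])

row = ds[0]["audio"]      # decodes exactly 1 file
after_row = ds.decode_count
col = ds["audio"][0]      # decodes all 1000 files, then takes the first
after_col = ds.decode_count

print(after_row, after_col)  # → 1 1001
```

Indexing the row first decodes a single file; indexing the `"audio"` column first forces every file to be decoded before the first element is taken, which is why `dataset[0]["audio"]` is preferred.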
### Data Splits
| | Train | Valid | eval.clean | eval.other |
| ----- | ------ | ----- | ---- | ---- |
| #samples | 620000 | 2545 | 3000 | 3000 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
```bibtex
@Article{app10196936,
AUTHOR = {Bang, Jeong-Uk and Yun, Seung and Kim, Seung-Hi and Choi, Mu-Yeol and Lee, Min-Kyu and Kim, Yeo-Jeong and Kim, Dong-Hyun and Park, Jun and Lee, Young-Jik and Kim, Sang-Hun},
TITLE = {KsponSpeech: Korean Spontaneous Speech Corpus for Automatic Speech Recognition},
JOURNAL = {Applied Sciences},
VOLUME = {10},
YEAR = {2020},
NUMBER = {19},
ARTICLE-NUMBER = {6936},
URL = {https://www.mdpi.com/2076-3417/10/19/6936},
ISSN = {2076-3417},
ABSTRACT = {This paper introduces a large-scale spontaneous speech corpus of Korean, named KsponSpeech. This corpus contains 969 h of general open-domain dialog utterances, spoken by about 2000 native Korean speakers in a clean environment. All data were constructed by recording the dialogue of two people freely conversing on a variety of topics and manually transcribing the utterances. The transcription provides a dual transcription consisting of orthography and pronunciation, and disfluency tags for spontaneity of speech, such as filler words, repeated words, and word fragments. This paper also presents the baseline performance of an end-to-end speech recognition model trained with KsponSpeech. In addition, we investigated the performance of standard end-to-end architectures and the number of sub-word units suitable for Korean. We investigated issues that should be considered in spontaneous speech recognition in Korean. KsponSpeech is publicly available on an open data hub site of the Korea government.},
DOI = {10.3390/app10196936}
}
```
| Murple/ksponspeech | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:ko",
"region:us"
] | 2022-11-14T01:58:12+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced"], "language": ["ko"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["automatic-speech-recognition"], "task_ids": [], "pretty_name": "KsponSpeech", "tags": []} | 2022-11-14T02:41:37+00:00 | [] | [
"ko"
] | TAGS
#task_categories-automatic-speech-recognition #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Korean #region-us
| Dataset Card for KsponSpeech
============================
Table of Contents
-----------------
* Table of Contents
* Dataset Description
+ Dataset Summary
+ Supported Tasks and Leaderboards
+ Languages
* Dataset Structure
+ Data Instances
+ Data Fields
+ Data Splits
* Dataset Creation
+ Curation Rationale
+ Source Data
+ Annotations
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Dataset Curators
+ Licensing Information
+ Citation Information
+ Contributions
Dataset Description
-------------------
* Homepage: AIHub
* Repository:
* Paper: KsponSpeech
* Leaderboard:
* Point of Contact:
### Dataset Summary
This corpus contains 969 h of general open-domain dialog utterances, spoken by about 2000 native Korean speakers in a clean environment. All data were constructed by recording the dialogue of two people freely conversing on a variety of topics and manually transcribing the utterances. The transcription provides a dual transcription consisting of orthography and pronunciation, and disfluency tags for spontaneity of speech, such as filler words, repeated words, and word fragments. This paper also presents the baseline performance of an end-to-end speech recognition model trained with KsponSpeech. In addition, we investigated the performance of standard end-to-end architectures and the number of sub-word units suitable for Korean. We investigated issues that should be considered in spontaneous speech recognition in Korean. KsponSpeech is publicly available on an open data hub site of the Korea government.
### Supported Tasks and Leaderboards
### Languages
Korean
Dataset Structure
-----------------
### Data Instances
### Data Fields
* audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0]["audio"]' the audio file is automatically decoded and resampled to 'dataset.features["audio"].sampling\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '"audio"' column, *i.e.* 'dataset[0]["audio"]' should always be preferred over 'dataset["audio"][0]'.
* text: the transcription of the audio file.
* id: unique id of the data sample.
### Data Splits
Dataset Creation
----------------
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
Additional Information
----------------------
### Dataset Curators
### Licensing Information
| [
"### Dataset Summary\n\n\nThis corpus contains 969 h of general open-domain dialog utterances, spoken by about 2000 native Korean speakers in a clean environment. All data were constructed by recording the dialogue of two people freely conversing on a variety of topics and manually transcribing the utterances. The transcription provides a dual transcription consisting of orthography and pronunciation, and disfluency tags for spontaneity of speech, such as filler words, repeated words, and word fragments. This paper also presents the baseline performance of an end-to-end speech recognition model trained with KsponSpeech. In addition, we investigated the performance of standard end-to-end architectures and the number of sub-word units suitable for Korean. We investigated issues that should be considered in spontaneous speech recognition in Korean. KsponSpeech is publicly available on an open data hub site of the Korea government.",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nKorean\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"### Data Fields\n\n\n* audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling\\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n* text: the transcription of the audio file.\n* id: unique id of the data sample.",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information"
] | [
"TAGS\n#task_categories-automatic-speech-recognition #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Korean #region-us \n",
"### Dataset Summary\n\n\nThis corpus contains 969 h of general open-domain dialog utterances, spoken by about 2000 native Korean speakers in a clean environment. All data were constructed by recording the dialogue of two people freely conversing on a variety of topics and manually transcribing the utterances. The transcription provides a dual transcription consisting of orthography and pronunciation, and disfluency tags for spontaneity of speech, such as filler words, repeated words, and word fragments. This paper also presents the baseline performance of an end-to-end speech recognition model trained with KsponSpeech. In addition, we investigated the performance of standard end-to-end architectures and the number of sub-word units suitable for Korean. We investigated issues that should be considered in spontaneous speech recognition in Korean. KsponSpeech is publicly available on an open data hub site of the Korea government.",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nKorean\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"### Data Fields\n\n\n* audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling\\_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n* text: the transcription of the audio file.\n* id: unique id of the data sample.",
"### Data Splits\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations\n\n\nAdditional Information\n----------------------",
"### Dataset Curators",
"### Licensing Information"
] |
ccd203bc0b7fae5ccb76e768597b50299ae0917a | # Dataset Card for "tokenized-recipe-nlg-gpt2"
This is a tokenized version of the recipe-nlg database from https://recipenlg.cs.put.poznan.pl/.
The preprocessing on the original CSV was done following the methodology of the original paper (as best as I could interpret it), along with a similar 0.05 (5%) train/test split. The tokenizer used has some special tokens, but all of these parameters are accessible at https://huggingface.co/pratultandon/recipe-nlg-gpt2 if you want to recreate them. This dataset will save you a lot of time getting started if you want to experiment with training GPT-2 on the data yourself.
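Because each row already provides `input_ids` and `attention_mask`, the main step left before feeding GPT-2 is batching with padding. A minimal sketch with toy token ids (the pad id and example values are illustrative, not taken from the actual tokenizer):

```python
def pad_batch(examples, pad_token_id=0):
    # Pad variable-length tokenized examples to a common length and
    # build the matching attention masks (1 = real token, 0 = padding).
    max_len = max(len(e["input_ids"]) for e in examples)
    batch = {"input_ids": [], "attention_mask": []}
    for e in examples:
        n_pad = max_len - len(e["input_ids"])
        batch["input_ids"].append(e["input_ids"] + [pad_token_id] * n_pad)
        batch["attention_mask"].append(e["attention_mask"] + [0] * n_pad)
    return batch

# Toy pre-tokenized rows with the same fields as this dataset.
examples = [
    {"input_ids": [5, 8, 2], "attention_mask": [1, 1, 1]},
    {"input_ids": [7, 3], "attention_mask": [1, 1]},
]
batch = pad_batch(examples)
print(batch["input_ids"])       # → [[5, 8, 2], [7, 3, 0]]
print(batch["attention_mask"])  # → [[1, 1, 1], [1, 1, 0]]
```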
| pratultandon/tokenized-recipe-nlg-gpt2 | [
"region:us"
] | 2022-11-14T02:07:58+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "test", "num_bytes": 135944246, "num_examples": 106202}, {"name": "train", "num_bytes": 2582090838, "num_examples": 2022671}], "download_size": 805955428, "dataset_size": 2718035084}} | 2022-11-16T17:14:01+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "tokenized-recipe-nlg-gpt2"
This a tokenized version of the recipe-nlg database from URL
The preprocessing on the original csv was done using the methodology of the original paper (best as I could interpret) along with a similar 0.05 percent train test split. The tokenizer used has some special tokens, but all these parameters are accessible in URL if you want to recreate. This dataset will save you a lot of time getting started if you want to experiment with training GPT2 on the data yourself.
| [
"# Dataset Card for \"tokenized-recipe-nlg-gpt2\"\n\nThis a tokenized version of the recipe-nlg database from URL \nThe preprocessing on the original csv was done using the methodology of the original paper (best as I could interpret) along with a similar 0.05 percent train test split. The tokenizer used has some special tokens, but all these parameters are accessible in URL if you want to recreate. This dataset will save you a lot of time getting started if you want to experiment with training GPT2 on the data yourself."
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"tokenized-recipe-nlg-gpt2\"\n\nThis a tokenized version of the recipe-nlg database from URL \nThe preprocessing on the original csv was done using the methodology of the original paper (best as I could interpret) along with a similar 0.05 percent train test split. The tokenizer used has some special tokens, but all these parameters are accessible in URL if you want to recreate. This dataset will save you a lot of time getting started if you want to experiment with training GPT2 on the data yourself."
] |
3d3615f3b90aa9f63635597e8123820dab866749 |
# Dataset Card for MMCRSC
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [MAGICDATA Mandarin Chinese Read Speech Corpus](https://openslr.org/68/)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
MAGICDATA Mandarin Chinese Read Speech Corpus was developed by MAGIC DATA Technology Co., Ltd. and freely published for non-commercial use.
The contents and the corresponding descriptions of the corpus include:
The corpus contains 755 hours of speech data, which is mostly mobile recorded data.
1080 speakers from different accent areas in China were invited to participate in the recording.
The sentence transcription accuracy is higher than 98%.
Recordings are conducted in a quiet indoor environment.
The database is divided into training set, validation set, and testing set in a ratio of 51: 1: 2.
Detailed information, such as speech data coding and speaker information, is preserved in the metadata file.
The domain of recording texts is diversified, including interactive Q&A, music search, SNS messages, home command and control, etc.
Segmented transcripts are also provided.
The corpus aims to support researchers in speech recognition, machine translation, speaker recognition, and other speech-related fields. Therefore, the corpus is totally free for academic use.
The corpus is a subset of a much larger dataset (the 10,566.9-hour Chinese Mandarin Speech Corpus) which was recorded in the same environment. Please feel free to contact us via [email protected] for more details.
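Given the 51:1:2 ratio and the 755-hour total stated above, the approximate hours per split work out as follows (a back-of-the-envelope estimate, not official figures):

```python
total_hours = 755
ratio = {"train": 51, "valid": 1, "test": 2}
parts = sum(ratio.values())  # 54

hours = {name: round(total_hours * r / parts, 1) for name, r in ratio.items()}
print(hours)  # → {'train': 713.1, 'valid': 14.0, 'test': 28.0}
```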
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
zh-CN
## Dataset Structure
### Data Instances
```json
{
'file': '14_3466_20170826171404.wav',
'audio': {
'path': '14_3466_20170826171404.wav',
'array': array([0., 0., 0., ..., 0., 0., 0.]),
'sampling_rate': 16000
},
'text': '请搜索我附近的超市',
'speaker_id': 143466,
'id': '14_3466_20170826171404.wav'
}
```
### Data Fields
- file: A path to the downloaded audio file in .wav format.
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- text: the transcription of the audio file.
- id: unique id of the data sample.
- speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
Please cite the corpus as "Magic Data Technology Co., Ltd., "http://www.imagicdatatech.com/index.php/home/dataopensource/data_info/id/101", 05/2019".
| Murple/mmcrsc | [
"task_categories:automatic-speech-recognition",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"source_datasets:original",
"language:zh",
"license:cc-by-nc-nd-4.0",
"region:us"
] | 2022-11-14T02:25:20+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["crowdsourced"], "language": ["zh"], "license": ["cc-by-nc-nd-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["automatic-speech-recognition"], "task_ids": [], "pretty_name": "MAGICDATA_Mandarin_Chinese_Read_Speech_Corpus", "tags": []} | 2022-11-14T02:37:54+00:00 | [] | [
"zh"
] | TAGS
#task_categories-automatic-speech-recognition #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Chinese #license-cc-by-nc-nd-4.0 #region-us
|
# Dataset Card for MMCRSC
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: MAGICDATA Mandarin Chinese Read Speech Corpus
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
MAGICDATA Mandarin Chinese Read Speech Corpus was developed by MAGIC DATA Technology Co., Ltd. and freely published for non-commercial use.
The contents and the corresponding descriptions of the corpus include:
The corpus contains 755 hours of speech data, which is mostly mobile recorded data.
1080 speakers from different accent areas in China are invited to participate in the recording.
The sentence transcription accuracy is higher than 98%.
Recordings are conducted in a quiet indoor environment.
The database is divided into a training set, validation set, and testing set in a ratio of 51:1:2.
Detailed information such as speech data coding and speaker information is preserved in the metadata file.
The domain of recording texts is diversified, including interactive Q&A, music search, SNS messages, home command and control, etc.
Segmented transcripts are also provided.
The corpus aims to support researchers in speech recognition, machine translation, speaker recognition, and other speech-related fields. Therefore, the corpus is totally free for academic use.
The corpus is a subset of a much bigger data set (10566.9 hours Chinese Mandarin Speech Corpus) which was recorded in the same environment. Please feel free to contact us via business@URL for more details.
### Supported Tasks and Leaderboards
### Languages
zh-CN
## Dataset Structure
### Data Instances
### Data Fields
- file: A path to the downloaded audio file in .wav format.
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0]["audio"]' the audio file is automatically decoded and resampled to 'dataset.features["audio"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '"audio"' column, *i.e.* 'dataset[0]["audio"]' should always be preferred over 'dataset["audio"][0]'.
- text: the transcription of the audio file.
- id: unique id of the data sample.
- speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.
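The row-first indexing advice above ('dataset[0]["audio"]' over 'dataset["audio"][0]') can be illustrated with a minimal stdlib sketch. The MockAudioDataset class and its decode counter below are purely illustrative stand-ins for the real lazy audio decoding, not the actual 'datasets' library implementation:

```python
# Mock illustration of why dataset[0]["audio"] is cheaper than
# dataset["audio"][0]: column access decodes every row first.

class MockAudioDataset:
    def __init__(self, paths):
        self.paths = paths
        self.decoded = 0  # counts how many files were "decoded"

    def _decode(self, path):
        self.decoded += 1
        return {"path": path, "array": [0.0], "sampling_rate": 16000}

    def __getitem__(self, key):
        if isinstance(key, int):   # row access: decode one file
            return {"audio": self._decode(self.paths[key])}
        if key == "audio":         # column access: decode all files
            return [self._decode(p) for p in self.paths]
        raise KeyError(key)

ds = MockAudioDataset([f"clip_{i}.wav" for i in range(1000)])
ds[0]["audio"]                    # decodes 1 file
row_first = ds.decoded
ds["audio"][0]                    # decodes all 1000 files
print(row_first, ds.decoded - row_first)  # 1 1000
```

With lazily decoded audio, querying the sample index first keeps the cost proportional to the rows you actually touch.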
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
Please cite the corpus as "Magic Data Technology Co., Ltd., "URL 05/2019".
| [
"# Dataset Card for MMCRSC",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: MAGICDATA Mandarin Chinese Read Speech Corpus\n- Repository:\n- Paper: \n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nMAGICDATA Mandarin Chinese Read Speech Corpus was developed by MAGIC DATA Technology Co., Ltd. and freely published for non-commercial use.\nThe contents and the corresponding descriptions of the corpus include:\n\nThe corpus contains 755 hours of speech data, which is mostly mobile recorded data.\n1080 speakers from different accent areas in China are invited to participate in the recording.\nThe sentence transcription accuracy is higher than 98%.\nRecordings are conducted in a quiet indoor environment.\nThe database is divided into training set, validation set, and testing set in a ratio of 51: 1: 2.\nDetail information such as speech data coding and speaker information is preserved in the metadata file.\nThe domain of recording texts is diversified, including interactive Q&A, music search, SNS messages, home command and control, etc.\nSegmented transcripts are also provided.\nThe corpus aims to support researchers in speech recognition, machine translation, speaker recognition, and other speech-related fields. Therefore, the corpus is totally free for academic use.\nThe corpus is a subset of a much bigger data ( 10566.9 hours Chinese Mandarin Speech Corpus ) set which was recorded in the same environment. Please feel free to contact us via business@URL for more details.",
"### Supported Tasks and Leaderboards",
"### Languages\n\nzh-CN",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- file: A path to the downloaded audio file in .wav format.\n- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n- text: the transcription of the audio file.\n- id: unique id of the data sample.\n- speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\n\n\n\n\nPlease cite the corpus as \"Magic Data Technology Co., Ltd., \"URL 05/2019\"."
] | [
"TAGS\n#task_categories-automatic-speech-recognition #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Chinese #license-cc-by-nc-nd-4.0 #region-us \n",
"# Dataset Card for MMCRSC",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: MAGICDATA Mandarin Chinese Read Speech Corpus\n- Repository:\n- Paper: \n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary\n\nMAGICDATA Mandarin Chinese Read Speech Corpus was developed by MAGIC DATA Technology Co., Ltd. and freely published for non-commercial use.\nThe contents and the corresponding descriptions of the corpus include:\n\nThe corpus contains 755 hours of speech data, which is mostly mobile recorded data.\n1080 speakers from different accent areas in China are invited to participate in the recording.\nThe sentence transcription accuracy is higher than 98%.\nRecordings are conducted in a quiet indoor environment.\nThe database is divided into training set, validation set, and testing set in a ratio of 51: 1: 2.\nDetail information such as speech data coding and speaker information is preserved in the metadata file.\nThe domain of recording texts is diversified, including interactive Q&A, music search, SNS messages, home command and control, etc.\nSegmented transcripts are also provided.\nThe corpus aims to support researchers in speech recognition, machine translation, speaker recognition, and other speech-related fields. Therefore, the corpus is totally free for academic use.\nThe corpus is a subset of a much bigger data ( 10566.9 hours Chinese Mandarin Speech Corpus ) set which was recorded in the same environment. Please feel free to contact us via business@URL for more details.",
"### Supported Tasks and Leaderboards",
"### Languages\n\nzh-CN",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- file: A path to the downloaded audio file in .wav format.\n- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n- text: the transcription of the audio file.\n- id: unique id of the data sample.\n- speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\n\n\n\n\nPlease cite the corpus as \"Magic Data Technology Co., Ltd., \"URL 05/2019\"."
] |
4f651d4d829b90f12d836eed30b08b7619b13b2f | CXR-PRO contains the following files:
```
.
├── cxr.h5
├── mimic_train_impressions.csv
└── mimic_test_impressions.csv
```
The contents of each file are outlined below:
`cxr.h5`: The subset of MIMIC-CXR chest radiographs used for MIMIC-PRO, saved in Hierarchical Data Format (HDF).
`mimic_train_impressions.csv`: A compilation of the impressions section of each radiology report in the MIMIC-PRO dataset, with references to priors removed. Additional fields include `dicom_id`, `study_id`, and `subject_id` (which refer users to the chest radiograph associated with a given impressions section).
`mimic_test_impressions.csv`: The expert-edited test set, as described in the Methods section of MIMIC-PRO's documentation on PhysioNet.
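As a rough sketch of how the impressions CSVs can be keyed by `dicom_id` to look up the report text for a given radiograph, the snippet below parses a couple of made-up sample rows; the `report` column name and all values are assumptions for illustration, not the dataset's actual contents:

```python
import csv, io

# Hypothetical rows mirroring the documented dicom_id / study_id /
# subject_id fields; the "report" column name and values are invented.
sample = io.StringIO(
    "dicom_id,study_id,subject_id,report\n"
    "d1,s1,p1,No acute cardiopulmonary process.\n"
    "d2,s2,p1,Clear lungs.\n"
)
by_dicom = {row["dicom_id"]: row for row in csv.DictReader(sample)}
print(by_dicom["d2"]["report"])  # Clear lungs.
```

The same pattern would apply to the real `mimic_train_impressions.csv` and `mimic_test_impressions.csv` files, with `dicom_id` linking each impressions row to its radiograph in `cxr.h5`.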
| rajpurkarlab/CXR-PRO | [
"region:us"
] | 2022-11-14T02:41:31+00:00 | {} | 2022-11-14T03:11:17+00:00 | [] | [] | TAGS
#region-us
| CXR-PRO contains the following files:
The contents of each file are outlined below:
'cxr.h5': The subset of MIMIC-CXR chest radiographs used for MIMIC-PRO, saved in Hierarchical Data Format (HDF).
'mimic_train_impressions.csv': A compilation of the impressions section of each radiology report in the MIMIC-PRO dataset, with references to priors removed. Additional fields include 'dicom_id', 'study_id', and 'subject_id' (which refer users to the chest radiograph associated with a given impressions section).
'mimic_test_impressions.csv': The expert-edited test set, as described in the Methods section of MIMIC-PRO's documentation on PhysioNet.
| [] | [
"TAGS\n#region-us \n"
] |
f48d8a3b7207ad00de83c334476d5132bc3fc20d |
Dataset Summary
---
Collection of Romance Novels featuring `title`, `description`, and `genres`. Created with the intention of building a "Romance Novel Generator."
Data Fields
---
- `id` : unique integer identifying the book in the dataset
- `pub_month` : string indicating the month the book was published in the form: `YEAR_MONTH`
- `title` : title of the book
- `author` : comma-separated (`last-name, first-name`) name of the author of the book
- `isbn13` : 13-digit ISBN of the book (note that not all books will have an ISBN)
- `description` : text description of the book. May contain quoted lines, a brief teaser of the plot, etc.
- `genres` : dictionary of all genres, with 0 indicating the book is **NOT** tagged with that genre and 1 indicating that it is
- additional fields are all the individual genres exploded with their respective 1 & 0 values
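A minimal sketch of the relationship between the `genres` dictionary and the exploded per-genre 0/1 columns; the genre names and values below are invented for illustration:

```python
# One record's genres dict, as described above (illustrative values).
book = {
    "id": 1,
    "title": "Example Title",
    "genres": {"historical": 1, "paranormal": 0, "contemporary": 1},
}

# Explode: one 0/1 column per genre, mirroring the dataset's extra fields.
exploded = {f"genre_{g}": v for g, v in book["genres"].items()}

# Recover the list of genres the book is actually tagged with.
tagged = sorted(g for g, v in book["genres"].items() if v == 1)
print(tagged)  # ['contemporary', 'historical']
```

Either representation can be derived from the other, so downstream code can filter on the exploded columns or on the dictionary directly.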
Languages
--
- en | diltdicker/romance_books_32K | [
"license:openrail",
"region:us"
] | 2022-11-14T02:58:32+00:00 | {"license": "openrail"} | 2022-11-15T07:37:05+00:00 | [] | [] | TAGS
#license-openrail #region-us
|
Dataset Summary
---
Collection of Romance Novels featuring 'title', 'description', and 'genres'. Created with the intention of building a "Romance Novel Generator."
Data Fields
---
- 'id' : unique integer identifying the book in the dataset
- 'pub_month' : string indicating the month the book was published in the form: 'YEAR_MONTH'
- 'title' : title of the book
- 'author' : comma-separated ('last-name, first-name') name of the author of the book
- 'isbn13' : 13-digit ISBN of the book (note that not all books will have an ISBN)
- 'description' : text description of the book. May contain quoted lines, a brief teaser of the plot, etc.
- 'genres' : dictionary of all genres, with 0 indicating the book is NOT tagged with that genre and 1 indicating that it is
- additional fields are all the individual genres exploded with their respective 1 & 0 values
Languages
--
- en | [] | [
"TAGS\n#license-openrail #region-us \n"
] |
fe9769ed6f11f9bc4c77f831ffa4e0a83bdd58f3 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-125m_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_v5-mathemak-0d489a-2053267106 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T08:59:40+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_v5"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-125m_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_v5", "dataset_config": "mathemakitten--winobias_antistereotype_test_v5", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T09:02:47+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-125m_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-125m_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-125m_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
d0863108277569f137900db4ab033df5702779eb | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-350m_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_v5-mathemak-0d489a-2053267104 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T08:59:40+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_v5"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-350m_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_v5", "dataset_config": "mathemakitten--winobias_antistereotype_test_v5", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T09:05:03+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-350m_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-350m_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-350m_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
9fb3124dfb11c21551b209dc062d65c24aa83444 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-30b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_v5-mathemak-2bec9f-2053467109 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T08:59:40+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_v5"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-30b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_v5", "dataset_config": "mathemakitten--winobias_antistereotype_test_v5", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T12:37:08+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-30b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-30b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-30b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
bd4e5970e5b323d4d9a72eccfd6c23876597d671 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-6.7b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_v5-mathemak-0d489a-2053267103 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T08:59:40+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_v5"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-6.7b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_v5", "dataset_config": "mathemakitten--winobias_antistereotype_test_v5", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T09:49:55+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-6.7b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-6.7b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-6.7b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
7df8023075a155c0ca570cfc73d2d941ea7b206e | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-66b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_v5-mathemak-2bec9f-2053467108 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T08:59:40+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_v5"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-66b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_v5", "dataset_config": "mathemakitten--winobias_antistereotype_test_v5", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T17:02:36+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-66b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-66b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-66b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
36a81b77e33fd8c11a8cc6886e05eca940ef319f | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-66b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_v5-mathemak-0d489a-2053267100 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T08:59:40+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_v5"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-66b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_v5", "dataset_config": "mathemakitten--winobias_antistereotype_test_v5", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T17:18:52+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-66b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-66b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-66b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
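Every record in this dump carries its evaluation configuration as a JSON object under an `eval_info` key (task, model, dataset name/config/split, and a column mapping). A minimal sketch of pulling those fields out of one record's metadata — the JSON below is copied from the record above, and the summary format is illustrative, not part of the dump:

```python
import json

# One record's metadata field (abridged from the dump above); the
# "eval_info" object holds the evaluation job's configuration.
record_metadata = """
{"type": "predictions", "tags": ["autotrain", "evaluation"],
 "datasets": ["mathemakitten/winobias_antistereotype_test_v5"],
 "eval_info": {"task": "text_zero_shot_classification",
               "model": "inverse-scaling/opt-66b_eval",
               "dataset_name": "mathemakitten/winobias_antistereotype_test_v5",
               "dataset_config": "mathemakitten--winobias_antistereotype_test_v5",
               "dataset_split": "test",
               "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}}
"""

# Parse the metadata and pick out the fields that identify the job.
info = json.loads(record_metadata)["eval_info"]
summary = (
    f"{info['task']}: {info['model']} "
    f"on {info['dataset_name']} ({info['dataset_split']})"
)
print(summary)
```

The same structure appears in every record here, so the snippet generalizes to the whole dump by looping over the metadata column.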
a46fa00bec2d4c1064b9a49339ced0c6186e2d34 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-2.7b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_v5-mathemak-2bec9f-2053467113 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T08:59:40+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_v5"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-2.7b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_v5", "dataset_config": "mathemakitten--winobias_antistereotype_test_v5", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T09:21:45+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-2.7b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-2.7b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-2.7b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
83df56962dbbddd26c685468382bedeaf1c1817b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-350m_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_v5-mathemak-2bec9f-2053467112 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T08:59:40+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_v5"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-350m_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_v5", "dataset_config": "mathemakitten--winobias_antistereotype_test_v5", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T09:04:58+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-350m_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-350m_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-350m_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
97bec3a98ac858235afb67629c0c7f443ea8bf93 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-13b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_v5-mathemak-0d489a-2053267102 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T08:59:40+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_v5"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-13b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_v5", "dataset_config": "mathemakitten--winobias_antistereotype_test_v5", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T10:24:50+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-13b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-13b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-13b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
c02345056ef042291d7f6fefd21c31c02086995d | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-6.7b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_v5-mathemak-2bec9f-2053467111 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T08:59:41+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_v5"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-6.7b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_v5", "dataset_config": "mathemakitten--winobias_antistereotype_test_v5", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T09:49:39+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-6.7b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-6.7b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-6.7b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
a7276dac777a4c4f4d390e05923f7f821c8d04d2 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-2.7b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_v5-mathemak-0d489a-2053267105 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T08:59:41+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_v5"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-2.7b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_v5", "dataset_config": "mathemakitten--winobias_antistereotype_test_v5", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T09:21:13+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-2.7b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-2.7b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-2.7b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
9f94c13a99e14d2638bc7ca61d87cf67bca87d79 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-1.3b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_v5-mathemak-0d489a-2053267107 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T08:59:41+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_v5"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-1.3b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_v5", "dataset_config": "mathemakitten--winobias_antistereotype_test_v5", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T09:12:11+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-1.3b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-1.3b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-1.3b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
14860646c6f720e72cf55464d189a7555e7e9cec | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-30b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_v5-mathemak-0d489a-2053267101 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T08:59:42+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_v5"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-30b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_v5", "dataset_config": "mathemakitten--winobias_antistereotype_test_v5", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T12:30:55+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-30b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-30b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-30b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
8317593dfeb46e445a5ed266069037156978bfa3 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-66b
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@futin](https://huggingface.co/futin) for evaluating this model. | autoevaluate/autoeval-eval-futin__guess-vi-4444ed-2051267099 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T08:59:43+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["futin/guess"], "eval_info": {"task": "text_zero_shot_classification", "model": "facebook/opt-66b", "metrics": [], "dataset_name": "futin/guess", "dataset_config": "vi", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-16T00:32:43+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: facebook/opt-66b
* Dataset: futin/guess
* Config: vi
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @futin for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-66b\n* Dataset: futin/guess\n* Config: vi\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: facebook/opt-66b\n* Dataset: futin/guess\n* Config: vi\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @futin for evaluating this model."
] |
70a0c0951526431bcb8b47e133e4af5025792d7a | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-13b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_v5-mathemak-2bec9f-2053467110 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T08:59:47+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_v5"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-13b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_v5", "dataset_config": "mathemakitten--winobias_antistereotype_test_v5", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T10:24:09+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-13b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-13b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-13b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
ec4045eaba819c82558871eb939e1c826d3f8d7b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Natural Language Inference
* Model: MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli
* Dataset: anli
* Config: plain_text
* Split: dev_r1
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@ctkang](https://huggingface.co/ctkang) for evaluating this model. | autoevaluate/autoeval-eval-anli-plain_text-f2dca1-2066067125 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T09:06:42+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["anli"], "eval_info": {"task": "natural_language_inference", "model": "MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli", "metrics": [], "dataset_name": "anli", "dataset_config": "plain_text", "dataset_split": "dev_r1", "col_mapping": {"text1": "premise", "text2": "hypothesis", "target": "label"}}} | 2022-11-14T09:07:21+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Natural Language Inference
* Model: MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli
* Dataset: anli
* Config: plain_text
* Split: dev_r1
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @ctkang for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli\n* Dataset: anli\n* Config: plain_text\n* Split: dev_r1\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @ctkang for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Natural Language Inference\n* Model: MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli\n* Dataset: anli\n* Config: plain_text\n* Split: dev_r1\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @ctkang for evaluating this model."
] |
c6e40c10c1ca965f3d0c0dd76d1be9acddf6ad3b | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Translation
* Model: facebook/wmt19-en-de
* Dataset: wmt19
* Config: de-en
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@WillHeld](https://huggingface.co/WillHeld) for evaluating this model. | autoevaluate/autoeval-eval-wmt19-de-en-9eb893-2069467127 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T09:06:47+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["wmt19"], "eval_info": {"task": "translation", "model": "facebook/wmt19-en-de", "metrics": [], "dataset_name": "wmt19", "dataset_config": "de-en", "dataset_split": "validation", "col_mapping": {"source": "translation.en", "target": "translation.de"}}} | 2022-11-14T09:09:53+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Translation
* Model: facebook/wmt19-en-de
* Dataset: wmt19
* Config: de-en
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @WillHeld for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Translation\n* Model: facebook/wmt19-en-de\n* Dataset: wmt19\n* Config: de-en\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @WillHeld for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Translation\n* Model: facebook/wmt19-en-de\n* Dataset: wmt19\n* Config: de-en\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @WillHeld for evaluating this model."
] |
0c27b993cc627c9d2f2d6162fdee1785daae38f4 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-125m_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_v5-mathemak-2bec9f-2053467114 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T09:11:19+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_v5"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-125m_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_v5", "dataset_config": "mathemakitten--winobias_antistereotype_test_v5", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T09:14:27+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-125m_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-125m_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-125m_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
8a64f02528a560bc0739c2f6955d6b5ccadd4111 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@lewtun](https://huggingface.co/lewtun) for evaluating this model. | autoevaluate/autoeval-staging-eval-project-9aae5b6e-ef52-4647-8803-adc504c910ae-1210 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T09:11:59+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["glue"], "eval_info": {"task": "binary_classification", "model": "autoevaluate/binary-classification", "metrics": ["matthews_correlation"], "dataset_name": "glue", "dataset_config": "sst2", "dataset_split": "validation", "col_mapping": {"text": "sentence", "target": "label"}}} | 2022-11-14T09:12:41+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Binary Text Classification
* Model: autoevaluate/binary-classification
* Dataset: glue
* Config: sst2
* Split: validation
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @lewtun for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: autoevaluate/binary-classification\n* Dataset: glue\n* Config: sst2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Binary Text Classification\n* Model: autoevaluate/binary-classification\n* Dataset: glue\n* Config: sst2\n* Split: validation\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @lewtun for evaluating this model."
] |
73ccdba1ed1739991be73cb86e7477b4037e85eb | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-1.3b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_v5-mathemak-2bec9f-2053467115 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T09:12:42+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_v5"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-1.3b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_v5", "dataset_config": "mathemakitten--winobias_antistereotype_test_v5", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T09:25:18+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-1.3b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-1.3b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-1.3b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
2581ffc58670a1b678a4a4b4a59ec2247b0d2411 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-30b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_v5-mathemak-b6a817-2053667117 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T09:20:37+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_v5"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-30b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_v5", "dataset_config": "mathemakitten--winobias_antistereotype_test_v5", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T12:55:26+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-30b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-30b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-30b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
b7d5e885c343176c275fbd824e3f31c110a98949 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-13b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_v5-mathemak-b6a817-2053667118 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T09:21:50+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_v5"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-13b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_v5", "dataset_config": "mathemakitten--winobias_antistereotype_test_v5", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T10:48:28+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-13b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-13b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-13b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
16def52d7b0ee278a717445bc14195d966ff5eeb | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Summarization
* Model: it5/mt5-base-news-summarization
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mtharrison](https://huggingface.co/mtharrison) for evaluating this model. | autoevaluate/autoeval-eval-cnn_dailymail-3.0.0-92e227-2073967129 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T09:24:50+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["cnn_dailymail"], "eval_info": {"task": "summarization", "model": "it5/mt5-base-news-summarization", "metrics": [], "dataset_name": "cnn_dailymail", "dataset_config": "3.0.0", "dataset_split": "test", "col_mapping": {"text": "article", "target": "highlights"}}} | 2022-11-14T09:38:24+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Summarization
* Model: it5/mt5-base-news-summarization
* Dataset: cnn_dailymail
* Config: 3.0.0
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mtharrison for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: it5/mt5-base-news-summarization\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mtharrison for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Summarization\n* Model: it5/mt5-base-news-summarization\n* Dataset: cnn_dailymail\n* Config: 3.0.0\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mtharrison for evaluating this model."
] |
880c276e4e56abf43cfbd2249719dc1f6b369ce7 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-350m_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_v5-mathemak-b6a817-2053667120 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T09:28:40+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_v5"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-350m_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_v5", "dataset_config": "mathemakitten--winobias_antistereotype_test_v5", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T09:33:51+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-350m_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-350m_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-350m_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
317fa524ae5f042b78697f782f2ae18b9a1f4274 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-6.7b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_v5-mathemak-b6a817-2053667119 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T09:28:43+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_v5"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-6.7b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_v5", "dataset_config": "mathemakitten--winobias_antistereotype_test_v5", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T10:17:59+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-6.7b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-6.7b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-6.7b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
9b8d639734a86e5f7a1b4f29230db77f29ca8328 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-2.7b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@mathemakitten](https://huggingface.co/mathemakitten) for evaluating this model. | autoevaluate/autoeval-eval-mathemakitten__winobias_antistereotype_test_v5-mathemak-b6a817-2053667121 | [
"autotrain",
"evaluation",
"region:us"
] | 2022-11-14T09:32:21+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["mathemakitten/winobias_antistereotype_test_v5"], "eval_info": {"task": "text_zero_shot_classification", "model": "inverse-scaling/opt-2.7b_eval", "metrics": [], "dataset_name": "mathemakitten/winobias_antistereotype_test_v5", "dataset_config": "mathemakitten--winobias_antistereotype_test_v5", "dataset_split": "test", "col_mapping": {"text": "text", "classes": "classes", "target": "target"}}} | 2022-11-14T09:54:08+00:00 | [] | [] | TAGS
#autotrain #evaluation #region-us
| # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by AutoTrain for the following task and dataset:
* Task: Zero-Shot Text Classification
* Model: inverse-scaling/opt-2.7b_eval
* Dataset: mathemakitten/winobias_antistereotype_test_v5
* Config: mathemakitten--winobias_antistereotype_test_v5
* Split: test
To run new evaluation jobs, visit Hugging Face's automatic model evaluator.
## Contributions
Thanks to @mathemakitten for evaluating this model. | [
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-2.7b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] | [
"TAGS\n#autotrain #evaluation #region-us \n",
"# Dataset Card for AutoTrain Evaluator\n\nThis repository contains model predictions generated by AutoTrain for the following task and dataset:\n\n* Task: Zero-Shot Text Classification\n* Model: inverse-scaling/opt-2.7b_eval\n* Dataset: mathemakitten/winobias_antistereotype_test_v5\n* Config: mathemakitten--winobias_antistereotype_test_v5\n* Split: test\n\nTo run new evaluation jobs, visit Hugging Face's automatic model evaluator.",
"## Contributions\n\nThanks to @mathemakitten for evaluating this model."
] |
4cf21508c3fdc2d141b04dcd729fd72e1b307e6e | # Dataset Card for "MiniScans"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Nadav/MiniScans | [
"region:us"
] | 2022-11-14T09:34:46+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "evaluation", "1": "train"}}}}], "splits": [{"name": "test", "num_bytes": 1655444336.229, "num_examples": 15159}, {"name": "train", "num_bytes": 34770710847.12, "num_examples": 300780}], "download_size": 38233031644, "dataset_size": 36426155183.349}} | 2022-11-15T14:15:58+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "MiniScans"
More Information needed | [
"# Dataset Card for \"MiniScans\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"MiniScans\"\n\nMore Information needed"
] |