sha (stringlengths 40-40) | text (stringlengths 1-13.4M) | id (stringlengths 2-117) | tags (sequencelengths 1-7.91k) | created_at (stringlengths 25-25) | metadata (stringlengths 2-875k) | last_modified (stringlengths 25-25) | arxiv (sequencelengths 0-25) | languages (sequencelengths 0-7.91k) | tags_str (stringlengths 17-159k) | text_str (stringlengths 1-447k) | text_lists (sequencelengths 0-352) | processed_texts (sequencelengths 1-353) |
---|---|---|---|---|---|---|---|---|---|---|---|---|
6b1bea67758e47cf0ffedd25a97d0004941decca | # Dataset Card for "SGD_Events"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | vidhikatkoria/SGD_Events | [
"region:us"
] | 2022-11-11T21:16:20+00:00 | {"dataset_info": {"features": [{"name": "domain", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "act", "dtype": "int64"}, {"name": "speaker", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 4745618.287889816, "num_examples": 17860}, {"name": "test", "num_bytes": 248, "num_examples": 1}], "download_size": 1966143, "dataset_size": 4745866.287889816}} | 2023-03-21T20:52:11+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "SGD_Events"
More Information needed | [
"# Dataset Card for \"SGD_Events\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"SGD_Events\"\n\nMore Information needed"
] |
6e99d903784045fa38d781db7d3151ecc5cef621 | # Dataset Card for "SGD_Music"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | vidhikatkoria/SGD_Music | [
"region:us"
] | 2022-11-11T21:16:34+00:00 | {"dataset_info": {"features": [{"name": "domain", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "act", "dtype": "int64"}, {"name": "speaker", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1755419.6172071476, "num_examples": 7554}, {"name": "test", "num_bytes": 193, "num_examples": 1}], "download_size": 694555, "dataset_size": 1755612.6172071476}} | 2023-03-21T20:52:36+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "SGD_Music"
More Information needed | [
"# Dataset Card for \"SGD_Music\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"SGD_Music\"\n\nMore Information needed"
] |
ca113ae8c6fee3244146c4018aff3a5f473e6a3f | # Dataset Card for "SGD_Movies"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | vidhikatkoria/SGD_Movies | [
"region:us"
] | 2022-11-11T21:16:47+00:00 | {"dataset_info": {"features": [{"name": "domain", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "act", "dtype": "int64"}, {"name": "speaker", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1808099.5360110803, "num_examples": 7219}, {"name": "test", "num_bytes": 297, "num_examples": 1}], "download_size": 729887, "dataset_size": 1808396.5360110803}} | 2023-03-21T20:53:02+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "SGD_Movies"
More Information needed | [
"# Dataset Card for \"SGD_Movies\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"SGD_Movies\"\n\nMore Information needed"
] |
971d8e55d9bf7b49bc26bf2c1d63c55d5fa391ef | # Dataset Card for "SGD_Flights"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | vidhikatkoria/SGD_Flights | [
"region:us"
] | 2022-11-11T21:17:00+00:00 | {"dataset_info": {"features": [{"name": "domain", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "act", "dtype": "int64"}, {"name": "speaker", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 6377556.63733501, "num_examples": 20682}, {"name": "test", "num_bytes": 282, "num_examples": 1}], "download_size": 2501341, "dataset_size": 6377838.63733501}} | 2023-03-21T20:53:30+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "SGD_Flights"
More Information needed | [
"# Dataset Card for \"SGD_Flights\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"SGD_Flights\"\n\nMore Information needed"
] |
c91b9effc8db0cbd56283ff7a4d3b6fc82c2ffd0 | # Dataset Card for "SGD_RideSharing"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | vidhikatkoria/SGD_RideSharing | [
"region:us"
] | 2022-11-11T21:17:14+00:00 | {"dataset_info": {"features": [{"name": "domain", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "act", "dtype": "int64"}, {"name": "speaker", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 658561.1466613673, "num_examples": 2515}, {"name": "test", "num_bytes": 188, "num_examples": 1}], "download_size": 242358, "dataset_size": 658749.1466613673}} | 2023-03-21T20:53:54+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "SGD_RideSharing"
More Information needed | [
"# Dataset Card for \"SGD_RideSharing\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"SGD_RideSharing\"\n\nMore Information needed"
] |
7e433272415d1cb8830b616792fe04adf63c1c40 | # Dataset Card for "SGD_RentalCars"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | vidhikatkoria/SGD_RentalCars | [
"region:us"
] | 2022-11-11T21:17:26+00:00 | {"dataset_info": {"features": [{"name": "domain", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "act", "dtype": "int64"}, {"name": "speaker", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1685534.5292607802, "num_examples": 5843}, {"name": "test", "num_bytes": 239, "num_examples": 1}], "download_size": 637179, "dataset_size": 1685773.5292607802}} | 2023-03-21T20:54:19+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "SGD_RentalCars"
More Information needed | [
"# Dataset Card for \"SGD_RentalCars\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"SGD_RentalCars\"\n\nMore Information needed"
] |
8cc33c27527cded7cc876ed06bc79e3d05196393 | # Dataset Card for "SGD_Buses"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | vidhikatkoria/SGD_Buses | [
"region:us"
] | 2022-11-11T21:17:39+00:00 | {"dataset_info": {"features": [{"name": "domain", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "act", "dtype": "int64"}, {"name": "speaker", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2009266.9424069906, "num_examples": 7552}, {"name": "test", "num_bytes": 356, "num_examples": 1}], "download_size": 769749, "dataset_size": 2009622.9424069906}} | 2023-03-21T20:54:45+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "SGD_Buses"
More Information needed | [
"# Dataset Card for \"SGD_Buses\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"SGD_Buses\"\n\nMore Information needed"
] |
d305b4eb353ffcd37a874e67488483fff50a4bd4 | # Dataset Card for "SGD_Hotels"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | vidhikatkoria/SGD_Hotels | [
"region:us"
] | 2022-11-11T21:17:52+00:00 | {"dataset_info": {"features": [{"name": "domain", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "act", "dtype": "int64"}, {"name": "speaker", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 3552843.2265793467, "num_examples": 12520}, {"name": "test", "num_bytes": 439, "num_examples": 1}], "download_size": 1494564, "dataset_size": 3553282.2265793467}} | 2023-03-21T20:55:11+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "SGD_Hotels"
More Information needed | [
"# Dataset Card for \"SGD_Hotels\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"SGD_Hotels\"\n\nMore Information needed"
] |
caa9c20d19b957e45c5c37b2bb9ce26ff9c9eda0 | # Dataset Card for "SGD_Services"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | vidhikatkoria/SGD_Services | [
"region:us"
] | 2022-11-11T21:18:05+00:00 | {"dataset_info": {"features": [{"name": "domain", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "act", "dtype": "int64"}, {"name": "speaker", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 3478972.4778884, "num_examples": 12956}, {"name": "test", "num_bytes": 88, "num_examples": 1}], "download_size": 1443168, "dataset_size": 3479060.4778884}} | 2023-03-21T20:55:38+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "SGD_Services"
More Information needed | [
"# Dataset Card for \"SGD_Services\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"SGD_Services\"\n\nMore Information needed"
] |
d556035534bc479980d168dfdb0964c3b5419fb3 | # Dataset Card for "SGD_Homes"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | vidhikatkoria/SGD_Homes | [
"region:us"
] | 2022-11-11T21:18:19+00:00 | {"dataset_info": {"features": [{"name": "domain", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "act", "dtype": "int64"}, {"name": "speaker", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2242529.6826529265, "num_examples": 7568}, {"name": "test", "num_bytes": 309, "num_examples": 1}], "download_size": 883348, "dataset_size": 2242838.6826529265}} | 2023-03-21T20:56:03+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "SGD_Homes"
More Information needed | [
"# Dataset Card for \"SGD_Homes\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"SGD_Homes\"\n\nMore Information needed"
] |
20c80c4e0b2f14f397e1bc0ee8947d40ababd2c4 | # Dataset Card for "SGD_Banks"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | vidhikatkoria/SGD_Banks | [
"region:us"
] | 2022-11-11T21:18:32+00:00 | {"dataset_info": {"features": [{"name": "domain", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "act", "dtype": "int64"}, {"name": "speaker", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1001316.4280845262, "num_examples": 4400}, {"name": "test", "num_bytes": 262, "num_examples": 1}], "download_size": 339188, "dataset_size": 1001578.4280845262}} | 2023-03-21T20:56:29+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "SGD_Banks"
More Information needed | [
"# Dataset Card for \"SGD_Banks\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"SGD_Banks\"\n\nMore Information needed"
] |
4e0005a52c1b76e0d3f6bf9837bf9dfa48cb48d8 | # Dataset Card for "SGD_Calendar"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | vidhikatkoria/SGD_Calendar | [
"region:us"
] | 2022-11-11T21:18:44+00:00 | {"dataset_info": {"features": [{"name": "domain", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "act", "dtype": "int64"}, {"name": "speaker", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 647408.8420239475, "num_examples": 2588}, {"name": "test", "num_bytes": 352, "num_examples": 1}], "download_size": 235037, "dataset_size": 647760.8420239475}} | 2023-03-21T20:56:53+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "SGD_Calendar"
More Information needed | [
"# Dataset Card for \"SGD_Calendar\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"SGD_Calendar\"\n\nMore Information needed"
] |
b15129889c9667380958dad75185c1d22d46b262 |
This sentiment dataset was used in the paper: John Blitzer, Mark Dredze, Fernando Pereira. Biographies, Bollywood, Boom-boxes and Blenders: Domain Adaptation for Sentiment Classification. Association for Computational Linguistics (ACL), 2007.
If you use this data for your research or a publication, the author asks that you cite the above paper as the reference for the data and inform him about the reuse.
The Multi-Domain Sentiment Dataset contains product reviews taken from Amazon.com from 4 product types (domains): Kitchen, Books, DVDs, and Electronics. Each domain has several thousand reviews, but the exact number varies by domain. Reviews contain star ratings (1 to 5 stars) that can be converted into binary labels if needed.
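The card does not ship a conversion script; as a hedged sketch (the >3 / <3 thresholds and the dropping of 3-star reviews are assumptions following common practice, not something this card specifies), the binary mapping could look like this:
```python
def to_binary_label(star_rating: float):
    """Map a 1-5 star rating to 'positive', 'negative', or None (discard)."""
    if star_rating > 3.0:
        return "positive"   # 4- and 5-star reviews count as positive
    if star_rating < 3.0:
        return "negative"   # 1- and 2-star reviews count as negative
    return None             # ambiguous 3-star reviews are commonly dropped

assert to_binary_label(5.0) == "positive"
assert to_binary_label(2.0) == "negative"
assert to_binary_label(3.0) is None
```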
The directory contains 3 files called positive.review, negative.review and unlabeled.review. While the positive and negative files contain positive and negative reviews, these aren't necessarily the splits the authors used in their experiments; they drew randomly from the three files, ignoring the file names. Each file encodes the reviews in a pseudo-XML scheme. Most of the fields are self-explanatory. The reviews have a "unique ID" field that isn't very unique; if a review has two unique ID fields, ignore the one containing only a number. | katossky/multi-domain-sentiment | [
"license:unknown",
"region:us"
] | 2022-11-11T21:30:46+00:00 | {"license": "unknown"} | 2022-11-11T21:45:41+00:00 | [] | [] | TAGS
#license-unknown #region-us
|
This sentiment dataset was used in the paper: John Blitzer, Mark Dredze, Fernando Pereira. Biographies, Bollywood, Boom-boxes and Blenders: Domain Adaptation for Sentiment Classification. Association for Computational Linguistics (ACL), 2007.
If you use this data for your research or a publication, the author asks that you cite the above paper as the reference for the data and inform him about the reuse.
The Multi-Domain Sentiment Dataset contains product reviews taken from URL from 4 product types (domains): Kitchen, Books, DVDs, and Electronics. Each domain has several thousand reviews, but the exact number varies by domain. Reviews contain star ratings (1 to 5 stars) that can be converted into binary labels if needed.
The directory contains 3 files called URL, URL and URL. While the positive and negative files contain positive and negative reviews, these aren't necessarily the splits the authors used in their experiments; they drew randomly from the three files, ignoring the file names. Each file encodes the reviews in a pseudo-XML scheme. Most of the fields are self-explanatory. The reviews have a "unique ID" field that isn't very unique; if a review has two unique ID fields, ignore the one containing only a number. | [] | [
"TAGS\n#license-unknown #region-us \n"
] |
a19e2b88393fd2ce86b61f3f74387a6aa4737cf1 | # Dataset Card for "lolita-dress-ENG"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | zhangxinran/lolita-dress-ENG | [
"region:us"
] | 2022-11-12T00:24:35+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 533036535.0, "num_examples": 744}], "download_size": 530749245, "dataset_size": 533036535.0}} | 2022-11-12T00:43:03+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lolita-dress-ENG"
More Information needed | [
"# Dataset Card for \"lolita-dress-ENG\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lolita-dress-ENG\"\n\nMore Information needed"
] |
bc28c1a88a57331f0cf190a777a5234a25b976bd | # Dataset Card for "stereoset_zero"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | WillHeld/stereoset_zero | [
"region:us"
] | 2022-11-12T00:49:43+00:00 | {"dataset_info": {"features": [{"name": "target", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "classes", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 900372, "num_examples": 4229}], "download_size": 311873, "dataset_size": 900372}} | 2022-11-12T00:57:23+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "stereoset_zero"
More Information needed | [
"# Dataset Card for \"stereoset_zero\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"stereoset_zero\"\n\nMore Information needed"
] |
b440ccc9dfede07d020206455bb41c6df42c9f53 | # Dataset Card for "dalio-reward-model-hackathon-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Jellywibble/dalio-reward-model-hackathon-dataset | [
"region:us"
] | 2022-11-12T04:06:26+00:00 | {"dataset_info": {"features": [{"name": "input_text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 8765, "num_examples": 16}], "download_size": 6055, "dataset_size": 8765}} | 2022-11-13T17:25:41+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "dalio-reward-model-hackathon-dataset"
More Information needed | [
"# Dataset Card for \"dalio-reward-model-hackathon-dataset\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"dalio-reward-model-hackathon-dataset\"\n\nMore Information needed"
] |
37bbc9985d018c7ee582a01492c587165a043083 | # Dataset Card for "rick-and-morty-manual-captions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | juliaturc/rick-and-morty-manual-captions | [
"region:us"
] | 2022-11-12T04:50:29+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11036008.0, "num_examples": 151}, {"name": "valid", "num_bytes": 925318.0, "num_examples": 16}], "download_size": 11931563, "dataset_size": 11961326.0}} | 2022-11-12T04:50:47+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "rick-and-morty-manual-captions"
More Information needed | [
"# Dataset Card for \"rick-and-morty-manual-captions\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"rick-and-morty-manual-captions\"\n\nMore Information needed"
] |
32cffc58163df4f5838a6a9635d762fde83cff9e | # Dataset Card for "dalio-conversations-hackathon-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Jellywibble/dalio-conversations-hackathon-dataset | [
"region:us"
] | 2022-11-12T05:47:33+00:00 | {"dataset_info": {"features": [{"name": "input_text", "dtype": "string"}, {"name": "scores", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 5026, "num_examples": 8}], "download_size": 8422, "dataset_size": 5026}} | 2022-11-12T23:35:14+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "dalio-conversations-hackathon-dataset"
More Information needed | [
"# Dataset Card for \"dalio-conversations-hackathon-dataset\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"dalio-conversations-hackathon-dataset\"\n\nMore Information needed"
] |
7a83f4c3a031f16305afff1db7e00a545a2aac9a | The training and validation files of the conceptual captions dataset (4M). | Ziyang/CC4M | [
"region:us"
] | 2022-11-12T06:25:29+00:00 | {} | 2022-11-12T06:33:23+00:00 | [] | [] | TAGS
#region-us
| The training and validation files of the conceptual captions dataset (4M). | [] | [
"TAGS\n#region-us \n"
] |
c6e9b33aa26007ae7e6430a8e5ee4d112882b719 | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- mit
multilinguality:
- monolingual
paperswithcode_id: acronym-identification
pretty_name: Acronym Identification Dataset
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- token-classification-other-acronym-identification
train-eval-index:
- col_mapping:
    labels: tags
    tokens: tokens
  config: default
  splits:
    eval_split: test
  task: token-classification
  task_id: entity_extraction
---
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
| bgstud/libri-mini-proc-whisper | [
"region:us"
] | 2022-11-12T10:35:21+00:00 | {} | 2022-11-12T10:53:24+00:00 | [] | [] | TAGS
#region-us
| ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- mit
multilinguality:
- monolingual
paperswithcode_id: acronym-identification
pretty_name: Acronym Identification Dataset
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- token-classification-other-acronym-identification
train-eval-index:
- col_mapping:
    labels: tags
    tokens: tokens
  config: default
  splits:
    eval_split: test
  task: token-classification
  task_id: entity_extraction
---
# Dataset Card for [Dataset Name]
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
Thanks to @github-username for adding this dataset.
| [
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] | [
"TAGS\n#region-us \n",
"# Dataset Card for [Dataset Name]",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:",
"### Dataset Summary",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions\n\nThanks to @github-username for adding this dataset."
] |
687bce9dce4cba881f89090a759197860ccb3065 | # AutoTrain Dataset for project: compliance
## Dataset Description
This dataset has been automatically processed by AutoTrain for project compliance.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "Welcome back Abhishek! What can I do to help? ",
"target": 0
},
{
"text": "Hi , I am calling from ABC finance. I would like to inform you that you are eligible for a Personal Loan",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=2, names=['Negative', 'Positive'], id=None)"
}
```
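As a hedged usage sketch (not part of the AutoTrain export; it assumes the repository is accessible and that the split names match the "Dataset Splits" table below), the data can be loaded with the `datasets` library:
```python
from datasets import load_dataset

# Hypothetical usage; the split names "train" and "valid" follow the table below.
ds = load_dataset("Akshata/autotrain-data-compliance", split="train")

print(ds)      # features: text (string), target (class label)
print(ds[0])   # e.g. {"text": "Welcome back Abhishek! ...", "target": 0}
```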
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 31 |
| valid | 9 |
| Akshata/autotrain-data-compliance | [
"task_categories:text-classification",
"language:en",
"region:us"
] | 2022-11-12T11:45:20+00:00 | {"language": ["en"], "task_categories": ["text-classification"]} | 2022-11-14T09:06:58+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #language-English #region-us
| AutoTrain Dataset for project: compliance
=========================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project compliance.
### Languages
The BCP-47 code for the dataset's language is en.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#task_categories-text-classification #language-English #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
b6f786ecd95e0ba3e9c63a6a0704a47faa125a95 | # AutoTrain Dataset for project: demo_compliance
## Dataset Description
This dataset has been automatically processed by AutoTrain for project demo_compliance.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "Welcome back Abhishek! What can I do to help? ",
"target": 0
},
{
"text": "Hi , I am calling from ABC finance. I would like to inform you that you are eligible for a Personal Loan",
"target": 0
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(num_classes=2, names=['Negative', 'Positive'], id=None)"
}
```
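For illustration only, the label mapping declared above can be reproduced without downloading anything; this is a sketch based solely on the `ClassLabel` definition shown in this card:
```python
from datasets import ClassLabel

# Recreate the "target" feature exactly as declared above.
target = ClassLabel(num_classes=2, names=["Negative", "Positive"])

assert target.str2int("Positive") == 1   # label name -> integer id
assert target.int2str(0) == "Negative"   # integer id -> label name
```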
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 31 |
| valid | 9 |
| Akshata/autotrain-data-demo_compliance | [
"task_categories:text-classification",
"language:en",
"region:us"
] | 2022-11-12T12:50:17+00:00 | {"language": ["en"], "task_categories": ["text-classification"]} | 2022-11-14T09:08:09+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #language-English #region-us
| AutoTrain Dataset for project: demo\_compliance
===============================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project demo\_compliance.
### Languages
The BCP-47 code for the dataset's language is en.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| [
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] | [
"TAGS\n#task_categories-text-classification #language-English #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
546dd4d00c07e90b58bc5e60139276685ae2a381 |
# Dataset Card for Leipzig Corpora Swiss German
## Dataset Description
- **Homepage:** https://wortschatz.uni-leipzig.de/en/download/Swiss%20German
- **Repository:** https://huggingface.co/datasets/statworx/leipzip-swiss
### Dataset Summary
Swiss German Wikipedia corpus based on material from 2021.
The corpus gsw_wikipedia_2021 is a Swiss German Wikipedia corpus based on material from 2021. It contains 232,933 sentences and 3,824,547 tokens.
### Languages
Swiss-German
## Dataset Structure
### Data Instances
Single sentences.
### Data Fields
`sentence`: Text as string.
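A minimal, hedged loading sketch (the split name "train" is an assumption, since the splits are not documented below):
```python
from datasets import load_dataset

# Assumes the repository is accessible and exposes a "train" split.
corpus = load_dataset("statworx/leipzip-swiss", split="train")

print(corpus[0]["sentence"])   # a single Swiss German sentence
print(len(corpus))             # the card reports 232,933 sentences in total
```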
### Data Splits
[More Information Needed]
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
https://corpora.uni-leipzig.de/en?corpusId=gsw_wikipedia_2021
## Additional Information
### Licensing Information
Creative Commons license CC BY-NC
### Citation Information
Leipzig Corpora Collection: Swiss German Wikipedia corpus based on material from 2021. Leipzig Corpora Collection. Dataset. https://corpora.uni-leipzig.de?corpusId=gsw_wikipedia_2021
| statworx/leipzip-swiss | [
"task_categories:text-generation",
"task_ids:language-modeling",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:ch",
"license:cc",
"wikipedia",
"region:us"
] | 2022-11-12T15:02:01+00:00 | {"annotations_creators": [], "language_creators": ["found"], "language": ["ch"], "license": ["cc"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": [], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "Leipzig Corpora Swiss German", "tags": ["wikipedia"]} | 2022-11-21T16:19:02+00:00 | [] | [
"ch"
] | TAGS
#task_categories-text-generation #task_ids-language-modeling #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-Chamorro #license-cc #wikipedia #region-us
|
# Dataset Card for Leipzig Corpora Swiss German
## Dataset Description
- Homepage: URL
- Repository: URL
### Dataset Summary
Swiss German Wikipedia corpus based on material from 2021.
The corpus gsw_wikipedia_2021 is a Swiss German Wikipedia corpus based on material from 2021. It contains 232,933 sentences and 3,824,547 tokens.
### Languages
Swiss-German
## Dataset Structure
### Data Instances
Single sentences.
### Data Fields
'sentence': Text as string.
### Data Splits
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
URL
## Additional Information
### Licensing Information
Creative Commons license CC BY-NC
Leipzig Corpora Collection: Swiss German Wikipedia corpus based on material from 2021. Leipzig Corpora Collection. Dataset. URL?corpusId=gsw_wikipedia_2021
| [
"# Dataset Card for Leipzig Corpora Swiss German",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL",
"### Dataset Summary\n\nSwiss German Wikipedia corpus based on material from 2021. \nThe corpus gsw_wikipedia_2021 is a Swiss German Wikipedia corpus based on material from 2021. It contains 232,933 sentences and 3,824,547 tokens.",
"### Languages\n\nSwiss-German",
"## Dataset Structure",
"### Data Instances\n\nSingle sentences.",
"### Data Fields\n\n'sentence': Text as string.",
"### Data Splits",
"## Dataset Creation",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nURL",
"## Additional Information",
"### Licensing Information\n\nCreative-Commons-Lizenz CC BY-NC\n\n\n\nLeipzig Corpora Collection: Swiss German Wikipedia corpus based on material from 2021. Leipzig Corpora Collection. Dataset. URL?corpusId=gsw_wikipedia_2021"
] | [
"TAGS\n#task_categories-text-generation #task_ids-language-modeling #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #language-Chamorro #license-cc #wikipedia #region-us \n",
"# Dataset Card for Leipzig Corpora Swiss German",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL",
"### Dataset Summary\n\nSwiss German Wikipedia corpus based on material from 2021. \nThe corpus gsw_wikipedia_2021 is a Swiss German Wikipedia corpus based on material from 2021. It contains 232,933 sentences and 3,824,547 tokens.",
"### Languages\n\nSwiss-German",
"## Dataset Structure",
"### Data Instances\n\nSingle sentences.",
"### Data Fields\n\n'sentence': Text as string.",
"### Data Splits",
"## Dataset Creation",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nURL",
"## Additional Information",
"### Licensing Information\n\nCreative-Commons-Lizenz CC BY-NC\n\n\n\nLeipzig Corpora Collection: Swiss German Wikipedia corpus based on material from 2021. Leipzig Corpora Collection. Dataset. URL?corpusId=gsw_wikipedia_2021"
] |
b7cf0463799a5bbe006564af9222dce346da0303 | # Dataset Card for "yair_gal_small_resized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | galman33/gal_yair_8300_100x100 | [
"region:us"
] | 2022-11-12T15:26:13+00:00 | {"dataset_info": {"features": [{"name": "lat", "dtype": "float64"}, {"name": "lon", "dtype": "float64"}, {"name": "country_code", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 142004157.0, "num_examples": 8300}], "download_size": 141994031, "dataset_size": 142004157.0}} | 2022-11-19T22:41:56+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "yair_gal_small_resized"
More Information needed | [
"# Dataset Card for \"yair_gal_small_resized\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"yair_gal_small_resized\"\n\nMore Information needed"
] |
c031dc07e5bfc318508c2b968374d6ecf76928e2 | # Dataset Card for "processed_bert_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ysjay/processed_bert_dataset | [
"region:us"
] | 2022-11-12T16:02:45+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "next_sentence_label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 70985500, "num_examples": 2000}], "download_size": 18506503, "dataset_size": 70985500}} | 2022-11-12T16:02:58+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "processed_bert_dataset"
More Information needed | [
"# Dataset Card for \"processed_bert_dataset\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"processed_bert_dataset\"\n\nMore Information needed"
] |
585e8b0fa33c72f11cd8d9fb387df098891bd03e | # Dataset Card for "nlplegal"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | vegeta/nlplegal | [
"region:us"
] | 2022-11-12T16:35:04+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 15816253477, "num_examples": 218374246}, {"name": "validation", "num_bytes": 1736194279, "num_examples": 23880923}], "download_size": 8455493030, "dataset_size": 17552447756}} | 2022-11-12T17:32:30+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "nlplegal"
More Information needed | [
"# Dataset Card for \"nlplegal\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"nlplegal\"\n\nMore Information needed"
] |
d9d90314ea75bf0df5012a84f5cbe39b25c8fa1c | # Dataset Card for "gal_yair_8300_256x256"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | galman33/gal_yair_8300_256x256 | [
"region:us"
] | 2022-11-12T21:05:44+00:00 | {"dataset_info": {"features": [{"name": "lat", "dtype": "float64"}, {"name": "lon", "dtype": "float64"}, {"name": "country_code", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 805012745.0, "num_examples": 8300}], "download_size": 805035741, "dataset_size": 805012745.0}} | 2022-11-12T21:23:40+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "gal_yair_8300_256x256"
More Information needed | [
"# Dataset Card for \"gal_yair_8300_256x256\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"gal_yair_8300_256x256\"\n\nMore Information needed"
] |
e3366d7cda004d99644e589649dfd973d044c419 | # Dataset Card for "tokenedlegal"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | vegeta/tokenedlegal | [
"region:us"
] | 2022-11-12T22:58:08+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 29279261498, "num_examples": 218374246}, {"name": "validation", "num_bytes": 3195898734, "num_examples": 23880923}], "download_size": 8182611602, "dataset_size": 32475160232}} | 2022-11-12T23:42:28+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "tokenedlegal"
More Information needed | [
"# Dataset Card for \"tokenedlegal\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"tokenedlegal\"\n\nMore Information needed"
] |
eea13305d168a7e87f58feec8e4928a361b44cf4 |
# Cute Style Embedding / Textual Inversion
## Usage
To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder.
This style doesn't really have a specific theme; it just turns the expression of girls into "cute".
To use it in a prompt: ```"drawn by cute_style"```
If it is too strong, just add [] around it.
Trained until 6000 steps
Have fun :)
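Outside the webui, a comparable (hypothetical) way to use the embedding is the textual-inversion loader in the `diffusers` library; the base model, the weight file name `cute_style.pt`, and the token below are assumptions, not part of this card:
```python
import torch
from diffusers import StableDiffusionPipeline

# Sketch only: model id, weight file name, and token are assumptions.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion(
    "Nerfgun3/cute_style", weight_name="cute_style.pt", token="cute_style"
)

image = pipe("a portrait of a girl, drawn by cute_style").images[0]
image.save("cute_style_sample.png")
```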
## Example Pictures
<table>
<tr>
<td><img src=https://i.imgur.com/vDjSy5c.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/wXBNJNX.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/e3gremJ.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/jpYyj96.png width=100% height=100%/></td>
<td><img src=https://i.imgur.com/hUVuj9N.png width=100% height=100%/></td>
</tr>
</table>
## License
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) | Nerfgun3/cute_style | [
"language:en",
"license:creativeml-openrail-m",
"stable-diffusion",
"text-to-image",
"region:us"
] | 2022-11-12T23:23:55+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "tags": ["stable-diffusion", "text-to-image"], "inference": false} | 2022-11-17T13:57:35+00:00 | [] | [
"en"
] | TAGS
#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us
| Cute Style Embedding / Textual Inversion
========================================
Usage
-----
To use this embedding, download the file and drop it into the "\stable-diffusion-webui\embeddings" folder.
This style doesn't really have a specific theme; it just turns the expression of girls into "cute".
To use it in a prompt:
If it is too strong, just add [] around it.
Trained until 6000 steps
Have fun :)
Example Pictures
----------------
License
-------
This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license here
| [] | [
"TAGS\n#language-English #license-creativeml-openrail-m #stable-diffusion #text-to-image #region-us \n"
] |
cf946d9f16c590f30b86b50c7efee600295fb6c5 |
# Dataset Card for Digimon BLIP captions
This project was inspired by the [labelled Pokemon dataset](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions).
The captions were generated using the BLIP Model found in the [LAVIS Library for Language-Vision Intelligence](https://github.com/salesforce/LAVIS).
Like the Pokemon equivalent, each row in the dataset contains the `image` and `text` keys. `image` is a JPEG of varying size, and `text` is the corresponding text caption.
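As a small, hedged usage sketch (not from the original card, and assuming the repository is readable under its CC BY-NC 4.0 terms), each row can be inspected like any other image/text dataset:
```python
from datasets import load_dataset

ds = load_dataset("ClemenKok/digimon-blip-captions", split="train")

row = ds[0]
print(row["text"])                       # BLIP-generated caption
row["image"].save("digimon_sample.jpg")  # PIL image of varying size
```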
## Citation
If you use this dataset, please cite it as:
```
@misc{clemen2022digimon,
author = {Kok, Clemen},
title = {Digimon BLIP captions},
year={2022},
howpublished= {\url{https://huggingface.co/datasets/ClemenKok/digimon-lavis-captions/}}
}
``` | ClemenKok/digimon-blip-captions | [
"annotations_creators:machine-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:en",
"license:cc-by-nc-4.0",
"digimon",
"region:us"
] | 2022-11-13T00:27:54+00:00 | {"annotations_creators": ["machine-generated"], "language": ["en"], "license": ["cc-by-nc-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": [], "task_ids": [], "pretty_name": "1,071 BLIP captioned images of Digimon. ", "tags": ["digimon"]} | 2022-11-13T02:08:54+00:00 | [] | [
"en"
] | TAGS
#annotations_creators-machine-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-nc-4.0 #digimon #region-us
|
# Dataset Card for Digimon BLIP captions
This project was inspired by the labelled Pokemon dataset.
The captions were generated using the BLIP Model found in the LAVIS Library for Language-Vision Intelligence.
Like the Pokemon equivalent, each row in the dataset contains the 'image' and 'text' keys. 'image' is a JPEG of varying size, and 'text' is the corresponding text caption.
If you use this dataset, please cite it as:
| [
"# Dataset Card for Digimon BLIP captions\n\nThis project was inspired by the labelled Pokemon dataset.\n\nThe captions were generated using the BLIP Model found in the LAVIS Library for Language-Vision Intelligence. \n\nLike the Pokemon equivalent, each row in the dataset contains the 'image' and 'text' keys. 'Image' is a varying size pixel jpeg, and 'text' is the corresponding text caption.\n\nIf you use this dataset, please cite it as:"
] | [
"TAGS\n#annotations_creators-machine-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-English #license-cc-by-nc-4.0 #digimon #region-us \n",
"# Dataset Card for Digimon BLIP captions\n\nThis project was inspired by the labelled Pokemon dataset.\n\nThe captions were generated using the BLIP Model found in the LAVIS Library for Language-Vision Intelligence. \n\nLike the Pokemon equivalent, each row in the dataset contains the 'image' and 'text' keys. 'Image' is a varying size pixel jpeg, and 'text' is the corresponding text caption.\n\nIf you use this dataset, please cite it as:"
] |
f29989f0e722d8fbd874fe6fee8576e7446f13c7 | # Dataset Card for "lolita-dress-ENG256"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | zhangxinran/lolita-dress-ENG256 | [
"region:us"
] | 2022-11-13T00:55:57+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 82410459.0, "num_examples": 745}], "download_size": 81543982, "dataset_size": 82410459.0}} | 2022-11-13T00:56:11+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "lolita-dress-ENG256"
More Information needed | [
"# Dataset Card for \"lolita-dress-ENG256\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"lolita-dress-ENG256\"\n\nMore Information needed"
] |
27a66f55dd29709710c2f2eb415b192f50639526 |
# Dataset Card for VoxCeleb
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
VoxCeleb is an audio-visual dataset consisting of short clips of human speech, extracted from interview videos uploaded to YouTube.
NOTE: Although this dataset can be automatically downloaded, you must manually request credentials to access it from the creators' website.
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
Each datapoint has a path to the audio/video clip along with metadata about the speaker.
```
{
'file': '/datasets/downloads/extracted/[hash]/wav/id10271/_YimahVgI1A/00003.wav',
'file_format': 'wav',
'dataset_id': 'vox1',
'speaker_id': 'id10271',
'speaker_gender': 'm',
'speaker_name': 'Ed_Westwick',
'speaker_nationality': 'UK',
'video_id': '_YimahVgI1A',
'clip_id': '00003',
'audio': {
'path': '/datasets/downloads/extracted/[hash]/wav/id10271/_YimahVgI1A/00003.wav',
'array': array([...], dtype=float32),
'sampling_rate': 16000
}
}
```
### Data Fields
Each row includes the following fields:
- `file`: The path to the audio/video clip
- `file_format`: The file format in which the clip is stored (e.g. `wav`, `aac`, `mp4`)
- `dataset_id`: The ID of the dataset this clip is from (`vox1`, `vox2`)
- `speaker_id`: The ID of the speaker in this clip
- `speaker_gender`: The gender of the speaker (`m`/`f`)
- `speaker_name` (VoxCeleb1 only): The full name of the speaker in the clip
- `speaker_nationality` (VoxCeleb1 only): The speaker's country of origin
- `video_id`: The ID of the video from which this clip was taken
- `clip_index`: The index of the clip for this specific video
- `audio` (Audio dataset only): The audio signal data
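A hedged access sketch follows (field names are taken from the list above; actually obtaining the data still requires the manually requested credentials mentioned in the summary, and the configuration name is an assumption):
```python
from datasets import load_dataset

# Sketch only: requires VoxCeleb credentials; the "audio" config name is assumed.
vox = load_dataset("101arrowz/vox_celeb", "audio", split="train")

clip = vox[0]
print(clip["speaker_id"], clip["speaker_name"], clip["speaker_nationality"])
audio = clip["audio"]                  # dict with "array" and "sampling_rate"
print(audio["sampling_rate"], len(audio["array"]))
```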
### Data Splits
The dataset has a predefined dev set and test set. The dev set has been renamed to a "train" split.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
The dataset includes recordings of clips (mostly of celebrities and public figures) from public YouTube videos. The names of speakers in VoxCeleb1 are provided.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
The VoxCeleb authors request that anyone who uses VoxCeleb1 or VoxCeleb2 include the following three citations:
```
@Article{Nagrani19,
author = "Arsha Nagrani and Joon~Son Chung and Weidi Xie and Andrew Zisserman",
title = "Voxceleb: Large-scale speaker verification in the wild",
journal = "Computer Science and Language",
year = "2019",
publisher = "Elsevier",
}
@InProceedings{Chung18b,
author = "Chung, J.~S. and Nagrani, A. and Zisserman, A.",
title = "VoxCeleb2: Deep Speaker Recognition",
booktitle = "INTERSPEECH",
year = "2018",
}
@InProceedings{Nagrani17,
author = "Nagrani, A. and Chung, J.~S. and Zisserman, A.",
title = "VoxCeleb: a large-scale speaker identification dataset",
booktitle = "INTERSPEECH",
year = "2017",
}
```
### Contributions
Thanks to [@101arrowz](https://github.com/101arrowz) for adding this dataset.
| 101arrowz/vox_celeb | [
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"task_categories:image-classification",
"task_ids:speaker-identification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"size_categories:10K<n<100K",
"size_categories:100K<n<1M",
"license:cc-by-4.0",
"region:us"
] | 2022-11-13T01:43:46+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": [], "license": ["cc-by-4.0"], "multilinguality": ["multilingual"], "size_categories": ["1K<n<10K", "10K<n<100K", "100K<n<1M"], "source_datasets": [], "task_categories": ["automatic-speech-recognition", "audio-classification", "image-classification"], "task_ids": ["speaker-identification"], "pretty_name": "VoxCeleb", "tags": []} | 2023-08-20T02:04:07+00:00 | [] | [] | TAGS
#task_categories-automatic-speech-recognition #task_categories-audio-classification #task_categories-image-classification #task_ids-speaker-identification #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-multilingual #size_categories-1K<n<10K #size_categories-10K<n<100K #size_categories-100K<n<1M #license-cc-by-4.0 #region-us
|
# Dataset Card for VoxCeleb
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
### Dataset Summary
VoxCeleb is an audio-visual dataset consisting of short clips of human speech, extracted from interview videos uploaded to YouTube.
NOTE: Although this dataset can be automatically downloaded, you must manually request credentials to access it from the creators' website.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
Each datapoint has a path to the audio/video clip along with metadata about the speaker.
### Data Fields
Each row includes the following fields:
- 'file': The path to the audio/video clip
- 'file_format': The file format in which the clip is stored (e.g. 'wav', 'aac', 'mp4')
- 'dataset_id': The ID of the dataset this clip is from ('vox1', 'vox2')
- 'speaker_id': The ID of the speaker in this clip
- 'speaker_gender': The gender of the speaker ('m'/'f')
- 'speaker_name' (VoxCeleb1 only): The full name of the speaker in the clip
- 'speaker_nationality' (VoxCeleb1 only): The speaker's country of origin
- 'video_id': The ID of the video from which this clip was taken
- 'clip_index': The index of the clip for this specific video
- 'audio' (Audio dataset only): The audio signal data
### Data Splits
The dataset has a predefined dev set and test set. The dev set has been renamed to a "train" split.
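A minimal loading sketch, not part of the original card: the config name below ("audio") and the availability of a plain "train" split are assumptions, and the clips can only be fetched after credentials have been requested from the creators' website as noted in the summary.

```python
from datasets import load_dataset

# Assumed config name "audio"; a "video" variant may also exist. Check the loader
# script or dataset viewer if this name does not match.
vox = load_dataset("101arrowz/vox_celeb", "audio", split="train")

row = vox[0]
print(row["dataset_id"], row["speaker_id"], row["speaker_gender"])   # e.g. vox1, speaker ID, m/f
print(row["video_id"], row["clip_index"], row["file_format"])        # source video and clip position
print(row["file"])   # path to the clip; the raw signal sits under row["audio"] for the audio configuration
```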
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
The dataset includes recordings of clips (mostly of celebrities and public figures) from public YouTube videos. The names of speakers in VoxCeleb1 are provided.
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
The VoxCeleb authors request that anyone who uses VoxCeleb1 or VoxCeleb2 includes the following three citations:
### Contributions
Thanks to @101arrowz for adding this dataset.
| [
"# Dataset Card for VoxCeleb",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description",
"### Dataset Summary\n\nVoxCeleb is an audio-visual dataset consisting of short clips of human speech, extracted from interview videos uploaded to YouTube.\n\nNOTE: Although this dataset can be automatically downloaded, you must manually request credentials to access it from the creators' website.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances\n\nEach datapoint has a path to the audio/video clip along with metadata about the speaker.",
"### Data Fields\n\nEach row includes the following fields:\n- 'file': The path to the audio/video clip\n- 'file_format': The file format in which the clip is stored (e.g. 'wav', 'aac', 'mp4')\n- 'dataset_id': The ID of the dataset this clip is from ('vox1', 'vox2')\n- 'speaker_id': The ID of the speaker in this clip\n- 'speaker_gender': The gender of the speaker ('m'/'f')\n- 'speaker_name' (VoxCeleb1 only): The full name of the speaker in the clip\n- 'speaker_nationality' (VoxCeleb1 only): The speaker's country of origin\n- 'video_id': The ID of the video from which this clip was taken\n- 'clip_index': The index of the clip for this specific video\n- 'audio' (Audio dataset only): The audio signal data",
"### Data Splits\n\nThe dataset has a predefined dev set and test set. The dev set has been renamed to a \"train\" split.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\nThe dataset includes recordings of clips (mostly of celebrities and public figures) from public YouTube videos. The names of speakers in VoxCeleb1 are provided.",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\n\n\n\n\nThe VoxCeleb authors request that anyone who uses VoxCeleb1 or VoxCeleb2 includes the following three citations:",
"### Contributions\n\nThanks to @101arrowz for adding this dataset."
] | [
"TAGS\n#task_categories-automatic-speech-recognition #task_categories-audio-classification #task_categories-image-classification #task_ids-speaker-identification #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-multilingual #size_categories-1K<n<10K #size_categories-10K<n<100K #size_categories-100K<n<1M #license-cc-by-4.0 #region-us \n",
"# Dataset Card for VoxCeleb",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description",
"### Dataset Summary\n\nVoxCeleb is an audio-visual dataset consisting of short clips of human speech, extracted from interview videos uploaded to YouTube.\n\nNOTE: Although this dataset can be automatically downloaded, you must manually request credentials to access it from the creators' website.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances\n\nEach datapoint has a path to the audio/video clip along with metadata about the speaker.",
"### Data Fields\n\nEach row includes the following fields:\n- 'file': The path to the audio/video clip\n- 'file_format': The file format in which the clip is stored (e.g. 'wav', 'aac', 'mp4')\n- 'dataset_id': The ID of the dataset this clip is from ('vox1', 'vox2')\n- 'speaker_id': The ID of the speaker in this clip\n- 'speaker_gender': The gender of the speaker ('m'/'f')\n- 'speaker_name' (VoxCeleb1 only): The full name of the speaker in the clip\n- 'speaker_nationality' (VoxCeleb1 only): The speaker's country of origin\n- 'video_id': The ID of the video from which this clip was taken\n- 'clip_index': The index of the clip for this specific video\n- 'audio' (Audio dataset only): The audio signal data",
"### Data Splits\n\nThe dataset has a predefined dev set and test set. The dev set has been renamed to a \"train\" split.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\nThe dataset includes recordings of clips (mostly of celebrities and public figures) from public YouTube videos. The names of speakers in VoxCeleb1 are provided.",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\n\n\n\n\nThe VoxCeleb authors request that anyone who uses VoxCeleb1 or VoxCeleb2 includes the following three citations:",
"### Contributions\n\nThanks to @101arrowz for adding this dataset."
] |
5b3589f31d91b16f10515e3157e2579cd936b438 | # Dataset Card for "FGVC_Aircraft_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Multimodal-Fatima/FGVC_Aircraft_train | [
"region:us"
] | 2022-11-13T05:05:42+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "family", "dtype": {"class_label": {"names": {"0": "A300", "1": "A310", "2": "A320", "3": "A330", "4": "A340", "5": "A380", "6": "ATR-42", "7": "ATR-72", "8": "An-12", "9": "BAE 146", "10": "BAE-125", "11": "Beechcraft 1900", "12": "Boeing 707", "13": "Boeing 717", "14": "Boeing 727", "15": "Boeing 737", "16": "Boeing 747", "17": "Boeing 757", "18": "Boeing 767", "19": "Boeing 777", "20": "C-130", "21": "C-47", "22": "CRJ-200", "23": "CRJ-700", "24": "Cessna 172", "25": "Cessna 208", "26": "Cessna Citation", "27": "Challenger 600", "28": "DC-10", "29": "DC-3", "30": "DC-6", "31": "DC-8", "32": "DC-9", "33": "DH-82", "34": "DHC-1", "35": "DHC-6", "36": "DR-400", "37": "Dash 8", "38": "Dornier 328", "39": "EMB-120", "40": "Embraer E-Jet", "41": "Embraer ERJ 145", "42": "Embraer Legacy 600", "43": "Eurofighter Typhoon", "44": "F-16", "45": "F/A-18", "46": "Falcon 2000", "47": "Falcon 900", "48": "Fokker 100", "49": "Fokker 50", "50": "Fokker 70", "51": "Global Express", "52": "Gulfstream", "53": "Hawk T1", "54": "Il-76", "55": "King Air", "56": "L-1011", "57": "MD-11", "58": "MD-80", "59": "MD-90", "60": "Metroliner", "61": "PA-28", "62": "SR-20", "63": "Saab 2000", "64": "Saab 340", "65": "Spitfire", "66": "Tornado", "67": "Tu-134", "68": "Tu-154", "69": "Yak-42"}}}}, {"name": "manufacturer", "dtype": {"class_label": {"names": {"0": "ATR", "1": "Airbus", "2": "Antonov", "3": "Beechcraft", "4": "Boeing", "5": "Bombardier Aerospace", "6": "British Aerospace", "7": "Canadair", "8": "Cessna", "9": "Cirrus Aircraft", "10": "Dassault Aviation", "11": "Dornier", "12": "Douglas Aircraft Company", "13": "Embraer", "14": "Eurofighter", "15": "Fairchild", "16": "Fokker", "17": "Gulfstream Aerospace", "18": "Ilyushin", "19": "Lockheed Corporation", "20": "Lockheed Martin", "21": "McDonnell Douglas", "22": "Panavia", "23": "Piper", "24": "Robin", "25": "Saab", "26": "Supermarine", "27": "Tupolev", "28": "Yakovlev", "29": "de Havilland"}}}}, {"name": "label", "dtype": {"class_label": {"names": {"0": "707-320", "1": "727-200", "2": "737-200", "3": "737-300", "4": "737-400", "5": "737-500", "6": "737-600", "7": "737-700", "8": "737-800", "9": "737-900", "10": "747-100", "11": "747-200", "12": "747-300", "13": "747-400", "14": "757-200", "15": "757-300", "16": "767-200", "17": "767-300", "18": "767-400", "19": "777-200", "20": "777-300", "21": "A300B4", "22": "A310", "23": "A318", "24": "A319", "25": "A320", "26": "A321", "27": "A330-200", "28": "A330-300", "29": "A340-200", "30": "A340-300", "31": "A340-500", "32": "A340-600", "33": "A380", "34": "ATR-42", "35": "ATR-72", "36": "An-12", "37": "BAE 146-200", "38": "BAE 146-300", "39": "BAE-125", "40": "Beechcraft 1900", "41": "Boeing 717", "42": "C-130", "43": "C-47", "44": "CRJ-200", "45": "CRJ-700", "46": "CRJ-900", "47": "Cessna 172", "48": "Cessna 208", "49": "Cessna 525", "50": "Cessna 560", "51": "Challenger 600", "52": "DC-10", "53": "DC-3", "54": "DC-6", "55": "DC-8", "56": "DC-9-30", "57": "DH-82", "58": "DHC-1", "59": "DHC-6", "60": "DHC-8-100", "61": "DHC-8-300", "62": "DR-400", "63": "Dornier 328", "64": "E-170", "65": "E-190", "66": "E-195", "67": "EMB-120", "68": "ERJ 135", "69": "ERJ 145", "70": "Embraer Legacy 600", "71": "Eurofighter Typhoon", "72": "F-16A/B", "73": "F/A-18", "74": "Falcon 2000", "75": "Falcon 900", "76": "Fokker 100", "77": "Fokker 50", "78": "Fokker 70", "79": "Global Express", "80": "Gulfstream IV", "81": 
"Gulfstream V", "82": "Hawk T1", "83": "Il-76", "84": "L-1011", "85": "MD-11", "86": "MD-80", "87": "MD-87", "88": "MD-90", "89": "Metroliner", "90": "Model B200", "91": "PA-28", "92": "SR-20", "93": "Saab 2000", "94": "Saab 340", "95": "Spitfire", "96": "Tornado", "97": "Tu-134", "98": "Tu-154", "99": "Yak-42"}}}}, {"name": "id", "dtype": "int64"}, {"name": "clip_tags_ViT_L_14", "sequence": "string"}, {"name": "LLM_Description_gpt3_downstream_tasks_ViT_L_14", "sequence": "string"}, {"name": "blip_caption", "dtype": "string"}, {"name": "LLM_Description_gpt3_downstream_tasks_visual_genome_ViT_L_14", "sequence": "string"}, {"name": "Attributes_ViT_L_14_text_davinci_003_full", "sequence": "string"}, {"name": "Attributes_ViT_L_14_text_davinci_003_fgvc", "sequence": "string"}, {"name": "clip_tags_ViT_L_14_with_openai_classes", "sequence": "string"}, {"name": "clip_tags_ViT_L_14_wo_openai_classes", "sequence": "string"}, {"name": "clip_tags_ViT_L_14_simple_specific", "dtype": "string"}, {"name": "clip_tags_ViT_L_14_ensemble_specific", "dtype": "string"}, {"name": "clip_tags_ViT_B_16_simple_specific", "dtype": "string"}, {"name": "clip_tags_ViT_B_16_ensemble_specific", "dtype": "string"}, {"name": "clip_tags_ViT_B_32_simple_specific", "dtype": "string"}, {"name": "clip_tags_ViT_B_32_ensemble_specific", "dtype": "string"}, {"name": "Attributes_ViT_B_16_descriptors_text_davinci_003_full", "sequence": "string"}, {"name": "Attributes_LAION_ViT_H_14_2B_descriptors_text_davinci_003_full", "sequence": "string"}, {"name": "clip_tags_LAION_ViT_H_14_2B_simple_specific", "dtype": "string"}, {"name": "clip_tags_LAION_ViT_H_14_2B_ensemble_specific", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 931613762.0, "num_examples": 3334}], "download_size": 925638163, "dataset_size": 931613762.0}} | 2023-05-04T04:30:31+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "FGVC_Aircraft_train"
More Information needed | [
"# Dataset Card for \"FGVC_Aircraft_train\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"FGVC_Aircraft_train\"\n\nMore Information needed"
] |
cc77aedcfe59f728c64449bf35522d0201c49e7f | # Dataset Card for KLAID
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Other Inquiries](#other_inquiries)
- [Licensing Information](#licensing-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://klaid.net](https://klaid.net)
- **Leaderboard:** [https://klaid.net](https://klaid.net)
- **Point of Contact:** [[email protected]]([email protected])
### Dataset Summary
Korean Legal Artificial Intelligence Datasets (KLAID) is a dataset for the development of Korean legal artificial intelligence technology. This time we offer one task, legal judgment prediction (LJP).
### Supported Tasks and Leaderboards
Legal Judgment Prediction(LJP)
### Languages
`korean`
### How to use
```python
from datasets import load_dataset
# legal judgment prediction
dataset = load_dataset("lawcompany/KLAID", 'ljp')
```
## Dataset Structure
### Data Instances
#### ljp
An example of 'train' looks as follows.
```
{
'fact': '피고인은 2022. 11. 14. 혈중알콜농도 0.123%의 술에 취한 상태로 승용차를 운전하였다.',
'laws_service': '도로교통법 제148조의2 제3항 제2호,도로교통법 제44조 제1항',
'laws_service_id': 7
}
```
Other References
You can refer to each label's 'laws service content' [here](https://storage.googleapis.com/klaid/ljp/dataset/ljp_laws_service_content.json).
'Laws service content' is the statute([source](https://www.law.go.kr/)) corresponding to each label.
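As a rough illustration of how the statute file linked above could be combined with the LJP split; the layout of that JSON (whether it is keyed by label ID or by statute name) is an assumption, so the sketch prints its structure before any lookup:

```python
import json
import urllib.request

from datasets import load_dataset

# Load the LJP split as shown in "How to use" above.
dataset = load_dataset("lawcompany/KLAID", "ljp", split="train")

# Download the laws-service-content file referenced in this section.
url = "https://storage.googleapis.com/klaid/ljp/dataset/ljp_laws_service_content.json"
with urllib.request.urlopen(url) as response:
    laws_service_content = json.load(response)

example = dataset[0]
print(example["fact"])
print(example["laws_service"], example["laws_service_id"])

# Peek at the mapping before deciding how to key it (its exact layout is an assumption).
if isinstance(laws_service_content, dict):
    print(list(laws_service_content.items())[:2])
else:
    print(laws_service_content[:2])
```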
### Data Fields
#### ljp
+ "fact": a `string` feature
+ "laws_service": a `string` feature
+ "laws_service_id": a classification label, with 177 legal judgment values
[More Information Needed](https://klaid.net/tasks-1)
### Data Splits
#### ljp
+ train: 161,192
## Dataset Creation
### Curation Rationale
The legal domain is arguably one of the most specialized fields, requiring expert knowledge to comprehend. Natural language processing depends on many resources, and we focus here on the dataset requirements. As a gold standard is necessary for testing and training a neural model, we hope that our dataset release will help advance natural language processing in the legal domain, especially for the Korean legal system.
### Source Data
These are datasets based on Korean legal case data.
### Personal and Sensitive Information
Due to the nature of legal case data, personal and sensitive information may be included. Therefore, in order to prevent problems that may arise from personal and sensitive information, we de-identified (anonymized) the legal cases.
## Considerations for Using the Data
### Other Known Limitations
We plan to upload more data and update them as some of the court records may be revised from now on, based on the ever-evolving legal system.
## Additional Information
### Other Inquiries
[[email protected]]([email protected])
### Licensing Information
Copyright 2022-present [Law&Company Co. Ltd.](https://career.lawcompany.co.kr/)
Licensed under the CC-BY-NC-ND-4.0
### Contributions
[More Information Needed] | lawcompany/KLAID | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"multilinguality:monolingual",
"language:ko",
"license:cc-by-nc-nd-4.0",
"region:us"
] | 2022-11-13T05:21:05+00:00 | {"language": "ko", "license": "cc-by-nc-nd-4.0", "multilinguality": ["monolingual"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"], "pretty_name": "KLAID", "viewer": true} | 2022-11-17T07:09:10+00:00 | [] | [
"ko"
] | TAGS
#task_categories-text-classification #task_ids-multi-class-classification #multilinguality-monolingual #language-Korean #license-cc-by-nc-nd-4.0 #region-us
| # Dataset Card for KLAID
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Personal and Sensitive Information
- Considerations for Using the Data
- Other Known Limitations
- Additional Information
- Other Inquiries
- Licensing Information
- Contributions
## Dataset Description
- Homepage: URL
- Leaderboard: URL
- Point of Contact: klaid@URL
### Dataset Summary
Korean Legal Artificial Intelligence Datasets (KLAID) is a dataset for the development of Korean legal artificial intelligence technology. This time we offer one task, legal judgment prediction (LJP).
### Supported Tasks and Leaderboards
Legal Judgment Prediction(LJP)
### Languages
'korean'
### How to use
## Dataset Structure
### Data Instances
#### ljp
An example of 'train' looks as follows.
Other References
You can refer to each label's 'laws service content' here.
'Laws service content' is the statute(source) corresponding to each label.
### Data Fields
#### ljp
+ "fact": a 'string' feature
+ "laws_service": a 'string' feature
+ "laws_service_id": a classification label, with 177 legal judgment values
### Data Splits
#### ljp
+ train: 161,192
## Dataset Creation
### Curation Rationale
The legal domain is arguably one of the most specialized fields, requiring expert knowledge to comprehend. Natural language processing depends on many resources, and we focus here on the dataset requirements. As a gold standard is necessary for testing and training a neural model, we hope that our dataset release will help advance natural language processing in the legal domain, especially for the Korean legal system.
### Source Data
These are datasets based on Korean legal case data.
### Personal and Sensitive Information
Due to the nature of legal case data, personal and sensitive information may be included. Therefore, in order to prevent problems that may arise from personal and sensitive information, we de-identified (anonymized) the legal cases.
## Considerations for Using the Data
### Other Known Limitations
We plan to upload more data and update them as some of the court records may be revised from now on, based on the ever-evolving legal system.
## Additional Information
### Other Inquiries
klaid@URL
### Licensing Information
Copyright 2022-present Law&Company Co. Ltd.
Licensed under the CC-BY-NC-ND-4.0
### Contributions
| [
"# Dataset Card for KLAID",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Other Known Limitations\n- Additional Information\n - Other Inquiries\n - Licensing Information\n - Contributions",
"## Dataset Description\n- Homepage: URL\n- Leaderboard: URL\n- Point of Contact: klaid@URL",
"### Dataset Summary\nKorean Legal Artificial Intelligence Datasets(KLAID) is a dataset for the development of Korean legal artificial intelligence technology. This time we offer 1 task, which is legal judgment prediction(LJP).",
"### Supported Tasks and Leaderboards\nLegal Judgment Prediction(LJP)",
"### Languages\n'korean'",
"### How to use",
"## Dataset Structure",
"### Data Instances",
"#### ljp\nAn example of 'train' looks as follows.\n\n Other References\nYou can refer to each label's 'laws service content' here.\n'Laws service content' is the statute(source) corresponding to each label.",
"### Data Fields",
"#### ljp\n+ \"fact\": a 'string' feature\n+ \"laws_service\": a 'string' feature\n+ \"laws_service_id\": a classification label, with 177 legal judgment values",
"### Data Splits",
"#### ljp\n+ train: 161,192",
"## Dataset Creation",
"### Curation Rationale\nThe legal domain is arguably one of the most expertise fields that require expert knowledge to comprehend. Natural language processing requires many aspects, and we focus on the dataset requirements. As a gold standard is necessary for the testing and the training of a neural model, we hope that our dataset release will help the advances in natural language processing in the legal domain, especially for those for the Korean legal system.",
"### Source Data\nThese are datasets based on Korean legal case data.",
"### Personal and Sensitive Information\nDue to the nature of legal case data, personal and sensitive information may be included. Therefore, in order to prevent problems that may occur with personal and sensitive information, we proceeded to de-realize the legal case.",
"## Considerations for Using the Data",
"### Other Known Limitations\nWe plan to upload more data and update them as some of the court records may be revised from now on, based on the ever-evolving legal system.",
"## Additional Information",
"### Other Inquiries\nklaid@URL",
"### Licensing Information\nCopyright 2022-present Law&Company Co. Ltd.\n\nLicensed under the CC-BY-NC-ND-4.0",
"### Contributions"
] | [
"TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #multilinguality-monolingual #language-Korean #license-cc-by-nc-nd-4.0 #region-us \n",
"# Dataset Card for KLAID",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Other Known Limitations\n- Additional Information\n - Other Inquiries\n - Licensing Information\n - Contributions",
"## Dataset Description\n- Homepage: URL\n- Leaderboard: URL\n- Point of Contact: klaid@URL",
"### Dataset Summary\nKorean Legal Artificial Intelligence Datasets(KLAID) is a dataset for the development of Korean legal artificial intelligence technology. This time we offer 1 task, which is legal judgment prediction(LJP).",
"### Supported Tasks and Leaderboards\nLegal Judgment Prediction(LJP)",
"### Languages\n'korean'",
"### How to use",
"## Dataset Structure",
"### Data Instances",
"#### ljp\nAn example of 'train' looks as follows.\n\n Other References\nYou can refer to each label's 'laws service content' here.\n'Laws service content' is the statute(source) corresponding to each label.",
"### Data Fields",
"#### ljp\n+ \"fact\": a 'string' feature\n+ \"laws_service\": a 'string' feature\n+ \"laws_service_id\": a classification label, with 177 legal judgment values",
"### Data Splits",
"#### ljp\n+ train: 161,192",
"## Dataset Creation",
"### Curation Rationale\nThe legal domain is arguably one of the most expertise fields that require expert knowledge to comprehend. Natural language processing requires many aspects, and we focus on the dataset requirements. As a gold standard is necessary for the testing and the training of a neural model, we hope that our dataset release will help the advances in natural language processing in the legal domain, especially for those for the Korean legal system.",
"### Source Data\nThese are datasets based on Korean legal case data.",
"### Personal and Sensitive Information\nDue to the nature of legal case data, personal and sensitive information may be included. Therefore, in order to prevent problems that may occur with personal and sensitive information, we proceeded to de-realize the legal case.",
"## Considerations for Using the Data",
"### Other Known Limitations\nWe plan to upload more data and update them as some of the court records may be revised from now on, based on the ever-evolving legal system.",
"## Additional Information",
"### Other Inquiries\nklaid@URL",
"### Licensing Information\nCopyright 2022-present Law&Company Co. Ltd.\n\nLicensed under the CC-BY-NC-ND-4.0",
"### Contributions"
] |
f225875de980bdb87046d1f13438cdd999d22d2f | load_dataset("grullborg/league_style") | Sayaka457/Ehh | [
"region:us"
] | 2022-11-13T06:35:58+00:00 | {} | 2022-11-13T06:36:33+00:00 | [] | [] | TAGS
#region-us
| load_dataset("grullborg/league_style") | [] | [
"TAGS\n#region-us \n"
] |
1dbed00d2d45f34d4b42691a17e3ffa04bb95a15 | # Dataset Card for "dataset-v-1.5"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | SDbiaseval/dataset-v-1.5 | [
"region:us"
] | 2022-11-13T09:15:33+00:00 | {"dataset_info": {"features": [{"name": "adjective", "dtype": "string"}, {"name": "profession", "dtype": "string"}, {"name": "seed", "dtype": "int32"}, {"name": "no", "dtype": "int32"}, {"name": "image_path", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 11894248760.0, "num_examples": 315000}], "download_size": 11903715121, "dataset_size": 11894248760.0}} | 2022-11-13T15:58:38+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "dataset-v-1.5"
More Information needed | [
"# Dataset Card for \"dataset-v-1.5\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"dataset-v-1.5\"\n\nMore Information needed"
] |
ff4d1282bf7f51cbb41c11b75ea39f25c5db068e | # Dataset Card for "amazon-shoe-reviews"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | asadisaghar/amazon-shoe-reviews | [
"region:us"
] | 2022-11-13T12:23:40+00:00 | {"dataset_info": {"features": [{"name": "labels", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1871962.8, "num_examples": 10000}, {"name": "train", "num_bytes": 16847665.2, "num_examples": 90000}], "download_size": 10939033, "dataset_size": 18719628.0}} | 2022-11-13T12:24:01+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "amazon-shoe-reviews"
More Information needed | [
"# Dataset Card for \"amazon-shoe-reviews\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"amazon-shoe-reviews\"\n\nMore Information needed"
] |
14010cfd2e4af424e6725b83d7e8cb78fedf43f3 | # Dataset Card for "msp_train_hubert"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | dlproject/msp_train_hubert | [
"region:us"
] | 2022-11-13T12:44:32+00:00 | {"dataset_info": {"features": [{"name": "input_values", "sequence": {"sequence": {"sequence": "float32"}}}, {"name": "labels", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 10872804940, "num_examples": 29939}], "download_size": 9851597205, "dataset_size": 10872804940}} | 2022-11-13T12:50:12+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "msp_train_hubert"
More Information needed | [
"# Dataset Card for \"msp_train_hubert\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"msp_train_hubert\"\n\nMore Information needed"
] |
f1c466fbd45944d1284d41ad49684efb16ab7ba1 | # Dataset Card for "msp_val_hubert"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | dlproject/msp_val_hubert | [
"region:us"
] | 2022-11-13T12:50:12+00:00 | {"dataset_info": {"features": [{"name": "input_values", "sequence": {"sequence": {"sequence": "float32"}}}, {"name": "labels", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1895848620, "num_examples": 5213}], "download_size": 1773614710, "dataset_size": 1895848620}} | 2022-11-13T12:51:08+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "msp_val_hubert"
More Information needed | [
"# Dataset Card for \"msp_val_hubert\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"msp_val_hubert\"\n\nMore Information needed"
] |
ea85104b30f284f37a7b6e0813e00d0de7439402 |
# Dataset Card for ArchiMob Corpus
## Dataset Description
- **Homepage:** https://wortschatz.uni-leipzig.de/en/download/Swiss%20German
- **Repository:** https://huggingface.co/datasets/statworx/leipzip-swiss
### Dataset Summary
The ArchiMob corpus represents German linguistic varieties spoken within the territory of Switzerland. This corpus is the first electronic resource containing long samples of transcribed text in Swiss German, intended for studying the spatial distribution of morphosyntactic features and for natural language processing.
### Languages
Swiss-German
## Dataset Structure
### Data Instances
```
{
    'sentence': Sentence in Swiss-German,
    'label': Dialect as category
}
```
### Data Fields
`sentence`: Text as string.
`label`: Label as string.
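A short loading sketch; since the splits are not documented above, the "train" split name and the default configuration are assumptions:

```python
from datasets import load_dataset

# Assumed default configuration and "train" split; adjust if the repository exposes
# different split names.
dialects = load_dataset("statworx/swiss-dialects", split="train")

sample = dialects[0]
print(sample["sentence"])  # Swiss German sentence
print(sample["label"])     # dialect label
```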
### Data Splits
[More Information Needed]
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
https://www.spur.uzh.ch/en/departments/research/textgroup/ArchiMob.html
## Additional Information
### Licensing Information
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
### Citation Information
Scherrer, Y., T. Samardžić, E. Glaser (2019). "Digitising Swiss German -- How to process and study a polycentric spoken language". Language Resources and Evaluation. (First online)
Scherrer, Y., T. Samardžić, E. Glaser (2019). "ArchiMob: Ein multidialektales Korpus schweizerdeutscher Spontansprache". Linguistik Online, 98(5), 425-454. https://doi.org/10.13092/lo.98.5947
| statworx/swiss-dialects | [
"task_categories:text-generation",
"task_categories:text-classification",
"task_ids:language-modeling",
"language_creators:found",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:ch",
"license:cc-by-nc-4.0",
"dialect",
"region:us"
] | 2022-11-13T13:50:21+00:00 | {"annotations_creators": [], "language_creators": ["found"], "language": ["ch"], "license": ["cc-by-nc-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": [], "task_categories": ["text-generation", "text-classification"], "task_ids": ["language-modeling"], "pretty_name": "ArchiMob Corpus", "tags": ["dialect"]} | 2022-11-21T16:18:32+00:00 | [] | [
"ch"
] | TAGS
#task_categories-text-generation #task_categories-text-classification #task_ids-language-modeling #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #language-Chamorro #license-cc-by-nc-4.0 #dialect #region-us
|
# Dataset Card for ArchiMob Corpus
## Dataset Description
- Homepage: URL
- Repository: URL
### Dataset Summary
The ArchiMob corpus represents German linguistic varieties spoken within the territory of Switzerland. This corpus is the first electronic resource containing long samples of transcribed text in Swiss German, intended for studying the spatial distribution of morphosyntactic features and for natural language processing.
### Languages
Swiss-German
## Dataset Structure
### Data Instances
''
{
'sentence': Sentence in Swiss-German,
'label': Dialect as category
}
''
### Data Fields
'sentence': Text as string.
'label': Label as string.
### Data Splits
## Dataset Creation
### Source Data
#### Initial Data Collection and Normalization
URL
## Additional Information
### Licensing Information
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
Scherrer, Y., T. Samardžić, E. Glaser (2019). "Digitising Swiss German -- How to process and study a polycentric spoken language". Language Resources and Evaluation. (First online)
Scherrer, Y., T. Samardžić, E. Glaser (2019). "ArchiMob: Ein multidialektales Korpus schweizerdeutscher Spontansprache". Linguistik Online, 98(5), 425-454. URL
| [
"# Dataset Card for ArchiMod Corpus",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL",
"### Dataset Summary\n\nThe ArchiMob corpus represents German linguistic varieties spoken within the territory of Switzerland. This corpus is the first electronic resource containing long samples of transcribed text in Swiss German, intended for studying the spatial distribution of morphosyntactic features and for natural language processing.",
"### Languages\n\nSwiss-German",
"## Dataset Structure",
"### Data Instances\n\n''\n{\n 'sentence': Sentence in Swiss-German,\n 'label': Dialect as category\n}\n''",
"### Data Fields\n\n'sentence': Text as string.\n'label': Label as string.",
"### Data Splits",
"## Dataset Creation",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nURL",
"## Additional Information",
"### Licensing Information\n\nCreative Commons Attribution-NonCommercial-ShareAlike 4.0 International License\n\n\n\nScherrer, Y., T. Samardžić, E. Glaser (2019). \"Digitising Swiss German -- How to process and study a polycentric spoken language\". Language Resources and Evaluation. (First online) \n\nScherrer, Y., T. Samardžić, E. Glaser (2019). \"ArchiMob: Ein multidialektales Korpus schweizerdeutscher Spontansprache\". Linguistik Online, 98(5), 425-454. URL"
] | [
"TAGS\n#task_categories-text-generation #task_categories-text-classification #task_ids-language-modeling #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #language-Chamorro #license-cc-by-nc-4.0 #dialect #region-us \n",
"# Dataset Card for ArchiMod Corpus",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL",
"### Dataset Summary\n\nThe ArchiMob corpus represents German linguistic varieties spoken within the territory of Switzerland. This corpus is the first electronic resource containing long samples of transcribed text in Swiss German, intended for studying the spatial distribution of morphosyntactic features and for natural language processing.",
"### Languages\n\nSwiss-German",
"## Dataset Structure",
"### Data Instances\n\n''\n{\n 'sentence': Sentence in Swiss-German,\n 'label': Dialect as category\n}\n''",
"### Data Fields\n\n'sentence': Text as string.\n'label': Label as string.",
"### Data Splits",
"## Dataset Creation",
"### Source Data",
"#### Initial Data Collection and Normalization\n\nURL",
"## Additional Information",
"### Licensing Information\n\nCreative Commons Attribution-NonCommercial-ShareAlike 4.0 International License\n\n\n\nScherrer, Y., T. Samardžić, E. Glaser (2019). \"Digitising Swiss German -- How to process and study a polycentric spoken language\". Language Resources and Evaluation. (First online) \n\nScherrer, Y., T. Samardžić, E. Glaser (2019). \"ArchiMob: Ein multidialektales Korpus schweizerdeutscher Spontansprache\". Linguistik Online, 98(5), 425-454. URL"
] |
6dc8189638a8cf250ef745c571ee9330b0d5417d | # Dataset Card for "multi-label-classification-test-small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | andreotte/multi-label-classification-test-small | [
"region:us"
] | 2022-11-13T15:07:44+00:00 | {"dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "Door", "1": "Eaves", "2": "Gutter", "3": "Vegetation", "4": "Vent", "5": "Window"}}}}, {"name": "pixel_values", "dtype": "image"}], "splits": [{"name": "test", "num_bytes": 1579714.0, "num_examples": 25}, {"name": "train", "num_bytes": 3593924.0, "num_examples": 59}], "download_size": 5175857, "dataset_size": 5173638.0}} | 2022-11-13T15:07:50+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "multi-label-classification-test-small"
More Information needed | [
"# Dataset Card for \"multi-label-classification-test-small\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"multi-label-classification-test-small\"\n\nMore Information needed"
] |
c1dbebe3462373a1ed368de0a04eb4df8117bda0 | annotations_creators:
- found
language:
- en
language_creators:
- found
license:
- artistic-2.0
multilinguality:
- monolingual
pretty_name: a dataset of Opera da Tre Soldi by Berliner Ensemble
size_categories:
- n<1K
source_datasets:
- original
tags: []
task_categories:
- text-to-image
task_ids: []
| Gr3en/OperaDa3Soldi | [
"region:us"
] | 2022-11-13T15:23:23+00:00 | {} | 2022-11-13T15:32:51+00:00 | [] | [] | TAGS
#region-us
| annotations_creators:
- found
language:
- en
language_creators:
- found
license:
- artistic-2.0
multilinguality:
- monolingual
pretty_name: a dataset of Opera da Tre Soldi by Berliner Ensemble
size_categories:
- n<1K
source_datasets:
- original
tags: []
task_categories:
- text-to-image
task_ids: []
| [] | [
"TAGS\n#region-us \n"
] |
3413a80d3809c44e8b5e06911f07f157c7cebe98 | annotations_creators:
- found
language:
- en
language_creators:
- found
license:
- artistic-2.0
multilinguality:
- monolingual
pretty_name: a dataset of Opera da Tre Soldi by Berliner Ensemble
size_categories:
- n<1K
source_datasets:
- original
tags: []
task_categories:
- text-to-image
task_ids: []
| Gr3en/MusiForPercussion2 | [
"region:us"
] | 2022-11-13T16:18:06+00:00 | {} | 2022-11-13T16:20:15+00:00 | [] | [] | TAGS
#region-us
| annotations_creators:
- found
language:
- en
language_creators:
- found
license:
- artistic-2.0
multilinguality:
- monolingual
pretty_name: a dataset of Opera da Tre Soldi by Berliner Ensemble
size_categories:
- n<1K
source_datasets:
- original
tags: []
task_categories:
- text-to-image
task_ids: []
| [] | [
"TAGS\n#region-us \n"
] |
9c823ccc449c7e2e2355902a9f45c9addaaf6b02 | # Dataset Card for "dataset-v-1.4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | SDbiaseval/jobs-sd-1.4 | [
"region:us"
] | 2022-11-13T16:19:41+00:00 | {"dataset_info": {"features": [{"name": "adjective", "dtype": "string"}, {"name": "profession", "dtype": "string"}, {"name": "no", "dtype": "int32"}, {"name": "image_path", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1161828556.5, "num_examples": 31500}], "download_size": 1167871729, "dataset_size": 1161828556.5}} | 2022-12-12T20:56:20+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "dataset-v-1.4"
More Information needed | [
"# Dataset Card for \"dataset-v-1.4\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"dataset-v-1.4\"\n\nMore Information needed"
] |
4c67cc26bb3d44cf14848c8e656f011feda8afd3 | # Dataset Card for "legaltokenized512"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | vegeta/legaltokenized512 | [
"region:us"
] | 2022-11-13T18:00:34+00:00 | {"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 70986447784, "num_examples": 10645838}, {"name": "validation", "num_bytes": 7747149120, "num_examples": 1161840}], "download_size": 14173273124, "dataset_size": 78733596904}} | 2022-11-18T08:03:57+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "legaltokenized512"
More Information needed | [
"# Dataset Card for \"legaltokenized512\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"legaltokenized512\"\n\nMore Information needed"
] |
e526476af6384a8c8c1b9baaa5b6e5717bac2980 |
# Dataset Card for AnEM
## Dataset Description
- **Homepage:** http://www.nactem.ac.uk/anatomy/
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,COREF,RE
AnEM corpus is a domain- and species-independent resource manually annotated for anatomical
entity mentions using a fine-grained classification system. The corpus consists of 500 documents
(over 90,000 words) selected randomly from citation abstracts and full-text papers with
the aim of making the corpus representative of the entire available biomedical scientific
literature. The corpus annotation covers mentions of both healthy and pathological anatomical
entities and contains over 3,000 annotated mentions.
## Citation Information
```
@inproceedings{ohta-etal-2012-open,
author = {Ohta, Tomoko and Pyysalo, Sampo and Tsujii, Jun{'}ichi and Ananiadou, Sophia},
title = {Open-domain Anatomical Entity Mention Detection},
journal = {},
volume = {W12-43},
year = {2012},
url = {https://aclanthology.org/W12-4304},
doi = {},
biburl = {},
bibsource = {},
publisher = {Association for Computational Linguistics}
}
```
| bigbio/an_em | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] | 2022-11-13T18:05:07+00:00 | {"language": ["en"], "license": "cc-by-sa-3.0", "multilinguality": "monolingual", "pretty_name": "AnEM", "bigbio_language": ["English"], "bigbio_license_shortname": "CC_BY_SA_3p0", "homepage": "http://www.nactem.ac.uk/anatomy/", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "COREFERENCE_RESOLUTION", "RELATION_EXTRACTION"]} | 2022-12-22T15:43:14+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-cc-by-sa-3.0 #region-us
|
# Dataset Card for AnEM
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NER,COREF,RE
AnEM corpus is a domain- and species-independent resource manually annotated for anatomical
entity mentions using a fine-grained classification system. The corpus consists of 500 documents
(over 90,000 words) selected randomly from citation abstracts and full-text papers with
the aim of making the corpus representative of the entire available biomedical scientific
literature. The corpus annotation covers mentions of both healthy and pathological anatomical
entities and contains over 3,000 annotated mentions.
| [
"# Dataset Card for AnEM",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,COREF,RE\n\n\nAnEM corpus is a domain- and species-independent resource manually annotated for anatomical\nentity mentions using a fine-grained classification system. The corpus consists of 500 documents\n(over 90,000 words) selected randomly from citation abstracts and full-text papers with\nthe aim of making the corpus representative of the entire available biomedical scientific\nliterature. The corpus annotation covers mentions of both healthy and pathological anatomical\nentities and contains over 3,000 annotated mentions."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-cc-by-sa-3.0 #region-us \n",
"# Dataset Card for AnEM",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,COREF,RE\n\n\nAnEM corpus is a domain- and species-independent resource manually annotated for anatomical\nentity mentions using a fine-grained classification system. The corpus consists of 500 documents\n(over 90,000 words) selected randomly from citation abstracts and full-text papers with\nthe aim of making the corpus representative of the entire available biomedical scientific\nliterature. The corpus annotation covers mentions of both healthy and pathological anatomical\nentities and contains over 3,000 annotated mentions."
] |
c5aeeab2ea5865c7d9bafddcf1fbab5a230a6607 |
# Dataset Card for AnatEM
## Dataset Description
- **Homepage:** http://nactem.ac.uk/anatomytagger/#AnatEM
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER
The extended Anatomical Entity Mention corpus (AnatEM) consists of 1212 documents (approx. 250,000 words) manually annotated to identify over 13,000 mentions of anatomical entities. Each annotation is assigned one of 12 granularity-based types such as Cellular component, Tissue and Organ, defined with reference to the Common Anatomy Reference Ontology.
## Citation Information
```
@article{pyysalo2014anatomical,
title={Anatomical entity mention recognition at literature scale},
author={Pyysalo, Sampo and Ananiadou, Sophia},
journal={Bioinformatics},
volume={30},
number={6},
pages={868--875},
year={2014},
publisher={Oxford University Press}
}
```
| bigbio/anat_em | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] | 2022-11-13T18:26:03+00:00 | {"language": ["en"], "license": "cc-by-sa-3.0", "multilinguality": "monolingual", "pretty_name": "AnatEM", "bigbio_language": ["English"], "bigbio_license_shortname": "CC_BY_SA_3p0", "homepage": "http://nactem.ac.uk/anatomytagger/#AnatEM", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION"]} | 2022-12-22T15:43:16+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-cc-by-sa-3.0 #region-us
|
# Dataset Card for AnatEM
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NER
The extended Anatomical Entity Mention corpus (AnatEM) consists of 1212 documents (approx. 250,000 words) manually annotated to identify over 13,000 mentions of anatomical entities. Each annotation is assigned one of 12 granularity-based types such as Cellular component, Tissue and Organ, defined with reference to the Common Anatomy Reference Ontology.
| [
"# Dataset Card for AnatEM",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER\n\n\nThe extended Anatomical Entity Mention corpus (AnatEM) consists of 1212 documents (approx. 250,000 words) manually annotated to identify over 13,000 mentions of anatomical entities. Each annotation is assigned one of 12 granularity-based types such as Cellular component, Tissue and Organ, defined with reference to the Common Anatomy Reference Ontology."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-cc-by-sa-3.0 #region-us \n",
"# Dataset Card for AnatEM",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER\n\n\nThe extended Anatomical Entity Mention corpus (AnatEM) consists of 1212 documents (approx. 250,000 words) manually annotated to identify over 13,000 mentions of anatomical entities. Each annotation is assigned one of 12 granularity-based types such as Cellular component, Tissue and Organ, defined with reference to the Common Anatomy Reference Ontology."
] |
3282bbaa07ef9add1d9b313f3e9780a88406bca4 |
# Dataset Card for AskAPatient
## Dataset Description
- **Homepage:** https://zenodo.org/record/55013
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,NED
The AskAPatient dataset contains medical concepts written on social media mapped to how they are formally written in medical ontologies (SNOMED-CT and AMT).
## Citation Information
```
@inproceedings{limsopatham-collier-2016-normalising,
title = "Normalising Medical Concepts in Social Media Texts by Learning Semantic Representation",
author = "Limsopatham, Nut and
Collier, Nigel",
booktitle = "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2016",
address = "Berlin, Germany",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P16-1096",
doi = "10.18653/v1/P16-1096",
pages = "1014--1023",
}
```
| bigbio/ask_a_patient | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-11-13T18:26:06+00:00 | {"language": ["en"], "license": "cc-by-4.0", "multilinguality": "monolingual", "pretty_name": "AskAPatient", "bigbio_language": ["English"], "bigbio_license_shortname": "CC_BY_4p0", "homepage": "https://zenodo.org/record/55013", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "NAMED_ENTITY_DISAMBIGUATION"]} | 2022-12-22T15:43:18+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-cc-by-4.0 #region-us
|
# Dataset Card for AskAPatient
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NER,NED
The AskAPatient dataset contains medical concepts written on social media mapped to how they are formally written in medical ontologies (SNOMED-CT and AMT).
| [
"# Dataset Card for AskAPatient",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,NED\n\n\n\nThe AskAPatient dataset contains medical concepts written on social media mapped to how they are formally written in medical ontologies (SNOMED-CT and AMT)."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-cc-by-4.0 #region-us \n",
"# Dataset Card for AskAPatient",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,NED\n\n\n\nThe AskAPatient dataset contains medical concepts written on social media mapped to how they are formally written in medical ontologies (SNOMED-CT and AMT)."
] |
d32845d9684db95f6f5922416776d4bfa21fdace | This is a labeled dataset for the Ukrainian language. It consists of sentences labeled 0, 1 or 2 for negative, neutral or positive sentiment, respectively. The dataset is based on the classic text Shadows of Forgotten Ancestors written by Mykhailo Kotsiubynsky. The markup of the sentences was done automatically based on the lists of positive and negative words from the Sentiment Lexicons for All Major Languages project (Chen & Skiena, ACL 2014). These lists were checked and edited manually by me to exclude ambiguous and mistakenly included words.
| SergiiGurbych/sent_anal_ukr_tzp | [
"region:us"
] | 2022-11-13T18:48:00+00:00 | {} | 2022-11-15T23:17:42+00:00 | [] | [] | TAGS
#region-us
 | This is a labeled dataset for the Ukrainian language. It consists of sentences labeled 0, 1 or 2 for negative, neutral or positive sentiment, respectively. The dataset is based on the classic text Shadows of Forgotten Ancestors written by Mykhailo Kotsiubynsky. The markup of the sentences was done automatically based on the lists of positive and negative words from the Sentiment Lexicons for All Major Languages project (Chen & Skiena, ACL 2014). These lists were checked and edited manually by me to exclude ambiguous and mistakenly included words.
| [] | [
"TAGS\n#region-us \n"
] |
237a1a95c35094a56d149d89d7937597a5e1d4cd |

This is the dataset used for making the model : https://huggingface.co/Guizmus/AnimeChanStyle
The images were made by users of the Stable Diffusion Discord using CreativeML-OpenRail-M licensed models, with the intent of making this dataset.
90 pictures captioned with their content by hand, with the suffix ",AnimeChan Style"
The collection process was open to the public for less than a day, until enough variety had been gathered to train, through a Dreambooth method, a style corresponding to the different members of this community.
The captioned pictures are available in [this zip file](https://huggingface.co/datasets/Guizmus/AnimeChanStyle/resolve/main/AnimeChanStyle%20v2.3.zip) | Guizmus/AnimeChanStyle | [
"license:creativeml-openrail-m",
"region:us"
] | 2022-11-13T21:13:37+00:00 | {"license": "creativeml-openrail-m", "thumbnail": "https://huggingface.co/datasets/Guizmus/AnimeChanStyle/resolve/main/showcase_dataset.jpg"} | 2022-11-14T23:45:20+00:00 | [] | [] | TAGS
#license-creativeml-openrail-m #region-us
|
!showcase
This is the dataset used for making the model : URL
The images were made by users of the Stable Diffusion Discord using CreativeML-OpenRail-M licensed models, with the intent of making this dataset.
90 pictures captioned with their content by hand, with the suffix ",AnimeChan Style"
The collection process was open to the public for less than a day, until enough variety had been gathered to train, through a Dreambooth method, a style corresponding to the different members of this community.
The captioned pictures are available in this zip file | [] | [
"TAGS\n#license-creativeml-openrail-m #region-us \n"
] |
2105e19baf41eaf8459282bc7fcbbd2e28aca299 | # Dataset Card for "pile-tokenized-2b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | NeelNanda/pile-old-tokenized-2b | [
"region:us"
] | 2022-11-13T21:17:07+00:00 | {"dataset_info": {"features": [{"name": "tokens", "sequence": "int32"}], "splits": [{"name": "train", "num_bytes": 8200000000, "num_examples": 2000000}], "download_size": 3352864661, "dataset_size": 8200000000}} | 2022-11-13T21:29:57+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "pile-tokenized-2b"
More Information needed | [
"# Dataset Card for \"pile-tokenized-2b\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"pile-tokenized-2b\"\n\nMore Information needed"
] |
b7479f18ce44afc58adf70e33ac7aa7be7e37257 | # Dataset Card for "c4-code-tokenized-2b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | NeelNanda/c4-code-tokenized-2b | [
"region:us"
] | 2022-11-13T21:42:47+00:00 | {"dataset_info": {"features": [{"name": "tokens", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 13581607992, "num_examples": 1657102}], "download_size": 2953466988, "dataset_size": 13581607992}} | 2022-11-13T21:54:56+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "c4-code-tokenized-2b"
More Information needed | [
"# Dataset Card for \"c4-code-tokenized-2b\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"c4-code-tokenized-2b\"\n\nMore Information needed"
] |
4f478773228f7640056a8d6230dd8be0120878fd |
# Dataset Card for BC5CDR
## Dataset Description
- **Homepage:** http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,NED,RE
The BioCreative V Chemical Disease Relation (CDR) dataset is a large annotated text corpus of human annotations of all chemicals, diseases and their interactions in 1,500 PubMed articles.
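A hedged loading sketch: the config name follows the usual BigBIO schema naming and is an assumption, not something stated on this card.

```python
from datasets import load_dataset, get_dataset_config_names

# "bc5cdr_bigbio_kb" is the assumed knowledge-base schema config; list the real
# config names first if this guess does not match.
print(get_dataset_config_names("bigbio/bc5cdr"))
bc5cdr = load_dataset("bigbio/bc5cdr", name="bc5cdr_bigbio_kb", split="train")

doc = bc5cdr[0]
print(len(doc["entities"]), "entity mentions,", len(doc["relations"]), "chemical-disease relations")
```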
## Citation Information
```
@article{DBLP:journals/biodb/LiSJSWLDMWL16,
author = {Jiao Li and
Yueping Sun and
Robin J. Johnson and
Daniela Sciaky and
Chih{-}Hsuan Wei and
Robert Leaman and
Allan Peter Davis and
Carolyn J. Mattingly and
Thomas C. Wiegers and
Zhiyong Lu},
title = {BioCreative {V} {CDR} task corpus: a resource for chemical disease
relation extraction},
journal = {Database J. Biol. Databases Curation},
volume = {2016},
year = {2016},
url = {https://doi.org/10.1093/database/baw068},
doi = {10.1093/database/baw068},
timestamp = {Thu, 13 Aug 2020 12:41:41 +0200},
biburl = {https://dblp.org/rec/journals/biodb/LiSJSWLDMWL16.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| bigbio/bc5cdr | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-11-13T22:06:13+00:00 | {"language": ["en"], "license": "other", "multilinguality": "monolingual", "pretty_name": "BC5CDR", "bigbio_language": ["English"], "bigbio_license_shortname": "PUBLIC_DOMAIN_MARK_1p0", "homepage": "http://www.biocreative.org/tasks/biocreative-v/track-3-cdr/", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "NAMED_ENTITY_DISAMBIGUATION", "RELATION_EXTRACTION"]} | 2023-12-07T03:52:56+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-other #region-us
|
# Dataset Card for BC5CDR
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NER,NED,RE
The BioCreative V Chemical Disease Relation (CDR) dataset is a large annotated text corpus of human annotations of all chemicals, diseases and their interactions in 1,500 PubMed articles.
| [
"# Dataset Card for BC5CDR",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,NED,RE\n\n\nThe BioCreative V Chemical Disease Relation (CDR) dataset is a large annotated text corpus of human annotations of all chemicals, diseases and their interactions in 1,500 PubMed articles."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-other #region-us \n",
"# Dataset Card for BC5CDR",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,NED,RE\n\n\nThe BioCreative V Chemical Disease Relation (CDR) dataset is a large annotated text corpus of human annotations of all chemicals, diseases and their interactions in 1,500 PubMed articles."
] |
4707df98cc3cc8f8f684ca1b42f19dbaf83b21c3 |
# Dataset Card for BC7-LitCovid
## Dataset Description
- **Homepage:** https://biocreative.bioinformatics.udel.edu/tasks/biocreative-vii/track-5/
- **Pubmed:** True
- **Public:** True
- **Tasks:** TXTCLASS
The training and development datasets contain the publicly-available text of over 30 thousand COVID-19-related articles and their metadata (e.g., title, abstract, journal). Articles in both datasets have been manually reviewed and articles annotated by in-house models.
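For orientation only, the sketch below shows the shape of the multi-label topic classification problem this track evaluates, using a TF-IDF one-vs-rest baseline on two fabricated abstracts. The topic names and column layout are illustrative assumptions, not the loader's actual fields.

```python
# Toy multi-label topic-classification baseline of the kind evaluated in this track
# (illustrative data only; not the official task pipeline).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

texts = [
    "Remdesivir shortens recovery time in hospitalized COVID-19 patients.",
    "Modeling transmission dynamics of SARS-CoV-2 in urban populations.",
]
topics = [["Treatment"], ["Transmission", "Epidemic Forecasting"]]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(topics)                   # binary indicator matrix
X = TfidfVectorizer().fit_transform(texts)      # bag-of-words features

clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
print(mlb.inverse_transform(clf.predict(X)))
```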
## Citation Information
```
@inproceedings{chen2021overview,
title = {
Overview of the BioCreative VII LitCovid Track: multi-label topic
classification for COVID-19 literature annotation
},
author = {
Chen, Qingyu and Allot, Alexis and Leaman, Robert and Do{\u{g}}an, Rezarta
Islamaj and Lu, Zhiyong
},
year = 2021,
booktitle = {Proceedings of the seventh BioCreative challenge evaluation workshop}
}
```
| bigbio/bc7_litcovid | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-11-13T22:06:17+00:00 | {"language": ["en"], "license": "unknown", "multilinguality": "monolingual", "pretty_name": "BC7-LitCovid", "bigbio_language": ["English"], "bigbio_license_shortname": "UNKNOWN", "homepage": "https://biocreative.bioinformatics.udel.edu/tasks/biocreative-vii/track-5/", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["TEXT_CLASSIFICATION"]} | 2022-12-22T15:43:23+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-unknown #region-us
|
# Dataset Card for BC7-LitCovid
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: TXTCLASS
The training and development datasets contain the publicly-available text of over 30 thousand COVID-19-related articles and their metadata (e.g., title, abstract, journal). Articles in both datasets have been manually reviewed and articles annotated by in-house models.
| [
"# Dataset Card for BC7-LitCovid",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: TXTCLASS\n\n\nThe training and development datasets contain the publicly-available text of over 30 thousand COVID-19-related articles and their metadata (e.g., title, abstract, journal). Articles in both datasets have been manually reviewed and articles annotated by in-house models."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-unknown #region-us \n",
"# Dataset Card for BC7-LitCovid",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: TXTCLASS\n\n\nThe training and development datasets contain the publicly-available text of over 30 thousand COVID-19-related articles and their metadata (e.g., title, abstract, journal). Articles in both datasets have been manually reviewed and articles annotated by in-house models."
] |
d3036fd1137e1e84913dfd9c7d288285c99b54f7 |
# Dataset Card for Bio-SimVerb
## Dataset Description
- **Homepage:** https://github.com/cambridgeltl/bio-simverb
- **Pubmed:** True
- **Public:** True
- **Tasks:** STS
This repository contains the evaluation datasets for the paper Bio-SimVerb and Bio-SimLex: Wide-coverage Evaluation Sets of Word Similarity in Biomedicine by Billy Chiu, Sampo Pyysalo and Anna Korhonen.
## Citation Information
```
@article{article,
title = {
Bio-SimVerb and Bio-SimLex: Wide-coverage evaluation sets of word
similarity in biomedicine
},
author = {Chiu, Billy and Pyysalo, Sampo and Vulić, Ivan and Korhonen, Anna},
year = 2018,
month = {02},
journal = {BMC Bioinformatics},
volume = 19,
pages = {},
doi = {10.1186/s12859-018-2039-z}
}
```
| bigbio/bio_sim_verb | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-11-13T22:06:20+00:00 | {"language": ["en"], "license": "unknown", "multilinguality": "monolingual", "pretty_name": "Bio-SimVerb", "bigbio_language": ["English"], "bigbio_license_shortname": "UNKNOWN", "homepage": "https://github.com/cambridgeltl/bio-simverb", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["SEMANTIC_SIMILARITY"]} | 2022-12-22T15:43:25+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-unknown #region-us
|
# Dataset Card for Bio-SimVerb
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: STS
This repository contains the evaluation datasets for the paper Bio-SimVerb and Bio-SimLex: Wide-coverage Evaluation Sets of Word Similarity in Biomedicine by Billy Chiu, Sampo Pyysalo and Anna Korhonen.
| [
"# Dataset Card for Bio-SimVerb",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: STS\n\n\n\nThis repository contains the evaluation datasets for the paper Bio-SimVerb and Bio-SimLex: Wide-coverage Evaluation Sets of Word Similarity in Biomedicine by Billy Chiu, Sampo Pyysalo and Anna Korhonen."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-unknown #region-us \n",
"# Dataset Card for Bio-SimVerb",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: STS\n\n\n\nThis repository contains the evaluation datasets for the paper Bio-SimVerb and Bio-SimLex: Wide-coverage Evaluation Sets of Word Similarity in Biomedicine by Billy Chiu, Sampo Pyysalo and Anna Korhonen."
] |
afeb2654ea0d2bb3b60e56e21c55448f4673ef4f |
# Dataset Card for Bio-SimLex
## Dataset Description
- **Homepage:** https://github.com/cambridgeltl/bio-simverb
- **Pubmed:** True
- **Public:** True
- **Tasks:** STS
Bio-SimLex enables intrinsic evaluation of word representations. This evaluation can serve as a predictor of performance on various downstream tasks in the biomedical domain. The results on Bio-SimLex using standard word representation models highlight the importance of developing dedicated evaluation resources for NLP in biomedicine for particular word classes (e.g. verbs).
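A minimal sketch of the usual intrinsic-evaluation protocol for word-similarity sets of this kind: rank-correlate human similarity ratings with the cosine similarities produced by a word-vector model. The word pairs, gold scores and vectors below are placeholders, not the actual Bio-SimLex data.

```python
# Sketch of intrinsic evaluation on a word-similarity set (placeholder data).
import numpy as np
from scipy.stats import spearmanr

word_vectors = {
    "tumour":   np.array([0.9, 0.1, 0.3]),
    "neoplasm": np.array([0.8, 0.2, 0.35]),
    "kinase":   np.array([0.1, 0.9, 0.2]),
    "enzyme":   np.array([0.2, 0.8, 0.4]),
}
pairs = [("tumour", "neoplasm"), ("kinase", "enzyme"), ("tumour", "kinase")]
gold_scores = [9.1, 7.4, 1.2]  # hypothetical human ratings

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

model_scores = [cosine(word_vectors[a], word_vectors[b]) for a, b in pairs]
rho, _ = spearmanr(gold_scores, model_scores)
print(f"Spearman rho: {rho:.3f}")
```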
## Citation Information
```
@article{article,
title = {
Bio-SimVerb and Bio-SimLex: Wide-coverage evaluation sets of word
similarity in biomedicine
},
author = {Chiu, Billy and Pyysalo, Sampo and Vulić, Ivan and Korhonen, Anna},
year = 2018,
month = {02},
journal = {BMC Bioinformatics},
volume = 19,
pages = {},
doi = {10.1186/s12859-018-2039-z}
}
```
| bigbio/bio_simlex | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-11-13T22:06:24+00:00 | {"language": ["en"], "license": "unknown", "multilinguality": "monolingual", "pretty_name": "Bio-SimLex", "bigbio_language": ["English"], "bigbio_license_shortname": "UNKNOWN", "homepage": "https://github.com/cambridgeltl/bio-simverb", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["SEMANTIC_SIMILARITY"]} | 2022-12-22T15:43:27+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-unknown #region-us
|
# Dataset Card for Bio-SimLex
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: STS
Bio-SimLex enables intrinsic evaluation of word representations. This evaluation can serve as a predictor of performance on various downstream tasks in the biomedical domain. The results on Bio-SimLex using standard word representation models highlight the importance of developing dedicated evaluation resources for NLP in biomedicine for particular word classes (e.g. verbs).
| [
"# Dataset Card for Bio-SimLex",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: STS\n\n\nBio-SimLex enables intrinsic evaluation of word representations. This evaluation can serve as a predictor of performance on various downstream tasks in the biomedical domain. The results on Bio-SimLex using standard word representation models highlight the importance of developing dedicated evaluation resources for NLP in biomedicine for particular word classes (e.g. verbs)."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-unknown #region-us \n",
"# Dataset Card for Bio-SimLex",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: STS\n\n\nBio-SimLex enables intrinsic evaluation of word representations. This evaluation can serve as a predictor of performance on various downstream tasks in the biomedical domain. The results on Bio-SimLex using standard word representation models highlight the importance of developing dedicated evaluation resources for NLP in biomedicine for particular word classes (e.g. verbs)."
] |
16ef8e93b4fce968fa18d7d189353650d8ef2975 |
# Dataset Card for MESINESP 2021
## Dataset Description
- **Homepage:** https://zenodo.org/record/5602914#.YhSXJ5PMKWt
- **Pubmed:** False
- **Public:** True
- **Tasks:** TXTCLASS
The main aim of MESINESP2 is to promote the development of practically relevant semantic indexing tools for biomedical content in non-English language. We have generated a manually annotated corpus, where domain experts have labeled a set of scientific literature, clinical trials, and patent abstracts. All the documents were labeled with DeCS descriptors, which is a structured controlled vocabulary created by BIREME to index scientific publications on BvSalud, the largest database of scientific documents in Spanish, which hosts records from the databases LILACS, MEDLINE, IBECS, among others.
MESINESP track at BioASQ9 explores the efficiency of systems for assigning DeCS to different types of biomedical documents. To that purpose, we have divided the task into three subtracks depending on the document type. Then, for each one we generated an annotated corpus which was provided to participating teams:
- [Subtrack 1 corpus] MESINESP-L – Scientific Literature: It contains all Spanish records from LILACS and IBECS databases at the Virtual Health Library (VHL) with non-empty abstract written in Spanish.
- [Subtrack 2 corpus] MESINESP-T- Clinical Trials contains records from Registro Español de Estudios Clínicos (REEC). REEC doesn't provide documents with the structure title/abstract needed in BioASQ, for that reason we have built artificial abstracts based on the content available in the data crawled using the REEC API.
- [Subtrack 3 corpus] MESINESP-P – Patents: This corpus includes patents in Spanish extracted from Google Patents which have the IPC code “A61P” and “A61K31”. In addition, we also provide a set of complementary data such as: the DeCS terminology file, a silver standard with the participants' predictions to the task background set and the entities of medications, diseases, symptoms and medical procedures extracted from the BSC NERs documents.
## Citation Information
```
@conference {396,
title = {Overview of BioASQ 2021-MESINESP track. Evaluation of
advance hierarchical classification techniques for scientific
literature, patents and clinical trials.},
booktitle = {Proceedings of the 9th BioASQ Workshop
A challenge on large-scale biomedical semantic indexing
and question answering},
year = {2021},
url = {http://ceur-ws.org/Vol-2936/paper-11.pdf},
author = {Gasco, Luis and Nentidis, Anastasios and Krithara, Anastasia
and Estrada-Zavala, Darryl and Toshiyuki Murasaki, Renato and Primo-Pe{\~n}a,
Elena and Bojo-Canales, Cristina and Paliouras, Georgios and Krallinger, Martin}
}
```
| bigbio/bioasq_2021_mesinesp | [
"multilinguality:monolingual",
"language:es",
"license:cc-by-4.0",
"region:us"
] | 2022-11-13T22:06:28+00:00 | {"language": ["es"], "license": "cc-by-4.0", "multilinguality": "monolingual", "pretty_name": "MESINESP 2021", "bigbio_language": ["Spanish"], "bigbio_license_shortname": "CC_BY_4p0", "homepage": "https://zenodo.org/record/5602914#.YhSXJ5PMKWt", "bigbio_pubmed": false, "bigbio_public": true, "bigbio_tasks": ["TEXT_CLASSIFICATION"]} | 2022-12-22T15:43:30+00:00 | [] | [
"es"
] | TAGS
#multilinguality-monolingual #language-Spanish #license-cc-by-4.0 #region-us
|
# Dataset Card for MESINESP 2021
## Dataset Description
- Homepage: URL
- Pubmed: False
- Public: True
- Tasks: TXTCLASS
The main aim of MESINESP2 is to promote the development of practically relevant semantic indexing tools for biomedical content in non-English language. We have generated a manually annotated corpus, where domain experts have labeled a set of scientific literature, clinical trials, and patent abstracts. All the documents were labeled with DeCS descriptors, which is a structured controlled vocabulary created by BIREME to index scientific publications on BvSalud, the largest database of scientific documents in Spanish, which hosts records from the databases LILACS, MEDLINE, IBECS, among others.
MESINESP track at BioASQ9 explores the efficiency of systems for assigning DeCS to different types of biomedical documents. To that purpose, we have divided the task into three subtracks depending on the document type. Then, for each one we generated an annotated corpus which was provided to participating teams:
- [Subtrack 1 corpus] MESINESP-L – Scientific Literature: It contains all Spanish records from LILACS and IBECS databases at the Virtual Health Library (VHL) with non-empty abstract written in Spanish.
- [Subtrack 2 corpus] MESINESP-T- Clinical Trials contains records from Registro Español de Estudios Clínicos (REEC). REEC doesn't provide documents with the structure title/abstract needed in BioASQ, for that reason we have built artificial abstracts based on the content available in the data crawled using the REEC API.
- [Subtrack 3 corpus] MESINESP-P – Patents: This corpus includes patents in Spanish extracted from Google Patents which have the IPC code “A61P” and “A61K31”. In addition, we also provide a set of complementary data such as: the DeCS terminology file, a silver standard with the participants' predictions to the task background set and the entities of medications, diseases, symptoms and medical procedures extracted from the BSC NERs documents.
| [
"# Dataset Card for MESINESP 2021",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: TXTCLASS\n\n\nThe main aim of MESINESP2 is to promote the development of practically relevant semantic indexing tools for biomedical content in non-English language. We have generated a manually annotated corpus, where domain experts have labeled a set of scientific literature, clinical trials, and patent abstracts. All the documents were labeled with DeCS descriptors, which is a structured controlled vocabulary created by BIREME to index scientific publications on BvSalud, the largest database of scientific documents in Spanish, which hosts records from the databases LILACS, MEDLINE, IBECS, among others.\n\nMESINESP track at BioASQ9 explores the efficiency of systems for assigning DeCS to different types of biomedical documents. To that purpose, we have divided the task into three subtracks depending on the document type. Then, for each one we generated an annotated corpus which was provided to participating teams:\n\n- [Subtrack 1 corpus] MESINESP-L – Scientific Literature: It contains all Spanish records from LILACS and IBECS databases at the Virtual Health Library (VHL) with non-empty abstract written in Spanish.\n- [Subtrack 2 corpus] MESINESP-T- Clinical Trials contains records from Registro Español de Estudios Clínicos (REEC). REEC doesn't provide documents with the structure title/abstract needed in BioASQ, for that reason we have built artificial abstracts based on the content available in the data crawled using the REEC API.\n- [Subtrack 3 corpus] MESINESP-P – Patents: This corpus includes patents in Spanish extracted from Google Patents which have the IPC code “A61P” and “A61K31”. In addition, we also provide a set of complementary data such as: the DeCS terminology file, a silver standard with the participants' predictions to the task background set and the entities of medications, diseases, symptoms and medical procedures extracted from the BSC NERs documents."
] | [
"TAGS\n#multilinguality-monolingual #language-Spanish #license-cc-by-4.0 #region-us \n",
"# Dataset Card for MESINESP 2021",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: TXTCLASS\n\n\nThe main aim of MESINESP2 is to promote the development of practically relevant semantic indexing tools for biomedical content in non-English language. We have generated a manually annotated corpus, where domain experts have labeled a set of scientific literature, clinical trials, and patent abstracts. All the documents were labeled with DeCS descriptors, which is a structured controlled vocabulary created by BIREME to index scientific publications on BvSalud, the largest database of scientific documents in Spanish, which hosts records from the databases LILACS, MEDLINE, IBECS, among others.\n\nMESINESP track at BioASQ9 explores the efficiency of systems for assigning DeCS to different types of biomedical documents. To that purpose, we have divided the task into three subtracks depending on the document type. Then, for each one we generated an annotated corpus which was provided to participating teams:\n\n- [Subtrack 1 corpus] MESINESP-L – Scientific Literature: It contains all Spanish records from LILACS and IBECS databases at the Virtual Health Library (VHL) with non-empty abstract written in Spanish.\n- [Subtrack 2 corpus] MESINESP-T- Clinical Trials contains records from Registro Español de Estudios Clínicos (REEC). REEC doesn't provide documents with the structure title/abstract needed in BioASQ, for that reason we have built artificial abstracts based on the content available in the data crawled using the REEC API.\n- [Subtrack 3 corpus] MESINESP-P – Patents: This corpus includes patents in Spanish extracted from Google Patents which have the IPC code “A61P” and “A61K31”. In addition, we also provide a set of complementary data such as: the DeCS terminology file, a silver standard with the participants' predictions to the task background set and the entities of medications, diseases, symptoms and medical procedures extracted from the BSC NERs documents."
] |
85042e23931b6ad55b381e9eee785873a5966169 |
# Dataset Card for BioASQ Task C 2017
## Dataset Description
- **Homepage:** http://participants-area.bioasq.org/general_information/Task5c/
- **Pubmed:** True
- **Public:** False
- **Tasks:** TXTCLASS
The training data set for this task contains annotated biomedical articles
published in PubMed and corresponding full text from PMC. By annotated is meant
that GrantIDs and corresponding Grant Agencies have been identified in the full
text of articles
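As a deliberately naive illustration (not part of the task's official tooling), a system might start from surface patterns over acknowledgement sentences to propose GrantID candidates; real grant identifiers vary widely across agencies, so the regex below is only a sketch.

```python
# Very rough GrantID candidate finder over an acknowledgement sentence (illustrative only).
import re

ACK = ("This work was supported by NIH grant R01 GM123456 and by "
       "Wellcome Trust grant 098051.")

# crude pattern: longish uppercase/digit tokens that mix letters and digits
candidate_ids = [tok for tok in re.findall(r"\b[A-Z0-9][A-Z0-9/-]{4,}\b", ACK)
                 if any(c.isdigit() for c in tok) and any(c.isalpha() for c in tok)]
print(candidate_ids)
```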
## Citation Information
```
@inproceedings{nentidis-etal-2017-results,
    title = {Results of the fifth edition of the {B}io{ASQ} Challenge},
    author = {
      Nentidis, Anastasios and Bougiatiotis, Konstantinos and Krithara,
      Anastasia and Paliouras, Georgios and Kakadiaris, Ioannis
    },
    year = 2017,
    booktitle = {BioNLP 2017},
    doi = {10.18653/v1/W17-2306},
    url = {https://aclanthology.org/W17-2306}
}
```
| bigbio/bioasq_task_c_2017 | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-11-13T22:06:31+00:00 | {"language": ["en"], "license": "other", "multilinguality": "monolingual", "pretty_name": "BioASQ Task C 2017", "bigbio_language": ["English"], "bigbio_license_shortname": "NLM_LICENSE", "homepage": "http://participants-area.bioasq.org/general_information/Task5c/", "bigbio_pubmed": true, "bigbio_public": false, "bigbio_tasks": ["TEXT_CLASSIFICATION"]} | 2022-12-22T15:43:32+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-other #region-us
|
# Dataset Card for BioASQ Task C 2017
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: False
- Tasks: TXTCLASS
The training data set for this task contains annotated biomedical articles
published in PubMed and corresponding full text from PMC. By annotated is meant
that GrantIDs and corresponding Grant Agencies have been identified in the full
text of articles
| [
"# Dataset Card for BioASQ Task C 2017",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: False\n- Tasks: TXTCLASS\n\n\nThe training data set for this task contains annotated biomedical articles\npublished in PubMed and corresponding full text from PMC. By annotated is meant\nthat GrantIDs and corresponding Grant Agencies have been identified in the full\ntext of articles"
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-other #region-us \n",
"# Dataset Card for BioASQ Task C 2017",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: False\n- Tasks: TXTCLASS\n\n\nThe training data set for this task contains annotated biomedical articles\npublished in PubMed and corresponding full text from PMC. By annotated is meant\nthat GrantIDs and corresponding Grant Agencies have been identified in the full\ntext of articles"
] |
7a5de43c8723091a36f549f8569f778fd86560b1 |
# Dataset Card for BioInfer
## Dataset Description
- **Homepage:** https://github.com/metalrt/ppi-dataset
- **Pubmed:** True
- **Public:** True
- **Tasks:** RE,NER
A corpus targeted at protein, gene, and RNA relationships which serves as a
resource for the development of information extraction systems and their
components such as parsers and domain analyzers. Currently, the corpus contains
1100 sentences from abstracts of biomedical research articles annotated for
relationships, named entities, as well as syntactic dependencies.
## Citation Information
```
@article{pyysalo2007bioinfer,
title = {BioInfer: a corpus for information extraction in the biomedical domain},
author = {
    Pyysalo, Sampo and Ginter, Filip and Heimonen, Juho and Bj{\"o}rne, Jari
    and Boberg, Jorma and J{\"a}rvinen, Jouni and Salakoski, Tapio
},
year = 2007,
journal = {BMC bioinformatics},
publisher = {BioMed Central},
volume = 8,
number = 1,
pages = {1--24}
}
```
| bigbio/bioinfer | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-2.0",
"region:us"
] | 2022-11-13T22:06:35+00:00 | {"language": ["en"], "license": "cc-by-2.0", "multilinguality": "monolingual", "pretty_name": "BioInfer", "bigbio_language": ["English"], "bigbio_license_shortname": "CC_BY_2p0", "homepage": "https://github.com/metalrt/ppi-dataset", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["RELATION_EXTRACTION", "NAMED_ENTITY_RECOGNITION"]} | 2022-12-22T15:43:38+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-cc-by-2.0 #region-us
|
# Dataset Card for BioInfer
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: RE,NER
A corpus targeted at protein, gene, and RNA relationships which serves as a
resource for the development of information extraction systems and their
components such as parsers and domain analyzers. Currently, the corpus contains
1100 sentences from abstracts of biomedical research articles annotated for
relationships, named entities, as well as syntactic dependencies.
| [
"# Dataset Card for BioInfer",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: RE,NER\n\n\nA corpus targeted at protein, gene, and RNA relationships which serves as a\nresource for the development of information extraction systems and their\ncomponents such as parsers and domain analyzers. Currently, the corpus contains\n1100 sentences from abstracts of biomedical research articles annotated for\nrelationships, named entities, as well as syntactic dependencies."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-cc-by-2.0 #region-us \n",
"# Dataset Card for BioInfer",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: RE,NER\n\n\nA corpus targeted at protein, gene, and RNA relationships which serves as a\nresource for the development of information extraction systems and their\ncomponents such as parsers and domain analyzers. Currently, the corpus contains\n1100 sentences from abstracts of biomedical research articles annotated for\nrelationships, named entities, as well as syntactic dependencies."
] |
95cbfef08fc668490f91fba0d4163a7a4eb1ed7d |
# Dataset Card for BiologyHowWhyCorpus
## Dataset Description
- **Homepage:** https://allenai.org/data/biology-how-why-corpus
- **Pubmed:** False
- **Public:** True
- **Tasks:** QA
This dataset consists of 185 "how" and 193 "why" biology questions authored by a domain expert, with one or more gold
answer passages identified in an undergraduate textbook. The expert was not constrained in any way during the
annotation process, so gold answers might be smaller than a paragraph or span multiple paragraphs. This dataset was
used for the question-answering system described in the paper “Discourse Complements Lexical Semantics for Non-factoid
Answer Reranking” (ACL 2014).
## Citation Information
```
@inproceedings{jansen-etal-2014-discourse,
title = "Discourse Complements Lexical Semantics for Non-factoid Answer Reranking",
author = "Jansen, Peter and
Surdeanu, Mihai and
Clark, Peter",
booktitle = "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jun,
year = "2014",
address = "Baltimore, Maryland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P14-1092",
doi = "10.3115/v1/P14-1092",
pages = "977--986",
}
```
| bigbio/biology_how_why_corpus | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-11-13T22:06:38+00:00 | {"language": ["en"], "license": "unknown", "multilinguality": "monolingual", "pretty_name": "BiologyHowWhyCorpus", "bigbio_language": ["English"], "bigbio_license_shortname": "UNKNOWN", "homepage": "https://allenai.org/data/biology-how-why-corpus", "bigbio_pubmed": false, "bigbio_public": true, "bigbio_tasks": ["QUESTION_ANSWERING"]} | 2022-12-22T15:43:41+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-unknown #region-us
|
# Dataset Card for BiologyHowWhyCorpus
## Dataset Description
- Homepage: URL
- Pubmed: False
- Public: True
- Tasks: QA
This dataset consists of 185 "how" and 193 "why" biology questions authored by a domain expert, with one or more gold
answer passages identified in an undergraduate textbook. The expert was not constrained in any way during the
annotation process, so gold answers might be smaller than a paragraph or span multiple paragraphs. This dataset was
used for the question-answering system described in the paper “Discourse Complements Lexical Semantics for Non-factoid
Answer Reranking” (ACL 2014).
| [
"# Dataset Card for BiologyHowWhyCorpus",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: QA\n\n\nThis dataset consists of 185 \"how\" and 193 \"why\" biology questions authored by a domain expert, with one or more gold \nanswer passages identified in an undergraduate textbook. The expert was not constrained in any way during the \nannotation process, so gold answers might be smaller than a paragraph or span multiple paragraphs. This dataset was \nused for the question-answering system described in the paper “Discourse Complements Lexical Semantics for Non-factoid \nAnswer Reranking” (ACL 2014)."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-unknown #region-us \n",
"# Dataset Card for BiologyHowWhyCorpus",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: QA\n\n\nThis dataset consists of 185 \"how\" and 193 \"why\" biology questions authored by a domain expert, with one or more gold \nanswer passages identified in an undergraduate textbook. The expert was not constrained in any way during the \nannotation process, so gold answers might be smaller than a paragraph or span multiple paragraphs. This dataset was \nused for the question-answering system described in the paper “Discourse Complements Lexical Semantics for Non-factoid \nAnswer Reranking” (ACL 2014)."
] |
09938b2d42d6e4ecd4d5282657d0bb5f8791950b |
# Dataset Card for BIOMRC
## Dataset Description
- **Homepage:** https://github.com/PetrosStav/BioMRC_code
- **Pubmed:** True
- **Public:** True
- **Tasks:** QA
We introduce BIOMRC, a large-scale cloze-style biomedical MRC dataset. Care was taken to reduce noise, compared to the
previous BIOREAD dataset of Pappas et al. (2018). Experiments show that simple heuristics do not perform well on the
new dataset and that two neural MRC models that had been tested on BIOREAD perform much better on BIOMRC, indicating
that the new dataset is indeed less noisy or at least that its task is more feasible. Non-expert human performance is
also higher on the new dataset compared to BIOREAD, and biomedical experts perform even better. We also introduce a new
BERT-based MRC model, the best version of which substantially outperforms all other methods tested, reaching or
surpassing the accuracy of biomedical experts in some experiments. We make the new dataset available in three different
sizes, also releasing our code, and providing a leaderboard.
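To make the cloze setting concrete, the sketch below scores a fabricated instance with a trivial most-frequent-candidate baseline. The placeholder conventions (@entityN in the passage, XXXX for the masked entity) follow the paper's description, but the exact field names in any loader are assumptions.

```python
# Trivial most-frequent-candidate baseline for a cloze-style MRC instance (fabricated example).
from collections import Counter

instance = {
    "passage": "@entity1 inhibits @entity2 , and @entity1 is overexpressed in tumours .",
    "question": "XXXX is overexpressed in tumours",
    "candidates": ["@entity1", "@entity2"],
    "answer": "@entity1",
}

counts = Counter(tok for tok in instance["passage"].split()
                 if tok in instance["candidates"])
prediction = counts.most_common(1)[0][0]
print(prediction, prediction == instance["answer"])
```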
## Citation Information
```
@inproceedings{pappas-etal-2020-biomrc,
title = "{B}io{MRC}: A Dataset for Biomedical Machine Reading Comprehension",
author = "Pappas, Dimitris and
Stavropoulos, Petros and
Androutsopoulos, Ion and
McDonald, Ryan",
booktitle = "Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.bionlp-1.15",
pages = "140--149",
}
```
| bigbio/biomrc | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-11-13T22:06:42+00:00 | {"language": ["en"], "license": "unknown", "multilinguality": "monolingual", "pretty_name": "BIOMRC", "bigbio_language": ["English"], "bigbio_license_shortname": "UNKNOWN", "homepage": "https://github.com/PetrosStav/BioMRC_code", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["QUESTION_ANSWERING"]} | 2022-12-22T15:43:44+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-unknown #region-us
|
# Dataset Card for BIOMRC
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: QA
We introduce BIOMRC, a large-scale cloze-style biomedical MRC dataset. Care was taken to reduce noise, compared to the
previous BIOREAD dataset of Pappas et al. (2018). Experiments show that simple heuristics do not perform well on the
new dataset and that two neural MRC models that had been tested on BIOREAD perform much better on BIOMRC, indicating
that the new dataset is indeed less noisy or at least that its task is more feasible. Non-expert human performance is
also higher on the new dataset compared to BIOREAD, and biomedical experts perform even better. We also introduce a new
BERT-based MRC model, the best version of which substantially outperforms all other methods tested, reaching or
surpassing the accuracy of biomedical experts in some experiments. We make the new dataset available in three different
sizes, also releasing our code, and providing a leaderboard.
| [
"# Dataset Card for BIOMRC",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: QA\n\n\nWe introduce BIOMRC, a large-scale cloze-style biomedical MRC dataset. Care was taken to reduce noise, compared to the\nprevious BIOREAD dataset of Pappas et al. (2018). Experiments show that simple heuristics do not perform well on the\nnew dataset and that two neural MRC models that had been tested on BIOREAD perform much better on BIOMRC, indicating\nthat the new dataset is indeed less noisy or at least that its task is more feasible. Non-expert human performance is\nalso higher on the new dataset compared to BIOREAD, and biomedical experts perform even better. We also introduce a new\nBERT-based MRC model, the best version of which substantially outperforms all other methods tested, reaching or\nsurpassing the accuracy of biomedical experts in some experiments. We make the new dataset available in three different\nsizes, also releasing our code, and providing a leaderboard."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-unknown #region-us \n",
"# Dataset Card for BIOMRC",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: QA\n\n\nWe introduce BIOMRC, a large-scale cloze-style biomedical MRC dataset. Care was taken to reduce noise, compared to the\nprevious BIOREAD dataset of Pappas et al. (2018). Experiments show that simple heuristics do not perform well on the\nnew dataset and that two neural MRC models that had been tested on BIOREAD perform much better on BIOMRC, indicating\nthat the new dataset is indeed less noisy or at least that its task is more feasible. Non-expert human performance is\nalso higher on the new dataset compared to BIOREAD, and biomedical experts perform even better. We also introduce a new\nBERT-based MRC model, the best version of which substantially outperforms all other methods tested, reaching or\nsurpassing the accuracy of biomedical experts in some experiments. We make the new dataset available in three different\nsizes, also releasing our code, and providing a leaderboard."
] |
6482ae67be1476804b35ca417915952d7806852a |
# Dataset Card for BioNLP 2009
## Dataset Description
- **Homepage:** http://www.geniaproject.org/shared-tasks/bionlp-shared-task-2009
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,EE,COREF
The BioNLP Shared Task 2009 was organized by GENIA Project and its corpora were curated based
on the annotations of the publicly available GENIA Event corpus and an unreleased (blind) section
of the GENIA Event corpus annotations, used for evaluation.
## Citation Information
```
@inproceedings{kim-etal-2009-overview,
title = "Overview of {B}io{NLP}{'}09 Shared Task on Event Extraction",
author = "Kim, Jin-Dong and
Ohta, Tomoko and
Pyysalo, Sampo and
Kano, Yoshinobu and
Tsujii, Jun{'}ichi",
booktitle = "Proceedings of the {B}io{NLP} 2009 Workshop Companion Volume for Shared Task",
month = jun,
year = "2009",
address = "Boulder, Colorado",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W09-1401",
pages = "1--9",
}
```
| bigbio/bionlp_shared_task_2009 | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-11-13T22:06:45+00:00 | {"language": ["en"], "license": "other", "multilinguality": "monolingual", "pretty_name": "BioNLP 2009", "bigbio_language": ["English"], "bigbio_license_shortname": "GENIA_PROJECT_LICENSE", "homepage": "http://www.geniaproject.org/shared-tasks/bionlp-shared-task-2009", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "EVENT_EXTRACTION", "COREFERENCE_RESOLUTION"]} | 2022-12-22T15:43:48+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-other #region-us
|
# Dataset Card for BioNLP 2009
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NER,EE,COREF
The BioNLP Shared Task 2009 was organized by GENIA Project and its corpora were curated based
on the annotations of the publicly available GENIA Event corpus and an unreleased (blind) section
of the GENIA Event corpus annotations, used for evaluation.
| [
"# Dataset Card for BioNLP 2009",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,EE,COREF\n\n\nThe BioNLP Shared Task 2009 was organized by GENIA Project and its corpora were curated based\non the annotations of the publicly available GENIA Event corpus and an unreleased (blind) section\nof the GENIA Event corpus annotations, used for evaluation."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-other #region-us \n",
"# Dataset Card for BioNLP 2009",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,EE,COREF\n\n\nThe BioNLP Shared Task 2009 was organized by GENIA Project and its corpora were curated based\non the annotations of the publicly available GENIA Event corpus and an unreleased (blind) section\nof the GENIA Event corpus annotations, used for evaluation."
] |
e70d21156918fd00eed1f61f6cdb19c91f258043 |
# Dataset Card for BioNLP 2011 EPI
## Dataset Description
- **Homepage:** https://github.com/openbiocorpora/bionlp-st-2011-epi
- **Pubmed:** True
- **Public:** True
- **Tasks:** EE,NER,COREF
The dataset of the Epigenetics and Post-translational Modifications (EPI) task
of BioNLP Shared Task 2011.
## Citation Information
```
@inproceedings{ohta-etal-2011-overview,
title = "Overview of the Epigenetics and Post-translational
Modifications ({EPI}) task of {B}io{NLP} Shared Task 2011",
author = "Ohta, Tomoko and
Pyysalo, Sampo and
Tsujii, Jun{'}ichi",
booktitle = "Proceedings of {B}io{NLP} Shared Task 2011 Workshop",
month = jun,
year = "2011",
address = "Portland, Oregon, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W11-1803",
pages = "16--25",
}
```
| bigbio/bionlp_st_2011_epi | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-11-13T22:06:49+00:00 | {"language": ["en"], "license": "other", "multilinguality": "monolingual", "pretty_name": "BioNLP 2011 EPI", "bigbio_language": ["English"], "bigbio_license_shortname": "GENIA_PROJECT_LICENSE", "homepage": "https://github.com/openbiocorpora/bionlp-st-2011-epi", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["EVENT_EXTRACTION", "NAMED_ENTITY_RECOGNITION", "COREFERENCE_RESOLUTION"]} | 2022-12-22T15:43:49+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-other #region-us
|
# Dataset Card for BioNLP 2011 EPI
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: EE,NER,COREF
The dataset of the Epigenetics and Post-translational Modifications (EPI) task
of BioNLP Shared Task 2011.
| [
"# Dataset Card for BioNLP 2011 EPI",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: EE,NER,COREF\n\n\nThe dataset of the Epigenetics and Post-translational Modifications (EPI) task\nof BioNLP Shared Task 2011."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-other #region-us \n",
"# Dataset Card for BioNLP 2011 EPI",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: EE,NER,COREF\n\n\nThe dataset of the Epigenetics and Post-translational Modifications (EPI) task\nof BioNLP Shared Task 2011."
] |
d390829665dd6f5038d1073ecdd10d32c50544ff |
# Dataset Card for BioNLP 2011 GE
## Dataset Description
- **Homepage:** https://sites.google.com/site/bionlpst/bionlp-shared-task-2011/genia-event-extraction-genia
- **Pubmed:** True
- **Public:** True
- **Tasks:** EE,NER,COREF
The BioNLP-ST GE task has been promoting development of fine-grained information extraction (IE) from biomedical
documents, since 2009. Particularly, it has focused on the domain of NFkB as a model domain of Biomedical IE.
The GENIA task aims at extracting events occurring upon genes or gene products, which are typed as "Protein"
without differentiating genes from gene products. Other types of physical entities, e.g. cells, cell components,
are not differentiated from each other, and their type is given as "Entity".
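The original GENIA shared-task releases distribute their annotations in brat-style standoff files (.a1 for the given Protein entities, .a2 for triggers and events); the sketch below parses a toy fragment of that format. It is illustrative only and independent of any particular loader.

```python
# Sketch of parsing brat-style standoff annotations (toy .a2 fragment, not real task data).
a2_text = """T15\tGene_expression 132 142\texpression
E3\tGene_expression:T15 Theme:T4
"""

entities, events = {}, []
for line in a2_text.strip().splitlines():
    fields = line.split("\t")
    if fields[0].startswith("T"):                 # text-bound annotation: type, offsets, surface form
        ann_type, start, end = fields[1].split()
        entities[fields[0]] = (ann_type, int(start), int(end), fields[2])
    elif fields[0].startswith("E"):               # event annotation: trigger plus role:ID arguments
        trigger, *args = fields[1].split()
        events.append((fields[0], trigger, dict(a.split(":") for a in args)))

print(entities)
print(events)
```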
## Citation Information
```
@inproceedings{10.5555/2107691.2107693,
author = {Kim, Jin-Dong and Wang, Yue and Takagi, Toshihisa and Yonezawa, Akinori},
title = {Overview of Genia Event Task in BioNLP Shared Task 2011},
year = {2011},
isbn = {9781937284091},
publisher = {Association for Computational Linguistics},
address = {USA},
abstract = {The Genia event task, a bio-molecular event extraction task,
is arranged as one of the main tasks of BioNLP Shared Task 2011.
As its second time to be arranged for community-wide focused
efforts, it aimed to measure the advance of the community since 2009,
and to evaluate generalization of the technology to full text papers.
After a 3-month system development period, 15 teams submitted their
performance results on test cases. The results show the community has
made a significant advancement in terms of both performance improvement
and generalization.},
booktitle = {Proceedings of the BioNLP Shared Task 2011 Workshop},
pages = {7–15},
numpages = {9},
location = {Portland, Oregon},
series = {BioNLP Shared Task '11}
}
```
| bigbio/bionlp_st_2011_ge | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-3.0",
"region:us"
] | 2022-11-13T22:06:52+00:00 | {"language": ["en"], "license": "cc-by-3.0", "multilinguality": "monolingual", "pretty_name": "BioNLP 2011 GE", "bigbio_language": ["English"], "bigbio_license_shortname": "CC_BY_3p0", "homepage": "https://sites.google.com/site/bionlpst/bionlp-shared-task-2011/genia-event-extraction-genia", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["EVENT_EXTRACTION", "NAMED_ENTITY_RECOGNITION", "COREFERENCE_RESOLUTION"]} | 2022-12-22T15:43:51+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-cc-by-3.0 #region-us
|
# Dataset Card for BioNLP 2011 GE
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: EE,NER,COREF
The BioNLP-ST GE task has been promoting development of fine-grained information extraction (IE) from biomedical
documents, since 2009. Particularly, it has focused on the domain of NFkB as a model domain of Biomedical IE.
The GENIA task aims at extracting events occurring upon genes or gene products, which are typed as "Protein"
without differentiating genes from gene products. Other types of physical entities, e.g. cells, cell components,
are not differentiated from each other, and their type is given as "Entity".
| [
"# Dataset Card for BioNLP 2011 GE",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: EE,NER,COREF\n\n\nThe BioNLP-ST GE task has been promoting development of fine-grained information extraction (IE) from biomedical\ndocuments, since 2009. Particularly, it has focused on the domain of NFkB as a model domain of Biomedical IE.\nThe GENIA task aims at extracting events occurring upon genes or gene products, which are typed as \"Protein\"\nwithout differentiating genes from gene products. Other types of physical entities, e.g. cells, cell components,\nare not differentiated from each other, and their type is given as \"Entity\"."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-cc-by-3.0 #region-us \n",
"# Dataset Card for BioNLP 2011 GE",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: EE,NER,COREF\n\n\nThe BioNLP-ST GE task has been promoting development of fine-grained information extraction (IE) from biomedical\ndocuments, since 2009. Particularly, it has focused on the domain of NFkB as a model domain of Biomedical IE.\nThe GENIA task aims at extracting events occurring upon genes or gene products, which are typed as \"Protein\"\nwithout differentiating genes from gene products. Other types of physical entities, e.g. cells, cell components,\nare not differentiated from each other, and their type is given as \"Entity\"."
] |
8687f2eb6ea7c3291ca64e38280c09bc81a8c644 |
# Dataset Card for BioNLP 2011 ID
## Dataset Description
- **Homepage:** https://github.com/openbiocorpora/bionlp-st-2011-id
- **Pubmed:** True
- **Public:** True
- **Tasks:** EE,COREF,NER
The dataset of the Infectious Diseases (ID) task of
BioNLP Shared Task 2011.
## Citation Information
```
@inproceedings{pyysalo-etal-2011-overview,
title = "Overview of the Infectious Diseases ({ID}) task of {B}io{NLP} Shared Task 2011",
author = "Pyysalo, Sampo and
Ohta, Tomoko and
Rak, Rafal and
Sullivan, Dan and
Mao, Chunhong and
Wang, Chunxia and
Sobral, Bruno and
Tsujii, Jun{'}ichi and
Ananiadou, Sophia",
booktitle = "Proceedings of {B}io{NLP} Shared Task 2011 Workshop",
month = jun,
year = "2011",
address = "Portland, Oregon, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W11-1804",
pages = "26--35",
}
```
| bigbio/bionlp_st_2011_id | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-11-13T22:06:56+00:00 | {"language": ["en"], "license": "other", "multilinguality": "monolingual", "pretty_name": "BioNLP 2011 ID", "bigbio_language": ["English"], "bigbio_license_shortname": "GENIA_PROJECT_LICENSE", "homepage": "https://github.com/openbiocorpora/bionlp-st-2011-id", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["EVENT_EXTRACTION", "COREFERENCE_RESOLUTION", "NAMED_ENTITY_RECOGNITION"]} | 2022-12-22T15:43:52+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-other #region-us
|
# Dataset Card for BioNLP 2011 ID
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: EE,COREF,NER
The dataset of the Infectious Diseases (ID) task of
BioNLP Shared Task 2011.
| [
"# Dataset Card for BioNLP 2011 ID",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: EE,COREF,NER\n\n\nThe dataset of the Infectious Diseases (ID) task of\nBioNLP Shared Task 2011."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-other #region-us \n",
"# Dataset Card for BioNLP 2011 ID",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: EE,COREF,NER\n\n\nThe dataset of the Infectious Diseases (ID) task of\nBioNLP Shared Task 2011."
] |
f0e67fb59c3ef293c2bae36b5524d7825f22f815 |
# Dataset Card for BioNLP 2011 REL
## Dataset Description
- **Homepage:** https://github.com/openbiocorpora/bionlp-st-2011-rel
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,RE,COREF
The Entity Relations (REL) task is a supporting task of the BioNLP Shared Task 2011.
The task concerns the extraction of two types of part-of relations between a
gene/protein and an associated entity.
## Citation Information
```
@inproceedings{10.5555/2107691.2107703,
author = {Pyysalo, Sampo and Ohta, Tomoko and Tsujii, Jun'ichi},
title = {Overview of the Entity Relations (REL) Supporting Task of BioNLP Shared Task 2011},
year = {2011},
isbn = {9781937284091},
publisher = {Association for Computational Linguistics},
address = {USA},
abstract = {This paper presents the Entity Relations (REL) task,
a supporting task of the BioNLP Shared Task 2011. The task concerns
the extraction of two types of part-of relations between a gene/protein
and an associated entity. Four teams submitted final results for
the REL task, with the highest-performing system achieving 57.7%
F-score. While experiments suggest use of the data can help improve
event extraction performance, the task data has so far received only
limited use in support of event extraction. The REL task continues
as an open challenge, with all resources available from the shared
task website.},
booktitle = {Proceedings of the BioNLP Shared Task 2011 Workshop},
pages = {83–88},
numpages = {6},
location = {Portland, Oregon},
series = {BioNLP Shared Task '11}
}
```
| bigbio/bionlp_st_2011_rel | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-11-13T22:06:59+00:00 | {"language": ["en"], "license": "other", "multilinguality": "monolingual", "pretty_name": "BioNLP 2011 REL", "bigbio_language": ["English"], "bigbio_license_shortname": "GENIA_PROJECT_LICENSE", "homepage": "https://github.com/openbiocorpora/bionlp-st-2011-rel", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "RELATION_EXTRACTION", "COREFERENCE_RESOLUTION"]} | 2022-12-22T15:43:54+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-other #region-us
|
# Dataset Card for BioNLP 2011 REL
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NER,RE,COREF
The Entity Relations (REL) task is a supporting task of the BioNLP Shared Task 2011.
The task concerns the extraction of two types of part-of relations between a
gene/protein and an associated entity.
| [
"# Dataset Card for BioNLP 2011 REL",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,RE,COREF\n\n\nThe Entity Relations (REL) task is a supporting task of the BioNLP Shared Task 2011.\nThe task concerns the extraction of two types of part-of relations between a\ngene/protein and an associated entity."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-other #region-us \n",
"# Dataset Card for BioNLP 2011 REL",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,RE,COREF\n\n\nThe Entity Relations (REL) task is a supporting task of the BioNLP Shared Task 2011.\nThe task concerns the extraction of two types of part-of relations between a\ngene/protein and an associated entity."
] |
0a65caf6fb10c3bd6a0d032546109ce3ce3fb1ba |
# Dataset Card for BioNLP 2013 CG
## Dataset Description
- **Homepage:** https://github.com/openbiocorpora/bionlp-st-2013-cg
- **Pubmed:** True
- **Public:** True
- **Tasks:** EE,NER,COREF
The Cancer Genetics (CG) task is an event extraction task and a main task of the BioNLP Shared Task (ST) 2013.
The CG task is an information extraction task targeting the recognition of events in text,
represented as structured n-ary associations of given physical entities. In addition to
addressing the cancer domain, the CG task is differentiated from previous event extraction
tasks in the BioNLP ST series in addressing a wide range of pathological processes and multiple
levels of biological organization, ranging from the molecular through the cellular and organ
levels up to whole organisms. Final test set submissions were accepted from six teams
## Citation Information
```
@inproceedings{pyysalo-etal-2013-overview,
title = "Overview of the Cancer Genetics ({CG}) task of {B}io{NLP} Shared Task 2013",
author = "Pyysalo, Sampo and
Ohta, Tomoko and
Ananiadou, Sophia",
booktitle = "Proceedings of the {B}io{NLP} Shared Task 2013 Workshop",
month = aug,
year = "2013",
address = "Sofia, Bulgaria",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W13-2008",
pages = "58--66",
}
```
| bigbio/bionlp_st_2013_cg | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-11-13T22:07:03+00:00 | {"language": ["en"], "license": "other", "multilinguality": "monolingual", "pretty_name": "BioNLP 2013 CG", "bigbio_language": ["English"], "bigbio_license_shortname": "GENIA_PROJECT_LICENSE", "homepage": "https://github.com/openbiocorpora/bionlp-st-2013-cg", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["EVENT_EXTRACTION", "NAMED_ENTITY_RECOGNITION", "COREFERENCE_RESOLUTION"]} | 2022-12-22T15:43:57+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-other #region-us
|
# Dataset Card for BioNLP 2013 CG
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: EE,NER,COREF
the Cancer Genetics (CG) is a event extraction task and a main task of the BioNLP Shared Task (ST) 2013.
The CG task is an information extraction task targeting the recognition of events in text,
represented as structured n-ary associations of given physical entities. In addition to
addressing the cancer domain, the CG task is differentiated from previous event extraction
tasks in the BioNLP ST series in addressing a wide range of pathological processes and multiple
levels of biological organization, ranging from the molecular through the cellular and organ
levels up to whole organisms. Final test set submissions were accepted from six teams
| [
"# Dataset Card for BioNLP 2013 CG",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: EE,NER,COREF\n\n\nthe Cancer Genetics (CG) is a event extraction task and a main task of the BioNLP Shared Task (ST) 2013.\nThe CG task is an information extraction task targeting the recognition of events in text,\nrepresented as structured n-ary associations of given physical entities. In addition to\naddressing the cancer domain, the CG task is differentiated from previous event extraction\ntasks in the BioNLP ST series in addressing a wide range of pathological processes and multiple\nlevels of biological organization, ranging from the molecular through the cellular and organ\nlevels up to whole organisms. Final test set submissions were accepted from six teams"
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-other #region-us \n",
"# Dataset Card for BioNLP 2013 CG",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: EE,NER,COREF\n\n\nthe Cancer Genetics (CG) is a event extraction task and a main task of the BioNLP Shared Task (ST) 2013.\nThe CG task is an information extraction task targeting the recognition of events in text,\nrepresented as structured n-ary associations of given physical entities. In addition to\naddressing the cancer domain, the CG task is differentiated from previous event extraction\ntasks in the BioNLP ST series in addressing a wide range of pathological processes and multiple\nlevels of biological organization, ranging from the molecular through the cellular and organ\nlevels up to whole organisms. Final test set submissions were accepted from six teams"
] |
6ffdbf6b70599d3613e8122ce6a459d51e5de61a |
# Dataset Card for BioNLP 2013 GE
## Dataset Description
- **Homepage:** https://github.com/openbiocorpora/bionlp-st-2013-ge
- **Pubmed:** True
- **Public:** True
- **Tasks:** EE,NER,RE,COREF
The BioNLP-ST GE task has been promoting development of fine-grained
information extraction (IE) from biomedical
documents, since 2009. Particularly, it has focused on the domain of
NFkB as a model domain of Biomedical IE
## Citation Information
```
@inproceedings{kim-etal-2013-genia,
title = "The {G}enia Event Extraction Shared Task, 2013 Edition - Overview",
author = "Kim, Jin-Dong and
Wang, Yue and
Yasunori, Yamamoto",
booktitle = "Proceedings of the {B}io{NLP} Shared Task 2013 Workshop",
month = aug,
year = "2013",
address = "Sofia, Bulgaria",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W13-2002",
pages = "8--15",
}
```
| bigbio/bionlp_st_2013_ge | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-11-13T22:07:06+00:00 | {"language": ["en"], "license": "other", "multilinguality": "monolingual", "pretty_name": "BioNLP 2013 GE", "bigbio_language": ["English"], "bigbio_license_shortname": "GENIA_PROJECT_LICENSE", "homepage": "https://github.com/openbiocorpora/bionlp-st-2013-ge", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["EVENT_EXTRACTION", "NAMED_ENTITY_RECOGNITION", "RELATION_EXTRACTION", "COREFERENCE_RESOLUTION"]} | 2022-12-22T15:43:59+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-other #region-us
|
# Dataset Card for BioNLP 2013 GE
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: EE,NER,RE,COREF
The BioNLP-ST GE task has been promoting development of fine-grained
information extraction (IE) from biomedical
documents, since 2009. Particularly, it has focused on the domain of
NFkB as a model domain of Biomedical IE
| [
"# Dataset Card for BioNLP 2013 GE",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: EE,NER,RE,COREF\n\n\nThe BioNLP-ST GE task has been promoting development of fine-grained\ninformation extraction (IE) from biomedical\ndocuments, since 2009. Particularly, it has focused on the domain of\nNFkB as a model domain of Biomedical IE"
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-other #region-us \n",
"# Dataset Card for BioNLP 2013 GE",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: EE,NER,RE,COREF\n\n\nThe BioNLP-ST GE task has been promoting development of fine-grained\ninformation extraction (IE) from biomedical\ndocuments, since 2009. Particularly, it has focused on the domain of\nNFkB as a model domain of Biomedical IE"
] |
1f8f9ba929e063e4ca55c03667a875183d83f50a |
# Dataset Card for BioNLP 2013 GRO
## Dataset Description
- **Homepage:** https://github.com/openbiocorpora/bionlp-st-2013-gro
- **Pubmed:** True
- **Public:** True
- **Tasks:** EE,NER,RE
GRO Task: Populating the Gene Regulation Ontology with events and
relations. A data set from the bio NLP shared tasks competition from 2013
## Citation Information
```
@inproceedings{kim-etal-2013-gro,
title = "{GRO} Task: Populating the Gene Regulation Ontology with events and relations",
author = "Kim, Jung-jae and
Han, Xu and
Lee, Vivian and
Rebholz-Schuhmann, Dietrich",
booktitle = "Proceedings of the {B}io{NLP} Shared Task 2013 Workshop",
month = aug,
year = "2013",
address = "Sofia, Bulgaria",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W13-2007",
pages = "50--57",
}
```
| bigbio/bionlp_st_2013_gro | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-11-13T22:07:10+00:00 | {"language": ["en"], "license": "other", "multilinguality": "monolingual", "pretty_name": "BioNLP 2013 GRO", "bigbio_language": ["English"], "bigbio_license_shortname": "GENIA_PROJECT_LICENSE", "homepage": "https://github.com/openbiocorpora/bionlp-st-2013-gro", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["EVENT_EXTRACTION", "NAMED_ENTITY_RECOGNITION", "RELATION_EXTRACTION"]} | 2022-12-22T15:44:01+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-other #region-us
|
# Dataset Card for BioNLP 2013 GRO
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: EE,NER,RE
GRO Task: Populating the Gene Regulation Ontology with events and
relations. A data set from the bio NLP shared tasks competition from 2013
| [
"# Dataset Card for BioNLP 2013 GRO",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: EE,NER,RE\n\n\nGRO Task: Populating the Gene Regulation Ontology with events and\nrelations. A data set from the bio NLP shared tasks competition from 2013"
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-other #region-us \n",
"# Dataset Card for BioNLP 2013 GRO",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: EE,NER,RE\n\n\nGRO Task: Populating the Gene Regulation Ontology with events and\nrelations. A data set from the bio NLP shared tasks competition from 2013"
] |
57eacf12bd690e8490116a32d0463988c2313d47 |
# Dataset Card for BioNLP 2013 PC
## Dataset Description
- **Homepage:** https://github.com/openbiocorpora/bionlp-st-2013-pc
- **Pubmed:** True
- **Public:** True
- **Tasks:** EE,NER,COREF
The Pathway Curation (PC) task is a main event extraction task of the BioNLP Shared Task (ST) 2013.
The PC task concerns the automatic extraction of biomolecular reactions from text.
The task setting, representation and semantics are defined with respect to pathway
model standards and ontologies (SBML, BioPAX, SBO) and documents selected by relevance
to specific model reactions. Two BioNLP ST 2013 participants successfully completed
the PC task. The highest achieved F-score, 52.8%, indicates that event extraction is
a promising approach to supporting pathway curation efforts.
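A minimal usage sketch (not part of the original task description): the harmonized BigBio release can be loaded with the Hugging Face `datasets` library to inspect the event annotations. The configuration name, the `trust_remote_code` flag and the field names follow the usual BigBio conventions (a `*_bigbio_kb` schema with `passages`, `entities`, `events`, `relations`) and should be treated as assumptions rather than guarantees.
```python
# Sketch: tally event types in the training split of the assumed BigBio KB view.
from collections import Counter

from datasets import load_dataset

ds = load_dataset(
    "bigbio/bionlp_st_2013_pc",
    name="bionlp_st_2013_pc_bigbio_kb",  # assumed BigBio config name
    trust_remote_code=True,              # recent `datasets` versions need this for script-based loaders
)

event_types = Counter(event["type"] for doc in ds["train"] for event in doc["events"])
print(event_types.most_common(10))
```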
## Citation Information
```
@inproceedings{ohta-etal-2013-overview,
title = "Overview of the Pathway Curation ({PC}) task of {B}io{NLP} Shared Task 2013",
author = "Ohta, Tomoko and
Pyysalo, Sampo and
Rak, Rafal and
Rowley, Andrew and
Chun, Hong-Woo and
Jung, Sung-Jae and
Choi, Sung-Pil and
Ananiadou, Sophia and
Tsujii, Jun{'}ichi",
booktitle = "Proceedings of the {B}io{NLP} Shared Task 2013 Workshop",
month = aug,
year = "2013",
address = "Sofia, Bulgaria",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W13-2009",
pages = "67--75",
}
```
| bigbio/bionlp_st_2013_pc | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-11-13T22:07:14+00:00 | {"language": ["en"], "license": "other", "multilinguality": "monolingual", "pretty_name": "BioNLP 2013 PC", "bigbio_language": ["English"], "bigbio_license_shortname": "GENIA_PROJECT_LICENSE", "homepage": "https://github.com/openbiocorpora/bionlp-st-2013-pc", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["EVENT_EXTRACTION", "NAMED_ENTITY_RECOGNITION", "COREFERENCE_RESOLUTION"]} | 2022-12-22T15:44:03+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-other #region-us
|
# Dataset Card for BioNLP 2013 PC
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: EE,NER,COREF
The Pathway Curation (PC) task is a main event extraction task of the BioNLP Shared Task (ST) 2013.
The PC task concerns the automatic extraction of biomolecular reactions from text.
The task setting, representation and semantics are defined with respect to pathway
model standards and ontologies (SBML, BioPAX, SBO) and documents selected by relevance
to specific model reactions. Two BioNLP ST 2013 participants successfully completed
the PC task. The highest achieved F-score, 52.8%, indicates that event extraction is
a promising approach to supporting pathway curation efforts.
| [
"# Dataset Card for BioNLP 2013 PC",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: EE,NER,COREF\n\n\nthe Pathway Curation (PC) task is a main event extraction task of the BioNLP shared task (ST) 2013.\nThe PC task concerns the automatic extraction of biomolecular reactions from text.\nThe task setting, representation and semantics are defined with respect to pathway\nmodel standards and ontologies (SBML, BioPAX, SBO) and documents selected by relevance\nto specific model reactions. Two BioNLP ST 2013 participants successfully completed\nthe PC task. The highest achieved F-score, 52.8%, indicates that event extraction is\na promising approach to supporting pathway curation efforts."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-other #region-us \n",
"# Dataset Card for BioNLP 2013 PC",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: EE,NER,COREF\n\n\nthe Pathway Curation (PC) task is a main event extraction task of the BioNLP shared task (ST) 2013.\nThe PC task concerns the automatic extraction of biomolecular reactions from text.\nThe task setting, representation and semantics are defined with respect to pathway\nmodel standards and ontologies (SBML, BioPAX, SBO) and documents selected by relevance\nto specific model reactions. Two BioNLP ST 2013 participants successfully completed\nthe PC task. The highest achieved F-score, 52.8%, indicates that event extraction is\na promising approach to supporting pathway curation efforts."
] |
3b213bd304cc94a73174215c8b1a4548606da669 |
# Dataset Card for BioNLP 2019 BB
## Dataset Description
- **Homepage:** https://sites.google.com/view/bb-2019/dataset
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,NED,RE
The task focuses on the extraction of the locations and phenotypes of
microorganisms from PubMed abstracts and full-text excerpts, and the
characterization of these entities with respect to reference knowledge
sources (NCBI taxonomy, OntoBiotope ontology). The task is motivated by
the importance of the knowledge on biodiversity for fundamental research
and applications in microbiology.
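A hedged usage sketch (not part of the shared task description): the entity normalizations to the reference knowledge sources can be inspected through the harmonized BigBio view; the configuration name and field names below are assumptions based on the usual BigBio layout.
```python
# Sketch: print mentions with their normalized identifiers (e.g. NCBI taxonomy IDs
# or OntoBiotope concepts), assuming the BigBio KB schema.
from datasets import load_dataset

ds = load_dataset(
    "bigbio/bionlp_st_2019_bb",
    name="bionlp_st_2019_bb_bigbio_kb",  # assumed config name
    trust_remote_code=True,
)

doc = ds["train"][0]
for entity in doc["entities"]:
    for norm in entity["normalized"]:
        print(entity["type"], entity["text"], "->", norm["db_name"], norm["db_id"])
```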
## Citation Information
```
@inproceedings{bossy-etal-2019-bacteria,
title = "Bacteria Biotope at {B}io{NLP} Open Shared Tasks 2019",
author = "Bossy, Robert and
Del{\'e}ger, Louise and
Chaix, Estelle and
Ba, Mouhamadou and
N{\'e}dellec, Claire",
booktitle = "Proceedings of The 5th Workshop on BioNLP Open Shared Tasks",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D19-5719",
doi = "10.18653/v1/D19-5719",
pages = "121--131",
abstract = "This paper presents the fourth edition of the Bacteria
Biotope task at BioNLP Open Shared Tasks 2019. The task focuses on
the extraction of the locations and phenotypes of microorganisms
from PubMed abstracts and full-text excerpts, and the characterization
of these entities with respect to reference knowledge sources (NCBI
taxonomy, OntoBiotope ontology). The task is motivated by the importance
of the knowledge on biodiversity for fundamental research and applications
in microbiology. The paper describes the different proposed subtasks, the
corpus characteristics, and the challenge organization. We also provide an
analysis of the results obtained by participants, and inspect the evolution
of the results since the last edition in 2016.",
}
```
| bigbio/bionlp_st_2019_bb | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-11-13T22:07:17+00:00 | {"language": ["en"], "license": "unknown", "multilinguality": "monolingual", "pretty_name": "BioNLP 2019 BB", "bigbio_language": ["English"], "bigbio_license_shortname": "UNKNOWN", "homepage": "https://sites.google.com/view/bb-2019/dataset", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "NAMED_ENTITY_DISAMBIGUATION", "RELATION_EXTRACTION"]} | 2022-12-22T15:44:04+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-unknown #region-us
|
# Dataset Card for BioNLP 2019 BB
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NER,NED,RE
The task focuses on the extraction of the locations and phenotypes of
microorganisms from PubMed abstracts and full-text excerpts, and the
characterization of these entities with respect to reference knowledge
sources (NCBI taxonomy, OntoBiotope ontology). The task is motivated by
the importance of the knowledge on biodiversity for fundamental research
and applications in microbiology.
| [
"# Dataset Card for BioNLP 2019 BB",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,NED,RE\n\n\nThe task focuses on the extraction of the locations and phenotypes of\nmicroorganisms from PubMed abstracts and full-text excerpts, and the\ncharacterization of these entities with respect to reference knowledge\nsources (NCBI taxonomy, OntoBiotope ontology). The task is motivated by\nthe importance of the knowledge on biodiversity for fundamental research\nand applications in microbiology."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-unknown #region-us \n",
"# Dataset Card for BioNLP 2019 BB",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,NED,RE\n\n\nThe task focuses on the extraction of the locations and phenotypes of\nmicroorganisms from PubMed abstracts and full-text excerpts, and the\ncharacterization of these entities with respect to reference knowledge\nsources (NCBI taxonomy, OntoBiotope ontology). The task is motivated by\nthe importance of the knowledge on biodiversity for fundamental research\nand applications in microbiology."
] |
276d3b7a02894b41649dd62541c1418e49deb7ee |
# Dataset Card for BioRED
## Dataset Description
- **Homepage:** https://ftp.ncbi.nlm.nih.gov/pub/lu/BioRED/
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,RE
Relation Extraction corpus with multiple entity types (e.g., gene/protein,
disease, chemical) and relation pairs (e.g., gene-disease; chemical-chemical),
on a set of 600 PubMed articles
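A small exploration sketch (not from the original card): counting entity and relation types over the training abstracts gives a quick picture of the annotation inventory. The configuration and field names below are assumptions following the BigBio KB convention.
```python
# Sketch: summarize the entity-type and relation-type inventory of the train split.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("bigbio/biored", name="biored_bigbio_kb", trust_remote_code=True)

train = ds["train"]
print(Counter(ent["type"] for doc in train for ent in doc["entities"]).most_common())
print(Counter(rel["type"] for doc in train for rel in doc["relations"]).most_common())
```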
## Citation Information
```
@article{DBLP:journals/corr/abs-2204-04263,
author = {Ling Luo and
Po{-}Ting Lai and
Chih{-}Hsuan Wei and
Cecilia N. Arighi and
Zhiyong Lu},
title = {BioRED: {A} Comprehensive Biomedical Relation Extraction Dataset},
journal = {CoRR},
volume = {abs/2204.04263},
year = {2022},
url = {https://doi.org/10.48550/arXiv.2204.04263},
doi = {10.48550/arXiv.2204.04263},
eprinttype = {arXiv},
eprint = {2204.04263},
timestamp = {Wed, 11 May 2022 15:24:37 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2204-04263.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| bigbio/biored | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"arxiv:2204.04263",
"region:us"
] | 2022-11-13T22:07:21+00:00 | {"language": ["en"], "license": "unknown", "multilinguality": "monolingual", "pretty_name": "BioRED", "bigbio_language": ["English"], "bigbio_license_shortname": "UNKNOWN", "homepage": "https://ftp.ncbi.nlm.nih.gov/pub/lu/BioRED/", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "RELATION_EXTRACTION"]} | 2023-01-12T05:54:49+00:00 | [
"2204.04263"
] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-unknown #arxiv-2204.04263 #region-us
|
# Dataset Card for BioRED
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NER,RE
Relation Extraction corpus with multiple entity types (e.g., gene/protein,
disease, chemical) and relation pairs (e.g., gene-disease; chemical-chemical),
on a set of 600 PubMed articles
| [
"# Dataset Card for BioRED",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,RE\n\n\nRelation Extraction corpus with multiple entity types (e.g., gene/protein,\ndisease, chemical) and relation pairs (e.g., gene-disease; chemical-chemical),\non a set of 600 PubMed articles"
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-unknown #arxiv-2204.04263 #region-us \n",
"# Dataset Card for BioRED",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,RE\n\n\nRelation Extraction corpus with multiple entity types (e.g., gene/protein,\ndisease, chemical) and relation pairs (e.g., gene-disease; chemical-chemical),\non a set of 600 PubMed articles"
] |
1d8844cb1a265ef7037a65482d32b674ba8364fa |
# Dataset Card for BioRelEx
## Dataset Description
- **Homepage:** https://github.com/YerevaNN/BioRelEx
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,NED,RE,COREF
BioRelEx is a biological relation extraction dataset. Version 1.0 contains 2,010
annotated sentences that describe binding interactions between various
biological entities (proteins, chemicals, etc.). 1405 sentences are for
training, another 201 sentences are for validation. They are publicly available
at https://github.com/YerevaNN/BioRelEx/releases. Another 404 sentences are for
testing, which are kept private at this Codalab competition
https://competitions.codalab.org/competitions/20468. All sentences contain words
"bind", "bound" or "binding". For every sentence we provide: 1) Complete
annotations of all biological entities that appear in the sentence 2) Entity
types (32 types) and grounding information for most of the proteins and families
(links to uniprot, interpro and other databases) 3) Coreference between entities
in the same sentence (e.g. abbreviations and synonyms) 4) Binding interactions
between the annotated entities 5) Binding interaction types: positive, negative
(A does not bind B) and neutral (A may bind to B)
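A usage sketch (not part of the original release notes): each sentence's binding interactions can be read as (argument, interaction type, argument) triples from the harmonized KB view. Only the public training and validation splits are reachable this way, and the configuration and field names are assumptions.
```python
# Sketch: resolve relation argument IDs back to entity surface forms for one sentence.
from datasets import load_dataset

ds = load_dataset("bigbio/biorelex", name="biorelex_bigbio_kb", trust_remote_code=True)

example = ds["train"][0]
entities = {ent["id"]: ent for ent in example["entities"]}
for rel in example["relations"]:
    arg1 = entities[rel["arg1_id"]]["text"]
    arg2 = entities[rel["arg2_id"]]["text"]
    print(arg1, "--", rel["type"], "--", arg2)
```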
## Citation Information
```
@inproceedings{khachatrian2019biorelex,
title = "{B}io{R}el{E}x 1.0: Biological Relation Extraction Benchmark",
author = "Khachatrian, Hrant and
Nersisyan, Lilit and
Hambardzumyan, Karen and
Galstyan, Tigran and
Hakobyan, Anna and
Arakelyan, Arsen and
Rzhetsky, Andrey and
Galstyan, Aram",
booktitle = "Proceedings of the 18th BioNLP Workshop and Shared Task",
month = aug,
year = "2019",
address = "Florence, Italy",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W19-5019",
doi = "10.18653/v1/W19-5019",
pages = "176--190"
}
```
| bigbio/biorelex | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-11-13T22:07:24+00:00 | {"language": ["en"], "license": "unknown", "multilinguality": "monolingual", "pretty_name": "BioRelEx", "bigbio_language": ["English"], "bigbio_license_shortname": "UNKNOWN", "homepage": "https://github.com/YerevaNN/BioRelEx", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "NAMED_ENTITY_DISAMBIGUATION", "RELATION_EXTRACTION", "COREFERENCE_RESOLUTION"]} | 2022-12-22T15:44:10+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-unknown #region-us
|
# Dataset Card for BioRelEx
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NER,NED,RE,COREF
BioRelEx is a biological relation extraction dataset. Version 1.0 contains 2,010
annotated sentences that describe binding interactions between various
biological entities (proteins, chemicals, etc.). 1405 sentences are for
training, another 201 sentences are for validation. They are publicly available
at URL Another 404 sentences are for
testing, which are kept private at this Codalab competition
URL All sentences contain words
"bind", "bound" or "binding". For every sentence we provide: 1) Complete
annotations of all biological entities that appear in the sentence 2) Entity
types (32 types) and grounding information for most of the proteins and families
(links to uniprot, interpro and other databases) 3) Coreference between entities
in the same sentence (e.g. abbreviations and synonyms) 4) Binding interactions
between the annotated entities 5) Binding interaction types: positive, negative
(A does not bind B) and neutral (A may bind to B)
| [
"# Dataset Card for BioRelEx",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,NED,RE,COREF\n\n\nBioRelEx is a biological relation extraction dataset. Version 1.0 contains 2010\nannotated sentences that describe binding interactions between various\nbiological entities (proteins, chemicals, etc.). 1405 sentences are for\ntraining, another 201 sentences are for validation. They are publicly available\nat URL Another 404 sentences are for\ntesting which are kept private for at this Codalab competition\nURL All sentences contain words\n\"bind\", \"bound\" or \"binding\". For every sentence we provide: 1) Complete\nannotations of all biological entities that appear in the sentence 2) Entity\ntypes (32 types) and grounding information for most of the proteins and families\n(links to uniprot, interpro and other databases) 3) Coreference between entities\nin the same sentence (e.g. abbreviations and synonyms) 4) Binding interactions\nbetween the annotated entities 5) Binding interaction types: positive, negative\n(A does not bind B) and neutral (A may bind to B)"
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-unknown #region-us \n",
"# Dataset Card for BioRelEx",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,NED,RE,COREF\n\n\nBioRelEx is a biological relation extraction dataset. Version 1.0 contains 2010\nannotated sentences that describe binding interactions between various\nbiological entities (proteins, chemicals, etc.). 1405 sentences are for\ntraining, another 201 sentences are for validation. They are publicly available\nat URL Another 404 sentences are for\ntesting which are kept private for at this Codalab competition\nURL All sentences contain words\n\"bind\", \"bound\" or \"binding\". For every sentence we provide: 1) Complete\nannotations of all biological entities that appear in the sentence 2) Entity\ntypes (32 types) and grounding information for most of the proteins and families\n(links to uniprot, interpro and other databases) 3) Coreference between entities\nin the same sentence (e.g. abbreviations and synonyms) 4) Binding interactions\nbetween the annotated entities 5) Binding interaction types: positive, negative\n(A does not bind B) and neutral (A may bind to B)"
] |
43ba162359f01d6436590ba2a014421b6ad09621 |
# Dataset Card for BioScope
## Dataset Description
- **Homepage:** https://rgai.inf.u-szeged.hu/node/105
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER
The BioScope corpus consists of medical and biological texts annotated for
negation, speculation and their linguistic scope. This was done to allow a
comparison between the development of systems for negation/hedge detection and
scope resolution. The BioScope corpus was annotated by two independent linguists
following the guidelines written by our linguist expert before the annotation of
the corpus was initiated.
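A hedged loading sketch (not part of the original corpus description): BioScope ships several sub-corpora (abstracts, full papers, clinical reports), so the configuration name below is an assumption; listing the available configurations first is the safer route.
```python
# Sketch: count negation/speculation annotation types in one BioScope sub-corpus.
# If the assumed config name does not exist, inspect the options with
# datasets.get_dataset_config_names("bigbio/bioscope") and pick a *_bigbio_kb one.
from collections import Counter

from datasets import load_dataset

ds = load_dataset(
    "bigbio/bioscope",
    name="bioscope_abstracts_bigbio_kb",  # assumed sub-corpus / schema name
    trust_remote_code=True,
)

print(Counter(ent["type"] for doc in ds["train"] for ent in doc["entities"]))
```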
## Citation Information
```
@article{vincze2008bioscope,
title={The BioScope corpus: biomedical texts annotated for uncertainty, negation and their scopes},
author={Vincze, Veronika and Szarvas, Gy{\"o}rgy and Farkas, Rich{\'a}rd and M{\'o}ra, Gy{\"o}rgy and Csirik, J{\'a}nos},
journal={BMC bioinformatics},
volume={9},
number={11},
pages={1--9},
year={2008},
publisher={BioMed Central}
}
```
| bigbio/bioscope | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-2.0",
"region:us"
] | 2022-11-13T22:07:28+00:00 | {"language": ["en"], "license": "cc-by-2.0", "multilinguality": "monolingual", "pretty_name": "BioScope", "bigbio_language": ["English"], "bigbio_license_shortname": "CC_BY_2p0", "homepage": "https://rgai.inf.u-szeged.hu/node/105", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION"]} | 2022-12-22T15:44:13+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-cc-by-2.0 #region-us
|
# Dataset Card for BioScope
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NER
The BioScope corpus consists of medical and biological texts annotated for
negation, speculation and their linguistic scope. This was done to allow a
comparison between the development of systems for negation/hedge detection and
scope resolution. The BioScope corpus was annotated by two independent linguists
following the guidelines written by our linguist expert before the annotation of
the corpus was initiated.
| [
"# Dataset Card for BioScope",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER\n\n\nThe BioScope corpus consists of medical and biological texts annotated for\nnegation, speculation and their linguistic scope. This was done to allow a\ncomparison between the development of systems for negation/hedge detection and\nscope resolution. The BioScope corpus was annotated by two independent linguists\nfollowing the guidelines written by our linguist expert before the annotation of\nthe corpus was initiated."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-cc-by-2.0 #region-us \n",
"# Dataset Card for BioScope",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER\n\n\nThe BioScope corpus consists of medical and biological texts annotated for\nnegation, speculation and their linguistic scope. This was done to allow a\ncomparison between the development of systems for negation/hedge detection and\nscope resolution. The BioScope corpus was annotated by two independent linguists\nfollowing the guidelines written by our linguist expert before the annotation of\nthe corpus was initiated."
] |
3a3381d786021c39e45750956cebf254669d21f2 |
# Dataset Card for CANTEMIST
## Dataset Description
- **Homepage:** https://temu.bsc.es/cantemist/?p=4338
- **Pubmed:** False
- **Public:** True
- **Tasks:** NER,NED,TXTCLASS
Collection of 1301 oncological clinical case reports written in Spanish, with tumor morphology mentions manually annotated and mapped by clinical experts to a controlled terminology. Every tumor morphology mention is linked to an eCIE-O code (the Spanish equivalent of ICD-O).
The original dataset is distributed in Brat format, and was randomly sampled into 3 subsets. The training, development and test sets contain 501, 500 and 300 documents each, respectively.
This dataset was designed for the CANcer TExt Mining Shared Task, sponsored by Plan-TL. The task is divided into 3 subtasks: CANTEMIST-NER, CANTEMIST-NORM and CANTEMIST-CODING.
CANTEMIST-NER track: requires finding automatically tumor morphology mentions. All tumor morphology mentions are defined by their corresponding character offsets in UTF-8 plain text medical documents.
CANTEMIST-NORM track: clinical concept normalization or named entity normalization task that requires to return all tumor morphology entity mentions together with their corresponding eCIE-O-3.1 codes i.e. finding and normalizing tumor morphology mentions.
CANTEMIST-CODING track: requires returning for each document a ranked list of its corresponding ICD-O-3 codes. This is essentially a sort of indexing, multi-label classification or oncology clinical coding task.
For further information, please visit https://temu.bsc.es/cantemist or send an email to [email protected]
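A usage sketch (not part of the official task text): for the NORM and CODING tracks, the eCIE-O codes attached to each tumor morphology mention can be collected per document from the harmonized KB view. The configuration and field names are assumptions following BigBio conventions.
```python
# Sketch: gather the set of eCIE-O codes per clinical case, i.e. the unit scored
# in the CODING subtask (field names assume the BigBio KB schema).
from collections import defaultdict

from datasets import load_dataset

ds = load_dataset("bigbio/cantemist", name="cantemist_bigbio_kb", trust_remote_code=True)

codes_per_doc = defaultdict(set)
for doc in ds["train"]:
    for ent in doc["entities"]:
        for norm in ent["normalized"]:
            codes_per_doc[doc["document_id"]].add(norm["db_id"])

print(list(codes_per_doc.items())[:3])
```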
## Citation Information
```
@article{miranda2020named,
title={Named Entity Recognition, Concept Normalization and Clinical Coding: Overview of the Cantemist Track for Cancer Text Mining in Spanish, Corpus, Guidelines, Methods and Results.},
author={Miranda-Escalada, Antonio and Farr{\'e}, Eul{\`a}lia and Krallinger, Martin},
journal={IberLEF@ SEPLN},
pages={303--323},
year={2020}
}
```
| bigbio/cantemist | [
"multilinguality:monolingual",
"language:es",
"license:cc-by-4.0",
"region:us"
] | 2022-11-13T22:07:32+00:00 | {"language": ["es"], "license": "cc-by-4.0", "multilinguality": "monolingual", "pretty_name": "CANTEMIST", "bigbio_language": ["Spanish"], "bigbio_license_shortname": "CC_BY_4p0", "homepage": "https://temu.bsc.es/cantemist/?p=4338", "bigbio_pubmed": false, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "NAMED_ENTITY_DISAMBIGUATION", "TEXT_CLASSIFICATION"]} | 2022-12-22T15:44:17+00:00 | [] | [
"es"
] | TAGS
#multilinguality-monolingual #language-Spanish #license-cc-by-4.0 #region-us
|
# Dataset Card for CANTEMIST
## Dataset Description
- Homepage: URL
- Pubmed: False
- Public: True
- Tasks: NER,NED,TXTCLASS
Collection of 1301 oncological clinical case reports written in Spanish, with tumor morphology mentions manually annotated and mapped by clinical experts to a controlled terminology. Every tumor morphology mention is linked to an eCIE-O code (the Spanish equivalent of ICD-O).
The original dataset is distributed in Brat format, and was randomly sampled into 3 subsets. The training, development and test sets contain 501, 500 and 300 documents each, respectively.
This dataset was designed for the CANcer TExt Mining Shared Task, sponsored by Plan-TL. The task is divided into 3 subtasks: CANTEMIST-NER, CANTEMIST-NORM and CANTEMIST-CODING.
CANTEMIST-NER track: requires finding automatically tumor morphology mentions. All tumor morphology mentions are defined by their corresponding character offsets in UTF-8 plain text medical documents.
CANTEMIST-NORM track: clinical concept normalization or named entity normalization task that requires to return all tumor morphology entity mentions together with their corresponding eCIE-O-3.1 codes i.e. finding and normalizing tumor morphology mentions.
CANTEMIST-CODING track: requires returning for each document a ranked list of its corresponding ICD-O-3 codes. This is essentially a sort of indexing, multi-label classification or oncology clinical coding task.
For further information, please visit URL or send an email to encargo-pln-life@URL
| [
"# Dataset Card for CANTEMIST",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: NER,NED,TXTCLASS\n\n\nCollection of 1301 oncological clinical case reports written in Spanish, with tumor morphology mentions manually annotated and mapped by clinical experts to a controlled terminology. Every tumor morphology mention is linked to an eCIE-O code (the Spanish equivalent of ICD-O).\n\nThe original dataset is distributed in Brat format, and was randomly sampled into 3 subsets. The training, development and test sets contain 501, 500 and 300 documents each, respectively.\n\nThis dataset was designed for the CANcer TExt Mining Shared Task, sponsored by Plan-TL. The task is divided in 3 subtasks: CANTEMIST-NER, CANTEMIST_NORM and CANTEMIST-CODING.\n\nCANTEMIST-NER track: requires finding automatically tumor morphology mentions. All tumor morphology mentions are defined by their corresponding character offsets in UTF-8 plain text medical documents. \n\nCANTEMIST-NORM track: clinical concept normalization or named entity normalization task that requires to return all tumor morphology entity mentions together with their corresponding eCIE-O-3.1 codes i.e. finding and normalizing tumor morphology mentions.\n\nCANTEMIST-CODING track: requires returning for each of document a ranked list of its corresponding ICD-O-3 codes. This it is essentially a sort of indexing or multi-label classification task or oncology clinical coding. \n\nFor further information, please visit URL or send an email to encargo-pln-life@URL"
] | [
"TAGS\n#multilinguality-monolingual #language-Spanish #license-cc-by-4.0 #region-us \n",
"# Dataset Card for CANTEMIST",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: NER,NED,TXTCLASS\n\n\nCollection of 1301 oncological clinical case reports written in Spanish, with tumor morphology mentions manually annotated and mapped by clinical experts to a controlled terminology. Every tumor morphology mention is linked to an eCIE-O code (the Spanish equivalent of ICD-O).\n\nThe original dataset is distributed in Brat format, and was randomly sampled into 3 subsets. The training, development and test sets contain 501, 500 and 300 documents each, respectively.\n\nThis dataset was designed for the CANcer TExt Mining Shared Task, sponsored by Plan-TL. The task is divided in 3 subtasks: CANTEMIST-NER, CANTEMIST_NORM and CANTEMIST-CODING.\n\nCANTEMIST-NER track: requires finding automatically tumor morphology mentions. All tumor morphology mentions are defined by their corresponding character offsets in UTF-8 plain text medical documents. \n\nCANTEMIST-NORM track: clinical concept normalization or named entity normalization task that requires to return all tumor morphology entity mentions together with their corresponding eCIE-O-3.1 codes i.e. finding and normalizing tumor morphology mentions.\n\nCANTEMIST-CODING track: requires returning for each of document a ranked list of its corresponding ICD-O-3 codes. This it is essentially a sort of indexing or multi-label classification task or oncology clinical coding. \n\nFor further information, please visit URL or send an email to encargo-pln-life@URL"
] |
7b45cb510fffc61d8ef750ae08dce3e242c38385 |
# Dataset Card for CAS
## Dataset Description
- **Homepage:** https://clementdalloux.fr/?page_id=28
- **Pubmed:** False
- **Public:** False
- **Tasks:** TXTCLASS
We manually annotated two corpora from the biomedical field. The ESSAI corpus contains clinical trial protocols in French. They were mainly obtained from the National Cancer Institute. The typical protocol consists of two parts: the summary of the trial, which indicates the purpose of the trial and the methods applied; and a detailed description of the trial with the inclusion and exclusion criteria. The CAS corpus contains clinical cases published in scientific literature and training material. They are published in different journals from French-speaking countries (France, Belgium, Switzerland, Canada, African countries, tropical countries) and are related to various medical specialties (cardiology, urology, oncology, obstetrics, pulmonology, gastro-enterology). The purpose of clinical cases is to describe clinical situations of patients. Hence, their content is close to the content of clinical narratives (description of diagnoses, treatments or procedures, evolution, family history, expected audience, etc.). In clinical cases, the negation is frequently used for describing the patient signs, symptoms, and diagnosis. Speculation is present as well but less frequently.
This version only contains the annotated CAS corpus
## Citation Information
```
@inproceedings{grabar-etal-2018-cas,
title = {{CAS}: {F}rench Corpus with Clinical Cases},
author = {Grabar, Natalia and Claveau, Vincent and Dalloux, Cl{\'e}ment},
year = 2018,
month = oct,
booktitle = {
Proceedings of the Ninth International Workshop on Health Text Mining and
Information Analysis
},
publisher = {Association for Computational Linguistics},
address = {Brussels, Belgium},
pages = {122--128},
doi = {10.18653/v1/W18-5614},
url = {https://aclanthology.org/W18-5614},
abstract = {
Textual corpora are extremely important for various NLP applications as
they provide information necessary for creating, setting and testing these
applications and the corresponding tools. They are also crucial for
designing reliable methods and reproducible results. Yet, in some areas,
such as the medical area, due to confidentiality or to ethical reasons, it
is complicated and even impossible to access textual data representative of
those produced in these areas. We propose the CAS corpus built with
clinical cases, such as they are reported in the published scientific
literature in French. We describe this corpus, currently containing over
397,000 word occurrences, and the existing linguistic and semantic
annotations.
}
}
```
| bigbio/cas | [
"multilinguality:monolingual",
"language:fr",
"license:other",
"region:us"
] | 2022-11-13T22:07:35+00:00 | {"language": ["fr"], "license": "other", "multilinguality": "monolingual", "pretty_name": "CAS", "bigbio_language": ["French"], "bigbio_license_shortname": "DUA", "homepage": "https://clementdalloux.fr/?page_id=28", "bigbio_pubmed": false, "bigbio_public": false, "bigbio_tasks": ["TEXT_CLASSIFICATION"]} | 2022-12-22T15:44:18+00:00 | [] | [
"fr"
] | TAGS
#multilinguality-monolingual #language-French #license-other #region-us
|
# Dataset Card for CAS
## Dataset Description
- Homepage: URL
- Pubmed: False
- Public: False
- Tasks: TXTCLASS
We manually annotated two corpora from the biomedical field. The ESSAI corpus contains clinical trial protocols in French. They were mainly obtained from the National Cancer Institute. The typical protocol consists of two parts: the summary of the trial, which indicates the purpose of the trial and the methods applied; and a detailed description of the trial with the inclusion and exclusion criteria. The CAS corpus contains clinical cases published in scientific literature and training material. They are published in different journals from French-speaking countries (France, Belgium, Switzerland, Canada, African countries, tropical countries) and are related to various medical specialties (cardiology, urology, oncology, obstetrics, pulmonology, gastro-enterology). The purpose of clinical cases is to describe clinical situations of patients. Hence, their content is close to the content of clinical narratives (description of diagnoses, treatments or procedures, evolution, family history, expected audience, etc.). In clinical cases, the negation is frequently used for describing the patient signs, symptoms, and diagnosis. Speculation is present as well but less frequently.
This version only contains the annotated CAS corpus
| [
"# Dataset Card for CAS",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: False\n- Tasks: TXTCLASS\n\n\nWe manually annotated two corpora from the biomedical field. The ESSAI corpus contains clinical trial protocols in French. They were mainly obtained from the National Cancer Institute The typical protocol consists of two parts: the summary of the trial, which indicates the purpose of the trial and the methods applied; and a detailed description of the trial with the inclusion and exclusion criteria. The CAS corpus contains clinical cases published in scientific literature and training material. They are published in different journals from French-speaking countries (France, Belgium, Switzerland, Canada, African countries, tropical countries) and are related to various medical specialties (cardiology, urology, oncology, obstetrics, pulmonology, gastro-enterology). The purpose of clinical cases is to describe clinical situations of patients. Hence, their content is close to the content of clinical narratives (description of diagnoses, treatments or procedures, evolution, family history, expected audience, etc.). In clinical cases, the negation is frequently used for describing the patient signs, symptoms, and diagnosis. Speculation is present as well but less frequently.\n\nThis version only contain the annotated CAS corpus"
] | [
"TAGS\n#multilinguality-monolingual #language-French #license-other #region-us \n",
"# Dataset Card for CAS",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: False\n- Tasks: TXTCLASS\n\n\nWe manually annotated two corpora from the biomedical field. The ESSAI corpus contains clinical trial protocols in French. They were mainly obtained from the National Cancer Institute The typical protocol consists of two parts: the summary of the trial, which indicates the purpose of the trial and the methods applied; and a detailed description of the trial with the inclusion and exclusion criteria. The CAS corpus contains clinical cases published in scientific literature and training material. They are published in different journals from French-speaking countries (France, Belgium, Switzerland, Canada, African countries, tropical countries) and are related to various medical specialties (cardiology, urology, oncology, obstetrics, pulmonology, gastro-enterology). The purpose of clinical cases is to describe clinical situations of patients. Hence, their content is close to the content of clinical narratives (description of diagnoses, treatments or procedures, evolution, family history, expected audience, etc.). In clinical cases, the negation is frequently used for describing the patient signs, symptoms, and diagnosis. Speculation is present as well but less frequently.\n\nThis version only contain the annotated CAS corpus"
] |
6f0dbdfff00869acca7865a6defdefa1c63397bc |
# Dataset Card for CellFinder
## Dataset Description
- **Homepage:** https://www.informatik.hu-berlin.de/de/forschung/gebiete/wbi/resources/cellfinder/
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER
The CellFinder project aims to create a stem cell data repository by linking information from existing public databases and by performing text mining on the research literature. The first version of the corpus is composed of 10 full text documents containing more than 2,100 sentences, 65,000 tokens and 5,200 annotations for entities. The corpus has been annotated with six types of entities (anatomical parts, cell components, cell lines, cell types, genes/protein and species) with an overall inter-annotator agreement around 80%.
See: https://www.informatik.hu-berlin.de/de/forschung/gebiete/wbi/resources/cellfinder/
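A short verification sketch (not from the original card), assuming the usual BigBio configuration and field names: tallying entity types should surface the six annotated categories mentioned above.
```python
# Sketch: tally entity types across the corpus to recover the six categories.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("bigbio/cellfinder", name="cellfinder_bigbio_kb", trust_remote_code=True)

print(Counter(ent["type"] for doc in ds["train"] for ent in doc["entities"]))
```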
## Citation Information
```
@inproceedings{neves2012annotating,
title = {Annotating and evaluating text for stem cell research},
author = {Neves, Mariana and Damaschun, Alexander and Kurtz, Andreas and Leser, Ulf},
year = 2012,
booktitle = {
Proceedings of the Third Workshop on Building and Evaluation Resources for
Biomedical Text Mining (BioTxtM 2012) at Language Resources and Evaluation
(LREC). Istanbul, Turkey
},
pages = {16--23},
organization = {Citeseer}
}
```
| bigbio/cellfinder | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] | 2022-11-13T22:07:39+00:00 | {"language": ["en"], "license": "cc-by-sa-3.0", "multilinguality": "monolingual", "pretty_name": "CellFinder", "bigbio_language": ["English"], "bigbio_license_shortname": "CC_BY_SA_3p0", "homepage": "https://www.informatik.hu-berlin.de/de/forschung/gebiete/wbi/resources/cellfinder/", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION"]} | 2022-12-22T15:44:19+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-cc-by-sa-3.0 #region-us
|
# Dataset Card for CellFinder
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NER
The CellFinder project aims to create a stem cell data repository by linking information from existing public databases and by performing text mining on the research literature. The first version of the corpus is composed of 10 full text documents containing more than 2,100 sentences, 65,000 tokens and 5,200 annotations for entities. The corpus has been annotated with six types of entities (anatomical parts, cell components, cell lines, cell types, genes/protein and species) with an overall inter-annotator agreement around 80%.
See: URL
| [
"# Dataset Card for CellFinder",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER\n\n\nThe CellFinder project aims to create a stem cell data repository by linking information from existing public databases and by performing text mining on the research literature. The first version of the corpus is composed of 10 full text documents containing more than 2,100 sentences, 65,000 tokens and 5,200 annotations for entities. The corpus has been annotated with six types of entities (anatomical parts, cell components, cell lines, cell types, genes/protein and species) with an overall inter-annotator agreement around 80%.\n\nSee: URL"
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-cc-by-sa-3.0 #region-us \n",
"# Dataset Card for CellFinder",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER\n\n\nThe CellFinder project aims to create a stem cell data repository by linking information from existing public databases and by performing text mining on the research literature. The first version of the corpus is composed of 10 full text documents containing more than 2,100 sentences, 65,000 tokens and 5,200 annotations for entities. The corpus has been annotated with six types of entities (anatomical parts, cell components, cell lines, cell types, genes/protein and species) with an overall inter-annotator agreement around 80%.\n\nSee: URL"
] |
1edad51fe17c6e0321a7aaebfefecda9347783c6 |
# Dataset Card for CHEBI Corpus
## Dataset Description
- **Homepage:** http://www.nactem.ac.uk/chebi
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,RE
The ChEBI corpus contains 199 annotated abstracts and 100 annotated full papers.
All documents in the corpus have been annotated for named entities and relations
between these. In total, our corpus provides over 15,000 named entity annotations
and over 6,000 relations between entities.
## Citation Information
```
@inproceedings{Shardlow2018,
title = {
A New Corpus to Support Text Mining for the Curation of Metabolites in the
{ChEBI} Database
},
author = {
Shardlow, M J and Nguyen, N and Owen, G and O'Donovan, C and Leach, A and
McNaught, J and Turner, S and Ananiadou, S
},
year = 2018,
month = may,
booktitle = {
Proceedings of the Eleventh International Conference on Language Resources
and Evaluation ({LREC} 2018)
},
location = {Miyazaki, Japan},
pages = {280--285},
conference = {
Eleventh International Conference on Language Resources and Evaluation
(LREC 2018)
},
language = {en}
}
```
| bigbio/chebi_nactem | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-11-13T22:07:43+00:00 | {"language": ["en"], "license": "cc-by-4.0", "multilinguality": "monolingual", "pretty_name": "CHEBI Corpus", "bigbio_language": ["English"], "bigbio_license_shortname": "CC_BY_4p0", "homepage": "http://www.nactem.ac.uk/chebi", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "RELATION_EXTRACTION"]} | 2022-12-22T15:44:20+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-cc-by-4.0 #region-us
|
# Dataset Card for CHEBI Corpus
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NER,RE
The ChEBI corpus contains 199 annotated abstracts and 100 annotated full papers.
All documents in the corpus have been annotated for named entities and relations
between these. In total, our corpus provides over 15,000 named entity annotations
and over 6,000 relations between entities.
| [
"# Dataset Card for CHEBI Corpus",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,RE\n\n\nThe ChEBI corpus contains 199 annotated abstracts and 100 annotated full papers.\nAll documents in the corpus have been annotated for named entities and relations\nbetween these. In total, our corpus provides over 15000 named entity annotations\nand over 6,000 relations between entities."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-cc-by-4.0 #region-us \n",
"# Dataset Card for CHEBI Corpus",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,RE\n\n\nThe ChEBI corpus contains 199 annotated abstracts and 100 annotated full papers.\nAll documents in the corpus have been annotated for named entities and relations\nbetween these. In total, our corpus provides over 15000 named entity annotations\nand over 6,000 relations between entities."
] |
4c4cf9abe6937e2063db3a5d331cc50a7af37f59 |
# Dataset Card for CHEMDNER
## Dataset Description
- **Homepage:** https://biocreative.bioinformatics.udel.edu/resources/biocreative-iv/chemdner-corpus/
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,TXTCLASS
We present the CHEMDNER corpus, a collection of 10,000 PubMed abstracts that
contain a total of 84,355 chemical entity mentions labeled manually by expert
chemistry literature curators, following annotation guidelines specifically
defined for this task. The abstracts of the CHEMDNER corpus were selected to be
representative for all major chemical disciplines. Each of the chemical entity
mentions was manually labeled according to its structure-associated chemical
entity mention (SACEM) class: abbreviation, family, formula, identifier,
multiple, systematic and trivial.
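A hedged exploration sketch (not part of the original corpus description): the mention class distribution can be tallied from the entity annotations. Whether the fine-grained SACEM class or only a coarse chemical label is exposed depends on the chosen configuration, and the names below are assumptions.
```python
# Sketch: tally chemical mention classes over the training abstracts.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("bigbio/chemdner", name="chemdner_bigbio_kb", trust_remote_code=True)

mention_classes = Counter(ent["type"] for doc in ds["train"] for ent in doc["entities"])
print(mention_classes.most_common())
```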
## Citation Information
```
@article{Krallinger2015,
title = {The CHEMDNER corpus of chemicals and drugs and its annotation principles},
author = {
Krallinger, Martin and Rabal, Obdulia and Leitner, Florian and Vazquez,
Miguel and Salgado, David and Lu, Zhiyong and Leaman, Robert and Lu, Yanan
and Ji, Donghong and Lowe, Daniel M. and Sayle, Roger A. and
Batista-Navarro, Riza Theresa and Rak, Rafal and Huber, Torsten and
Rockt{"a}schel, Tim and Matos, S{'e}rgio and Campos, David and Tang,
Buzhou and Xu, Hua and Munkhdalai, Tsendsuren and Ryu, Keun Ho and Ramanan,
S. V. and Nathan, Senthil and {\v{Z}}itnik, Slavko and Bajec, Marko and
Weber, Lutz and Irmer, Matthias and Akhondi, Saber A. and Kors, Jan A. and
Xu, Shuo and An, Xin and Sikdar, Utpal Kumar and Ekbal, Asif and Yoshioka,
Masaharu and Dieb, Thaer M. and Choi, Miji and Verspoor, Karin and Khabsa,
Madian and Giles, C. Lee and Liu, Hongfang and Ravikumar, Komandur
Elayavilli and Lamurias, Andre and Couto, Francisco M. and Dai, Hong-Jie
and Tsai, Richard Tzong-Han and Ata, Caglar and Can, Tolga and Usi{\'e},
Anabel and Alves, Rui and Segura-Bedmar, Isabel and Mart{\'i}nez, Paloma
and Oyarzabal, Julen and Valencia, Alfonso
},
year = 2015,
month = {Jan},
day = 19,
journal = {Journal of Cheminformatics},
volume = 7,
number = 1,
pages = {S2},
doi = {10.1186/1758-2946-7-S1-S2},
issn = {1758-2946},
url = {https://doi.org/10.1186/1758-2946-7-S1-S2},
abstract = {
The automatic extraction of chemical information from text requires the
recognition of chemical entity mentions as one of its key steps. When
developing supervised named entity recognition (NER) systems, the
availability of a large, manually annotated text corpus is desirable.
Furthermore, large corpora permit the robust evaluation and comparison of
different approaches that detect chemicals in documents. We present the
CHEMDNER corpus, a collection of 10,000 PubMed abstracts that contain a
total of 84,355 chemical entity mentions labeled manually by expert
chemistry literature curators, following annotation guidelines specifically
defined for this task. The abstracts of the CHEMDNER corpus were selected
to be representative for all major chemical disciplines. Each of the
chemical entity mentions was manually labeled according to its
structure-associated chemical entity mention (SACEM) class: abbreviation,
family, formula, identifier, multiple, systematic and trivial. The
difficulty and consistency of tagging chemicals in text was measured using
an agreement study between annotators, obtaining a percentage agreement of
91. For a subset of the CHEMDNER corpus (the test set of 3,000 abstracts)
we provide not only the Gold Standard manual annotations, but also mentions
automatically detected by the 26 teams that participated in the BioCreative
IV CHEMDNER chemical mention recognition task. In addition, we release the
CHEMDNER silver standard corpus of automatically extracted mentions from
17,000 randomly selected PubMed abstracts. A version of the CHEMDNER corpus
in the BioC format has been generated as well. We propose a standard for
required minimum information about entity annotations for the construction
of domain specific corpora on chemical and drug entities. The CHEMDNER
corpus and annotation guidelines are available at:
http://www.biocreative.org/resources/biocreative-iv/chemdner-corpus/
}
}
```
| bigbio/chemdner | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-11-13T22:07:46+00:00 | {"language": ["en"], "license": "unknown", "multilinguality": "monolingual", "pretty_name": "CHEMDNER", "bigbio_language": ["English"], "bigbio_license_shortname": "UNKNOWN", "homepage": "https://biocreative.bioinformatics.udel.edu/resources/biocreative-iv/chemdner-corpus/", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "TEXT_CLASSIFICATION"]} | 2022-12-22T15:44:21+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-unknown #region-us
|
# Dataset Card for CHEMDNER
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NER,TXTCLASS
We present the CHEMDNER corpus, a collection of 10,000 PubMed abstracts that
contain a total of 84,355 chemical entity mentions labeled manually by expert
chemistry literature curators, following annotation guidelines specifically
defined for this task. The abstracts of the CHEMDNER corpus were selected to be
representative for all major chemical disciplines. Each of the chemical entity
mentions was manually labeled according to its structure-associated chemical
entity mention (SACEM) class: abbreviation, family, formula, identifier,
multiple, systematic and trivial.
| [
"# Dataset Card for CHEMDNER",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,TXTCLASS\n\n\nWe present the CHEMDNER corpus, a collection of 10,000 PubMed abstracts that\ncontain a total of 84,355 chemical entity mentions labeled manually by expert\nchemistry literature curators, following annotation guidelines specifically\ndefined for this task. The abstracts of the CHEMDNER corpus were selected to be\nrepresentative for all major chemical disciplines. Each of the chemical entity\nmentions was manually labeled according to its structure-associated chemical\nentity mention (SACEM) class: abbreviation, family, formula, identifier,\nmultiple, systematic and trivial."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-unknown #region-us \n",
"# Dataset Card for CHEMDNER",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,TXTCLASS\n\n\nWe present the CHEMDNER corpus, a collection of 10,000 PubMed abstracts that\ncontain a total of 84,355 chemical entity mentions labeled manually by expert\nchemistry literature curators, following annotation guidelines specifically\ndefined for this task. The abstracts of the CHEMDNER corpus were selected to be\nrepresentative for all major chemical disciplines. Each of the chemical entity\nmentions was manually labeled according to its structure-associated chemical\nentity mention (SACEM) class: abbreviation, family, formula, identifier,\nmultiple, systematic and trivial."
] |
86afccf3ccc614f817a7fad0692bf62fbc5ce469 |
# Dataset Card for ChemProt
## Dataset Description
- **Homepage:** https://biocreative.bioinformatics.udel.edu/tasks/biocreative-vi/track-5/
- **Pubmed:** True
- **Public:** True
- **Tasks:** RE,NER
The BioCreative VI Chemical-Protein interaction dataset identifies entities of
chemicals and proteins and their likely relation to one another. Compounds are
generally agonists (activators) or antagonists (inhibitors) of proteins.
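A minimal usage sketch (not from the original card): the chemical-protein relation label distribution can be compared across splits through the harmonized KB view, with the configuration and field names assumed from BigBio conventions.
```python
# Sketch: compare relation-label distributions across the available splits.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("bigbio/chemprot", name="chemprot_bigbio_kb", trust_remote_code=True)

for split_name, split in ds.items():
    labels = Counter(rel["type"] for doc in split for rel in doc["relations"])
    print(split_name, labels.most_common(5))
```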
## Citation Information
```
@article{DBLP:journals/biodb/LiSJSWLDMWL16,
author = {Krallinger, M., Rabal, O., Lourenço, A.},
title = {Overview of the BioCreative VI chemical-protein interaction Track},
journal = {Proceedings of the BioCreative VI Workshop,},
volume = {141-146},
year = {2017},
url = {https://biocreative.bioinformatics.udel.edu/tasks/biocreative-vi/track-5/},
doi = {},
biburl = {},
bibsource = {}
}
```
| bigbio/chemprot | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-11-13T22:07:50+00:00 | {"language": ["en"], "license": "other", "multilinguality": "monolingual", "pretty_name": "ChemProt", "bigbio_language": ["English"], "bigbio_license_shortname": "PUBLIC_DOMAIN_MARK_1p0", "homepage": "https://biocreative.bioinformatics.udel.edu/tasks/biocreative-vi/track-5/", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["RELATION_EXTRACTION", "NAMED_ENTITY_RECOGNITION"]} | 2022-12-22T15:44:22+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-other #region-us
|
# Dataset Card for ChemProt
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: RE,NER
The BioCreative VI Chemical-Protein interaction dataset identifies entities of
chemicals and proteins and their likely relation to one another. Compounds are
generally agonists (activators) or antagonists (inhibitors) of proteins.
| [
"# Dataset Card for ChemProt",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: RE,NER\n\n\nThe BioCreative VI Chemical-Protein interaction dataset identifies entities of\nchemicals and proteins and their likely relation to one other. Compounds are\ngenerally agonists (activators) or antagonists (inhibitors) of proteins."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-other #region-us \n",
"# Dataset Card for ChemProt",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: RE,NER\n\n\nThe BioCreative VI Chemical-Protein interaction dataset identifies entities of\nchemicals and proteins and their likely relation to one other. Compounds are\ngenerally agonists (activators) or antagonists (inhibitors) of proteins."
] |
36e5df0d60dfc5152cd22a807ade73f135105008 |
# Dataset Card for CHIA
## Dataset Description
- **Homepage:** https://github.com/WengLab-InformaticsResearch/CHIA
- **Pubmed:** False
- **Public:** True
- **Tasks:** NER,RE
A large annotated corpus of patient eligibility criteria extracted from 1,000
interventional, Phase IV clinical trials registered in ClinicalTrials.gov. This
dataset includes 12,409 annotated eligibility criteria, represented by 41,487
distinctive entities of 15 entity types and 25,017 relationships of 12
relationship types.
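A usage sketch (not part of the original description), with configuration and field names assumed from the BigBio conventions: each record carries the raw criterion text in its passages alongside the entity and relationship annotations.
```python
# Sketch: show one eligibility-criteria document and its annotation inventory.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("bigbio/chia", name="chia_bigbio_kb", trust_remote_code=True)

doc = ds["train"][0]
print(doc["passages"][0]["text"][:200])  # raw criteria text (truncated for display)
print(Counter(ent["type"] for ent in doc["entities"]))
print(Counter(rel["type"] for rel in doc["relations"]))
```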
## Citation Information
```
@article{kury2020chia,
title = {Chia, a large annotated corpus of clinical trial eligibility criteria},
author = {
Kury, Fabr{\'\i}cio and Butler, Alex and Yuan, Chi and Fu, Li-heng and
Sun, Yingcheng and Liu, Hao and Sim, Ida and Carini, Simona and Weng,
Chunhua
},
year = 2020,
journal = {Scientific data},
publisher = {Nature Publishing Group},
volume = 7,
number = 1,
pages = {1--11}
}
```
| bigbio/chia | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2022-11-13T22:07:53+00:00 | {"language": ["en"], "license": "cc-by-4.0", "multilinguality": "monolingual", "pretty_name": "CHIA", "bigbio_language": ["English"], "bigbio_license_shortname": "CC_BY_4p0", "homepage": "https://github.com/WengLab-InformaticsResearch/CHIA", "bigbio_pubmed": false, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "RELATION_EXTRACTION"]} | 2022-12-22T15:44:25+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-cc-by-4.0 #region-us
|
# Dataset Card for CHIA
## Dataset Description
- Homepage: URL
- Pubmed: False
- Public: True
- Tasks: NER,RE
A large annotated corpus of patient eligibility criteria extracted from 1,000
interventional, Phase IV clinical trials registered in URL. This
dataset includes 12,409 annotated eligibility criteria, represented by 41,487
distinctive entities of 15 entity types and 25,017 relationships of 12
relationship types.
| [
"# Dataset Card for CHIA",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: NER,RE\n\n\nA large annotated corpus of patient eligibility criteria extracted from 1,000\ninterventional, Phase IV clinical trials registered in URL. This\ndataset includes 12,409 annotated eligibility criteria, represented by 41,487\ndistinctive entities of 15 entity types and 25,017 relationships of 12\nrelationship types."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-cc-by-4.0 #region-us \n",
"# Dataset Card for CHIA",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: NER,RE\n\n\nA large annotated corpus of patient eligibility criteria extracted from 1,000\ninterventional, Phase IV clinical trials registered in URL. This\ndataset includes 12,409 annotated eligibility criteria, represented by 41,487\ndistinctive entities of 15 entity types and 25,017 relationships of 12\nrelationship types."
] |
77cd7e86dc1da3ccfd4a97049a6a28900fc3f88f |
# Dataset Card for Citation GIA Test Collection
## Dataset Description
- **Homepage:** https://www.ncbi.nlm.nih.gov/research/bionlp/Tools/gnormplus/
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,NED
The Citation GIA Test Collection was recently created for gene indexing at the
NLM and includes 151 PubMed abstracts with both mention-level and document-level
annotations. They are selected because both have a focus on human genes.
## Citation Information
```
@article{Wei2015,
title = {
{GNormPlus}: An Integrative Approach for Tagging Genes, Gene Families,
and Protein Domains
},
author = {Chih-Hsuan Wei and Hung-Yu Kao and Zhiyong Lu},
year = 2015,
journal = {{BioMed} Research International},
publisher = {Hindawi Limited},
volume = 2015,
pages = {1--7},
doi = {10.1155/2015/918710},
url = {https://doi.org/10.1155/2015/918710}
}
```
| bigbio/citation_gia_test_collection | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-11-13T22:07:57+00:00 | {"language": ["en"], "license": "unknown", "multilinguality": "monolingual", "pretty_name": "Citation GIA Test Collection", "bigbio_language": ["English"], "bigbio_license_shortname": "UNKNOWN", "homepage": "https://www.ncbi.nlm.nih.gov/research/bionlp/Tools/gnormplus/", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "NAMED_ENTITY_DISAMBIGUATION"]} | 2022-12-22T15:44:27+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-unknown #region-us
|
# Dataset Card for Citation GIA Test Collection
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NER,NED
The Citation GIA Test Collection was recently created for gene indexing at the
NLM and includes 151 PubMed abstracts with both mention-level and document-level
annotations. They are selected because both have a focus on human genes.
| [
"# Dataset Card for Citation GIA Test Collection",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,NED\n\n\nThe Citation GIA Test Collection was recently created for gene indexing at the\nNLM and includes 151 PubMed abstracts with both mention-level and document-level\nannotations. They are selected because both have a focus on human genes."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-unknown #region-us \n",
"# Dataset Card for Citation GIA Test Collection",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,NED\n\n\nThe Citation GIA Test Collection was recently created for gene indexing at the\nNLM and includes 151 PubMed abstracts with both mention-level and document-level\nannotations. They are selected because both have a focus on human genes."
] |
a0ae1d2efa285d33084e35bbb25f56f28360aef6 |
# Dataset Card for CodiEsp
## Dataset Description
- **Homepage:** https://temu.bsc.es/codiesp/
- **Pubmed:** False
- **Public:** True
- **Tasks:** TXTCLASS,NER,NED
Synthetic corpus of 1,000 manually selected clinical case studies in Spanish
that was designed for the Clinical Case Coding in Spanish Shared Task, as part
of the CLEF 2020 conference.
The goal of the task was to automatically assign ICD10 codes (CIE-10, in
Spanish) to clinical case documents, being evaluated against manually generated
ICD10 codifications. The CodiEsp corpus was selected manually by practicing
physicians and clinical documentalists and annotated by clinical coding
professionals meeting strict quality criteria. They reached an inter-annotator
agreement of 88.6% for diagnosis coding, 88.9% for procedure coding and 80.5%
for the textual reference annotation.
The final collection of 1,000 clinical cases that make up the corpus had a total
of 16,504 sentences and 396,988 words. All documents are in Spanish language and
CIE10 is the coding terminology (the Spanish version of ICD10-CM and ICD10-PCS).
The CodiEsp corpus has been randomly sampled into three subsets. The train set
contains 500 clinical cases, while the development and test sets have 250
clinical cases each. In addition to these, a collection of 176,294 abstracts
from Lilacs and Ibecs with the corresponding ICD10 codes (ICD10-CM and
ICD10-PCS) was provided by the task organizers. Every abstract has at least one
associated code, with an average of 2.5 ICD10 codes per abstract.
The CodiEsp track was divided into three sub-tracks (2 main and 1 exploratory):
- CodiEsp-D: The Diagnosis Coding sub-task, which requires automatic ICD10-CM
[CIE10-Diagnóstico] code assignment.
- CodiEsp-P: The Procedure Coding sub-task, which requires automatic ICD10-PCS
[CIE10-Procedimiento] code assignment.
- CodiEsp-X: The Explainable AI exploratory sub-task, which requires to submit
the reference to the predicted codes (both ICD10-CM and ICD10-PCS). The goal
of this novel task was not only to predict the correct codes but also to
present the reference in the text that supports the code predictions.
For further information, please visit https://temu.bsc.es/codiesp or send an
email to [email protected]
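As a rough illustration of the coding task, the sketch below loads the diagnosis sub-track through the Hugging Face `datasets` library and prints the ICD10-CM codes assigned to one clinical case. The config name and the `labels` field follow the BigBio text-classification schema and are assumptions rather than the official loader documentation.
```python
from datasets import load_dataset

# Hypothetical config name for the CodiEsp-D (diagnosis coding) sub-track.
codiesp_d = load_dataset("bigbio/codiesp", name="codiesp_D_bigbio_text", split="train")

case = codiesp_d[0]
print(case["document_id"])
print(case["text"][:200], "...")
print("ICD10-CM codes:", case["labels"])  # multi-label list of CIE-10 codes
```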
## Citation Information
```
@article{miranda2020overview,
title={Overview of Automatic Clinical Coding: Annotations, Guidelines, and Solutions for non-English Clinical Cases at CodiEsp Track of CLEF eHealth 2020.},
  author={Miranda-Escalada, Antonio and Gonzalez-Agirre, Aitor and Armengol-Estap{\'e}, Jordi and Krallinger, Martin},
journal={CLEF (Working Notes)},
volume={2020},
year={2020}
}
```
| bigbio/codiesp | [
"multilinguality:monolingual",
"language:es",
"license:cc-by-4.0",
"region:us"
] | 2022-11-13T22:08:01+00:00 | {"language": ["es"], "license": "cc-by-4.0", "multilinguality": "monolingual", "pretty_name": "CodiEsp", "bigbio_language": ["Spanish"], "bigbio_license_shortname": "CC_BY_4p0", "homepage": "https://temu.bsc.es/codiesp/", "bigbio_pubmed": false, "bigbio_public": true, "bigbio_tasks": ["TEXT_CLASSIFICATION", "NAMED_ENTITY_RECOGNITION", "NAMED_ENTITY_DISAMBIGUATION"]} | 2022-12-22T15:44:28+00:00 | [] | [
"es"
] | TAGS
#multilinguality-monolingual #language-Spanish #license-cc-by-4.0 #region-us
|
# Dataset Card for CodiEsp
## Dataset Description
- Homepage: URL
- Pubmed: False
- Public: True
- Tasks: TXTCLASS,NER,NED
Synthetic corpus of 1,000 manually selected clinical case studies in Spanish
that was designed for the Clinical Case Coding in Spanish Shared Task, as part
of the CLEF 2020 conference.
The goal of the task was to automatically assign ICD10 codes (CIE-10, in
Spanish) to clinical case documents, being evaluated against manually generated
ICD10 codifications. The CodiEsp corpus was selected manually by practicing
physicians and clinical documentalists and annotated by clinical coding
professionals meeting strict quality criteria. They reached an inter-annotator
agreement of 88.6% for diagnosis coding, 88.9% for procedure coding and 80.5%
for the textual reference annotation.
The final collection of 1,000 clinical cases that make up the corpus had a total
of 16,504 sentences and 396,988 words. All documents are in Spanish language and
CIE10 is the coding terminology (the Spanish version of ICD10-CM and ICD10-PCS).
The CodiEsp corpus has been randomly sampled into three subsets. The train set
contains 500 clinical cases, while the development and test sets have 250
clinical cases each. In addition to these, a collection of 176,294 abstracts
from Lilacs and Ibecs with the corresponding ICD10 codes (ICD10-CM and
ICD10-PCS) was provided by the task organizers. Every abstract has at least one
associated code, with an average of 2.5 ICD10 codes per abstract.
The CodiEsp track was divided into three sub-tracks (2 main and 1 exploratory):
- CodiEsp-D: The Diagnosis Coding sub-task, which requires automatic ICD10-CM
[CIE10-Diagnóstico] code assignment.
- CodiEsp-P: The Procedure Coding sub-task, which requires automatic ICD10-PCS
[CIE10-Procedimiento] code assignment.
- CodiEsp-X: The Explainable AI exploratory sub-task, which requires to submit
the reference to the predicted codes (both ICD10-CM and ICD10-PCS). The goal
of this novel task was not only to predict the correct codes but also to
present the reference in the text that supports the code predictions.
For further information, please visit URL or send an
email to encargo-pln-life@URL
| [
"# Dataset Card for CodiEsp",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: TXTCLASS,NER,NED\n\n\nSynthetic corpus of 1,000 manually selected clinical case studies in Spanish\nthat was designed for the Clinical Case Coding in Spanish Shared Task, as part\nof the CLEF 2020 conference.\n\nThe goal of the task was to automatically assign ICD10 codes (CIE-10, in\nSpanish) to clinical case documents, being evaluated against manually generated\nICD10 codifications. The CodiEsp corpus was selected manually by practicing\nphysicians and clinical documentalists and annotated by clinical coding\nprofessionals meeting strict quality criteria. They reached an inter-annotator\nagreement of 88.6% for diagnosis coding, 88.9% for procedure coding and 80.5%\nfor the textual reference annotation.\n\nThe final collection of 1,000 clinical cases that make up the corpus had a total\nof 16,504 sentences and 396,988 words. All documents are in Spanish language and\nCIE10 is the coding terminology (the Spanish version of ICD10-CM and ICD10-PCS).\nThe CodiEsp corpus has been randomly sampled into three subsets. The train set\ncontains 500 clinical cases, while the development and test sets have 250\nclinical cases each. In addition to these, a collection of 176,294 abstracts\nfrom Lilacs and Ibecs with the corresponding ICD10 codes (ICD10-CM and\nICD10-PCS) was provided by the task organizers. Every abstract has at least one\nassociated code, with an average of 2.5 ICD10 codes per abstract.\n\nThe CodiEsp track was divided into three sub-tracks (2 main and 1 exploratory):\n\n- CodiEsp-D: The Diagnosis Coding sub-task, which requires automatic ICD10-CM\n [CIE10-Diagnóstico] code assignment.\n- CodiEsp-P: The Procedure Coding sub-task, which requires automatic ICD10-PCS\n [CIE10-Procedimiento] code assignment.\n- CodiEsp-X: The Explainable AI exploratory sub-task, which requires to submit\n the reference to the predicted codes (both ICD10-CM and ICD10-PCS). The goal \n of this novel task was not only to predict the correct codes but also to \n present the reference in the text that supports the code predictions.\n\nFor further information, please visit URL or send an\nemail to encargo-pln-life@URL"
] | [
"TAGS\n#multilinguality-monolingual #language-Spanish #license-cc-by-4.0 #region-us \n",
"# Dataset Card for CodiEsp",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: TXTCLASS,NER,NED\n\n\nSynthetic corpus of 1,000 manually selected clinical case studies in Spanish\nthat was designed for the Clinical Case Coding in Spanish Shared Task, as part\nof the CLEF 2020 conference.\n\nThe goal of the task was to automatically assign ICD10 codes (CIE-10, in\nSpanish) to clinical case documents, being evaluated against manually generated\nICD10 codifications. The CodiEsp corpus was selected manually by practicing\nphysicians and clinical documentalists and annotated by clinical coding\nprofessionals meeting strict quality criteria. They reached an inter-annotator\nagreement of 88.6% for diagnosis coding, 88.9% for procedure coding and 80.5%\nfor the textual reference annotation.\n\nThe final collection of 1,000 clinical cases that make up the corpus had a total\nof 16,504 sentences and 396,988 words. All documents are in Spanish language and\nCIE10 is the coding terminology (the Spanish version of ICD10-CM and ICD10-PCS).\nThe CodiEsp corpus has been randomly sampled into three subsets. The train set\ncontains 500 clinical cases, while the development and test sets have 250\nclinical cases each. In addition to these, a collection of 176,294 abstracts\nfrom Lilacs and Ibecs with the corresponding ICD10 codes (ICD10-CM and\nICD10-PCS) was provided by the task organizers. Every abstract has at least one\nassociated code, with an average of 2.5 ICD10 codes per abstract.\n\nThe CodiEsp track was divided into three sub-tracks (2 main and 1 exploratory):\n\n- CodiEsp-D: The Diagnosis Coding sub-task, which requires automatic ICD10-CM\n [CIE10-Diagnóstico] code assignment.\n- CodiEsp-P: The Procedure Coding sub-task, which requires automatic ICD10-PCS\n [CIE10-Procedimiento] code assignment.\n- CodiEsp-X: The Explainable AI exploratory sub-task, which requires to submit\n the reference to the predicted codes (both ICD10-CM and ICD10-PCS). The goal \n of this novel task was not only to predict the correct codes but also to \n present the reference in the text that supports the code predictions.\n\nFor further information, please visit URL or send an\nemail to encargo-pln-life@URL"
] |
07eabc59f1564280eedc8992b62ed6a8f456e4d2 |
# Dataset Card for CT-EBM-SP
## Dataset Description
- **Homepage:** http://www.lllf.uam.es/ESP/nlpmedterm_en.html
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER
### Ctebmsp Abstracts
The "abstracts" subset of the Clinical Trials for Evidence-Based Medicine in Spanish
(CT-EBM-SP) corpus contains 500 abstracts of clinical trial studies in Spanish,
published in journals with a Creative Commons license. Most were downloaded from
the SciELO repository and free abstracts in PubMed.
Abstracts were retrieved with the query:
Clinical Trial[ptyp] AND “loattrfree full text”[sb] AND “spanish”[la].
(Information collected from 10.1186/s12911-021-01395-z)
### Ctebmsp Eudract
The "eudract" subset of the Clinical Trials for Evidence-Based Medicine in Spanish
(CT-EBM-SP) corpus contains announcements of clinical trial studies in Spanish,
drawn from the European Clinical Trials Register (EudraCT).
(Information collected from 10.1186/s12911-021-01395-z)
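Both document collections can, in principle, be pulled through the Hugging Face `datasets` library. The sketch below assumes separate BigBio KB configs for the abstracts and the EudraCT announcements; the exact config names are a guess and should be checked against the loader.
```python
from datasets import load_dataset

# Assumed config names for the two document collections.
abstracts = load_dataset("bigbio/ctebmsp", name="ctebmsp_abstracts_bigbio_kb", split="train")
eudract = load_dataset("bigbio/ctebmsp", name="ctebmsp_eudract_bigbio_kb", split="train")

for name, subset in [("abstracts", abstracts), ("eudract", eudract)]:
    n_mentions = sum(len(doc["entities"]) for doc in subset)
    print(f"{name}: {len(subset)} documents, {n_mentions} annotated UMLS entity mentions")
```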
## Citation Information
```
@article{CampillosLlanos2021,
author = {Leonardo Campillos-Llanos and
Ana Valverde-Mateos and
           Adri{\'{a}}n Capllonch-Carri{\'{o}}n and
Antonio Moreno-Sandoval},
title = {A clinical trials corpus annotated with {UMLS}
entities to enhance the access to evidence-based medicine},
journal = {{BMC} Medical Informatics and Decision Making},
volume = {21},
year = {2021},
url = {https://doi.org/10.1186/s12911-021-01395-z},
doi = {10.1186/s12911-021-01395-z},
biburl = {},
bibsource = {}
}
```
| bigbio/ctebmsp | [
"multilinguality:monolingual",
"language:es",
"license:cc-by-nc-4.0",
"region:us"
] | 2022-11-13T22:08:04+00:00 | {"language": ["es"], "license": "cc-by-nc-4.0", "multilinguality": "monolingual", "pretty_name": "CT-EBM-SP", "bigbio_language": ["Spanish"], "bigbio_license_shortname": "CC_BY_NC_4p0", "homepage": "http://www.lllf.uam.es/ESP/nlpmedterm_en.html", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION"]} | 2022-12-22T15:44:30+00:00 | [] | [
"es"
] | TAGS
#multilinguality-monolingual #language-Spanish #license-cc-by-nc-4.0 #region-us
|
# Dataset Card for CT-EBM-SP
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NER
### Ctebmsp Abstracts
The "abstracts" subset of the Clinical Trials for Evidence-Based Medicine in Spanish
(CT-EBM-SP) corpus contains 500 abstracts of clinical trial studies in Spanish,
published in journals with a Creative Commons license. Most were downloaded from
the SciELO repository and free abstracts in PubMed.
Abstracts were retrieved with the query:
Clinical Trial[ptyp] AND “loattrfree full text”[sb] AND “spanish”[la].
(Information collected from 10.1186/s12911-021-01395-z)
### Ctebmsp Eudract
The "abstracts" subset of the Clinical Trials for Evidence-Based Medicine in Spanish
(CT-EBM-SP) corpus contains 500 abstracts of clinical trial studies in Spanish,
published in journals with a Creative Commons license. Most were downloaded from
the SciELO repository and free abstracts in PubMed.
Abstracts were retrieved with the query:
Clinical Trial[ptyp] AND “loattrfree full text”[sb] AND “spanish”[la].
(Information collected from 10.1186/s12911-021-01395-z)
| [
"# Dataset Card for CT-EBM-SP",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER",
"### Ctebmsp Abstracts\n\nThe \"abstracts\" subset of the Clinical Trials for Evidence-Based Medicine in Spanish\n(CT-EBM-SP) corpus contains 500 abstracts of clinical trial studies in Spanish,\npublished in journals with a Creative Commons license. Most were downloaded from\nthe SciELO repository and free abstracts in PubMed.\n\nAbstracts were retrieved with the query:\nClinical Trial[ptyp] AND “loattrfree full text”[sb] AND “spanish”[la].\n\n(Information collected from 10.1186/s12911-021-01395-z)",
"### Ctebmsp Eudract\n\nThe \"abstracts\" subset of the Clinical Trials for Evidence-Based Medicine in Spanish\n(CT-EBM-SP) corpus contains 500 abstracts of clinical trial studies in Spanish,\npublished in journals with a Creative Commons license. Most were downloaded from\nthe SciELO repository and free abstracts in PubMed.\n\nAbstracts were retrieved with the query:\nClinical Trial[ptyp] AND “loattrfree full text”[sb] AND “spanish”[la].\n\n(Information collected from 10.1186/s12911-021-01395-z)"
] | [
"TAGS\n#multilinguality-monolingual #language-Spanish #license-cc-by-nc-4.0 #region-us \n",
"# Dataset Card for CT-EBM-SP",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER",
"### Ctebmsp Abstracts\n\nThe \"abstracts\" subset of the Clinical Trials for Evidence-Based Medicine in Spanish\n(CT-EBM-SP) corpus contains 500 abstracts of clinical trial studies in Spanish,\npublished in journals with a Creative Commons license. Most were downloaded from\nthe SciELO repository and free abstracts in PubMed.\n\nAbstracts were retrieved with the query:\nClinical Trial[ptyp] AND “loattrfree full text”[sb] AND “spanish”[la].\n\n(Information collected from 10.1186/s12911-021-01395-z)",
"### Ctebmsp Eudract\n\nThe \"abstracts\" subset of the Clinical Trials for Evidence-Based Medicine in Spanish\n(CT-EBM-SP) corpus contains 500 abstracts of clinical trial studies in Spanish,\npublished in journals with a Creative Commons license. Most were downloaded from\nthe SciELO repository and free abstracts in PubMed.\n\nAbstracts were retrieved with the query:\nClinical Trial[ptyp] AND “loattrfree full text”[sb] AND “spanish”[la].\n\n(Information collected from 10.1186/s12911-021-01395-z)"
] |
da8e94986a0c689095b22bed134248b11f9311c7 |
# Dataset Card for DDI Corpus
## Dataset Description
- **Homepage:** https://github.com/isegura/DDICorpus
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,RE
The DDI corpus has been manually annotated with drugs and pharmacokinetic and pharmacodynamic interactions. It contains 1025 documents from two different sources: the DrugBank database and MedLine.
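A minimal sketch of iterating over the annotated drug pairs is shown below; it assumes the BigBio KB schema, in which entities carry ids and each relation links two of them via `arg1_id` and `arg2_id`. Config and field names are assumptions.
```python
from datasets import load_dataset

ddi = load_dataset("bigbio/ddi_corpus", name="ddi_corpus_bigbio_kb", split="train")

doc = ddi[0]
entities_by_id = {ent["id"]: ent for ent in doc["entities"]}
for rel in doc["relations"]:
    drug1 = entities_by_id[rel["arg1_id"]]["text"][0]
    drug2 = entities_by_id[rel["arg2_id"]]["text"][0]
    print(f"{drug1} --{rel['type']}--> {drug2}")  # interaction types such as effect or mechanism
```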
## Citation Information
```
@article{HERREROZAZO2013914,
title = {
The DDI corpus: An annotated corpus with pharmacological substances and
drug-drug interactions
},
author = {
María Herrero-Zazo and Isabel Segura-Bedmar and Paloma Martínez and Thierry
Declerck
},
year = 2013,
journal = {Journal of Biomedical Informatics},
volume = 46,
number = 5,
pages = {914--920},
doi = {https://doi.org/10.1016/j.jbi.2013.07.011},
issn = {1532-0464},
url = {https://www.sciencedirect.com/science/article/pii/S1532046413001123},
keywords = {Biomedical corpora, Drug interaction, Information extraction}
}
```
| bigbio/ddi_corpus | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-nc-4.0",
"region:us"
] | 2022-11-13T22:08:08+00:00 | {"language": ["en"], "license": "cc-by-nc-4.0", "multilinguality": "monolingual", "pretty_name": "DDI Corpus", "bigbio_language": ["English"], "bigbio_license_shortname": "CC_BY_NC_4p0", "homepage": "https://github.com/isegura/DDICorpus", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "RELATION_EXTRACTION"]} | 2022-12-22T15:44:31+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-cc-by-nc-4.0 #region-us
|
# Dataset Card for DDI Corpus
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NER,RE
The DDI corpus has been manually annotated with drugs and pharmacokinetics and pharmacodynamics interactions. It contains 1025 documents from two different sources: DrugBank database and MedLine.
| [
"# Dataset Card for DDI Corpus",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,RE\n\n\nThe DDI corpus has been manually annotated with drugs and pharmacokinetics and pharmacodynamics interactions. It contains 1025 documents from two different sources: DrugBank database and MedLine."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-cc-by-nc-4.0 #region-us \n",
"# Dataset Card for DDI Corpus",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,RE\n\n\nThe DDI corpus has been manually annotated with drugs and pharmacokinetics and pharmacodynamics interactions. It contains 1025 documents from two different sources: DrugBank database and MedLine."
] |
fcf2be22093d904d325634943912bd0739cacdb1 |
# Dataset Card for DisTEMIST
## Dataset Description
- **Homepage:** https://zenodo.org/record/6671292
- **Pubmed:** False
- **Public:** True
- **Tasks:** NER,NED
The DisTEMIST corpus is a collection of 1000 clinical cases with disease annotations linked to SNOMED CT concepts.
All documents are released in the context of the BioASQ DisTEMIST track for CLEF 2022.
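The sketch below is one plausible way to read the entity-linking annotations: in the BigBio KB schema each entity carries a `normalized` list with the grounded terminology codes. The config name, and the assumption that those codes are SNOMED CT identifiers, should be verified against the loader.
```python
from datasets import load_dataset

distemist = load_dataset("bigbio/distemist", name="distemist_bigbio_kb", split="train")

doc = distemist[0]
for ent in doc["entities"][:5]:
    codes = [norm["db_id"] for norm in ent["normalized"]]  # SNOMED CT concept ids, when linked
    print(ent["text"][0], "->", codes)
```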
## Citation Information
```
@article{miranda2022overview,
title={Overview of DisTEMIST at BioASQ: Automatic detection and normalization of diseases
from clinical texts: results, methods, evaluation and multilingual resources},
author={Miranda-Escalada, Antonio and Gascó, Luis and Lima-López, Salvador and Farré-Maduell,
Eulàlia and Estrada, Darryl and Nentidis, Anastasios and Krithara, Anastasia and Katsimpras,
Georgios and Paliouras, Georgios and Krallinger, Martin},
booktitle={Working Notes of Conference and Labs of the Evaluation (CLEF) Forum.
CEUR Workshop Proceedings},
year={2022}
}
```
| bigbio/distemist | [
"multilinguality:monolingual",
"language:es",
"license:cc-by-4.0",
"region:us"
] | 2022-11-13T22:08:11+00:00 | {"language": ["es"], "license": "cc-by-4.0", "multilinguality": "monolingual", "pretty_name": "DisTEMIST", "bigbio_language": ["Spanish"], "bigbio_license_shortname": "CC_BY_4p0", "homepage": "https://zenodo.org/record/6671292", "bigbio_pubmed": false, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "NAMED_ENTITY_DISAMBIGUATION"]} | 2023-04-01T15:51:57+00:00 | [] | [
"es"
] | TAGS
#multilinguality-monolingual #language-Spanish #license-cc-by-4.0 #region-us
|
# Dataset Card for DisTEMIST
## Dataset Description
- Homepage: URL
- Pubmed: False
- Public: True
- Tasks: NER,NED
The DisTEMIST corpus is a collection of 1000 clinical cases with disease annotations linked with Snomed-CT concepts.
All documents are released in the context of the BioASQ DisTEMIST track for CLEF 2022.
| [
"# Dataset Card for DisTEMIST",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: NER,NED\n\n\nThe DisTEMIST corpus is a collection of 1000 clinical cases with disease annotations linked with Snomed-CT concepts.\nAll documents are released in the context of the BioASQ DisTEMIST track for CLEF 2022."
] | [
"TAGS\n#multilinguality-monolingual #language-Spanish #license-cc-by-4.0 #region-us \n",
"# Dataset Card for DisTEMIST",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: NER,NED\n\n\nThe DisTEMIST corpus is a collection of 1000 clinical cases with disease annotations linked with Snomed-CT concepts.\nAll documents are released in the context of the BioASQ DisTEMIST track for CLEF 2022."
] |
0d5ce09f87c6d144b107438c5ce2b70a9f0b2800 |
# Dataset Card for EBM NLP
## Dataset Description
- **Homepage:** https://github.com/bepnye/EBM-NLP
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER
This corpus release contains 4,993 abstracts annotated with (P)articipants,
(I)nterventions, and (O)utcomes. Training labels are sourced from AMT workers and
aggregated to reduce noise. Test labels are collected from medical professionals.
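To make the PIO annotation concrete, the sketch below groups the annotated spans of one abstract by type. It assumes a BigBio KB-style config in which each entity's `type` encodes Participant, Intervention, or Outcome; the exact type labels may be more fine-grained in the actual loader.
```python
from collections import defaultdict

from datasets import load_dataset

ebm = load_dataset("bigbio/ebm_pico", name="ebm_pico_bigbio_kb", split="train")

doc = ebm[0]
spans_by_type = defaultdict(list)
for ent in doc["entities"]:
    spans_by_type[ent["type"]].append(ent["text"][0])

for pio_type, mentions in spans_by_type.items():
    print(pio_type, "->", mentions[:3])
```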
## Citation Information
```
@inproceedings{nye-etal-2018-corpus,
title = "A Corpus with Multi-Level Annotations of Patients, Interventions and Outcomes to Support Language Processing for Medical Literature",
author = "Nye, Benjamin and
Li, Junyi Jessy and
Patel, Roma and
Yang, Yinfei and
Marshall, Iain and
Nenkova, Ani and
Wallace, Byron",
booktitle = "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2018",
address = "Melbourne, Australia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/P18-1019",
doi = "10.18653/v1/P18-1019",
pages = "197--207",
}
```
| bigbio/ebm_pico | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-11-13T22:08:15+00:00 | {"language": ["en"], "license": "unknown", "multilinguality": "monolingual", "pretty_name": "EBM NLP", "bigbio_language": ["English"], "bigbio_license_shortname": "UNKNOWN", "homepage": "https://github.com/bepnye/EBM-NLP", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION"]} | 2022-12-22T15:44:33+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-unknown #region-us
|
# Dataset Card for EBM NLP
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NER
This corpus release contains 4,993 abstracts annotated with (P)articipants,
(I)nterventions, and (O)utcomes. Training labels are sourced from AMT workers and
aggregated to reduce noise. Test labels are collected from medical professionals.
| [
"# Dataset Card for EBM NLP",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER\n\n\nThis corpus release contains 4,993 abstracts annotated with (P)articipants,\n(I)nterventions, and (O)utcomes. Training labels are sourced from AMT workers and\naggregated to reduce noise. Test labels are collected from medical professionals."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-unknown #region-us \n",
"# Dataset Card for EBM NLP",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER\n\n\nThis corpus release contains 4,993 abstracts annotated with (P)articipants,\n(I)nterventions, and (O)utcomes. Training labels are sourced from AMT workers and\naggregated to reduce noise. Test labels are collected from medical professionals."
] |
8c8977320edb8364f75f5ec12f495371570d9dcc |
# Dataset Card for EHR-Rel
## Dataset Description
- **Homepage:** https://github.com/babylonhealth/EHR-Rel
- **Pubmed:** False
- **Public:** True
- **Tasks:** STS
EHR-Rel is a novel open-source biomedical concept relatedness dataset consisting of 3630 concept pairs, six times more
than the largest existing dataset. Instead of manually selecting and pairing concepts as done in previous work,
the dataset is sampled from EHRs to ensure concepts are relevant for the EHR concept retrieval task.
A detailed analysis of the concepts in the dataset reveals a far larger coverage compared to existing datasets.
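Because the resource is a set of scored concept pairs, it maps naturally onto a sentence-pair view. The sketch below assumes a BigBio `pairs`-style config with `text_1`, `text_2`, and a relatedness `label`; config, split, and field names are assumptions to check against the loader.
```python
from datasets import load_dataset

ehr_rel = load_dataset("bigbio/ehr_rel", name="ehr_rel_bigbio_pairs", split="train")

for pair in ehr_rel.select(range(5)):
    # label holds the human relatedness judgement for the two clinical concepts
    print(pair["text_1"], "<->", pair["text_2"], "=", pair["label"])
```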
## Citation Information
```
@inproceedings{schulz-etal-2020-biomedical,
title = {Biomedical Concept Relatedness {--} A large {EHR}-based benchmark},
author = {Schulz, Claudia and
Levy-Kramer, Josh and
Van Assel, Camille and
Kepes, Miklos and
Hammerla, Nils},
booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
month = {dec},
year = {2020},
address = {Barcelona, Spain (Online)},
publisher = {International Committee on Computational Linguistics},
url = {https://aclanthology.org/2020.coling-main.577},
doi = {10.18653/v1/2020.coling-main.577},
pages = {6565--6575},
}
```
| bigbio/ehr_rel | [
"multilinguality:monolingual",
"language:en",
"license:apache-2.0",
"region:us"
] | 2022-11-13T22:08:18+00:00 | {"language": ["en"], "license": "apache-2.0", "multilinguality": "monolingual", "pretty_name": "EHR-Rel", "bigbio_language": ["English"], "bigbio_license_shortname": "APACHE_2p0", "homepage": "https://github.com/babylonhealth/EHR-Rel", "bigbio_pubmed": false, "bigbio_public": true, "bigbio_tasks": ["SEMANTIC_SIMILARITY"]} | 2022-12-22T15:44:34+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-apache-2.0 #region-us
|
# Dataset Card for EHR-Rel
## Dataset Description
- Homepage: URL
- Pubmed: False
- Public: True
- Tasks: STS
EHR-Rel is a novel open-source1 biomedical concept relatedness dataset consisting of 3630 concept pairs, six times more
than the largest existing dataset. Instead of manually selecting and pairing concepts as done in previous work,
the dataset is sampled from EHRs to ensure concepts are relevant for the EHR concept retrieval task.
A detailed analysis of the concepts in the dataset reveals a far larger coverage compared to existing datasets.
| [
"# Dataset Card for EHR-Rel",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: STS\n\n\nEHR-Rel is a novel open-source1 biomedical concept relatedness dataset consisting of 3630 concept pairs, six times more\nthan the largest existing dataset. Instead of manually selecting and pairing concepts as done in previous work,\nthe dataset is sampled from EHRs to ensure concepts are relevant for the EHR concept retrieval task.\nA detailed analysis of the concepts in the dataset reveals a far larger coverage compared to existing datasets."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-apache-2.0 #region-us \n",
"# Dataset Card for EHR-Rel",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: STS\n\n\nEHR-Rel is a novel open-source1 biomedical concept relatedness dataset consisting of 3630 concept pairs, six times more\nthan the largest existing dataset. Instead of manually selecting and pairing concepts as done in previous work,\nthe dataset is sampled from EHRs to ensure concepts are relevant for the EHR concept retrieval task.\nA detailed analysis of the concepts in the dataset reveals a far larger coverage compared to existing datasets."
] |
ffb662c6f9b930215622cf156a83090b74d28472 |
# Dataset Card for ESSAI
## Dataset Description
- **Homepage:** https://clementdalloux.fr/?page_id=28
- **Pubmed:** False
- **Public:** False
- **Tasks:** TXTCLASS
We manually annotated two corpora from the biomedical field. The ESSAI corpus contains clinical trial protocols in French. They were mainly obtained from the National Cancer Institute. The typical protocol consists of two parts: the summary of the trial, which indicates the purpose of the trial and the methods applied; and a detailed description of the trial with the inclusion and exclusion criteria. The CAS corpus contains clinical cases published in scientific literature and training material. They are published in different journals from French-speaking countries (France, Belgium, Switzerland, Canada, African countries, tropical countries) and are related to various medical specialties (cardiology, urology, oncology, obstetrics, pulmonology, gastro-enterology). The purpose of clinical cases is to describe clinical situations of patients. Hence, their content is close to the content of clinical narratives (description of diagnoses, treatments or procedures, evolution, family history, expected audience, etc.). In clinical cases, negation is frequently used to describe the patient's signs, symptoms, and diagnosis. Speculation is present as well but less frequently.
This version only contains the annotated ESSAI corpus.
## Citation Information
```
@misc{dalloux, title={Datasets – Clément Dalloux}, url={http://clementdalloux.fr/?page_id=28}, journal={Clément Dalloux}, author={Dalloux, Clément}}
```
| bigbio/essai | [
"multilinguality:monolingual",
"language:fr",
"license:other",
"region:us"
] | 2022-11-13T22:08:22+00:00 | {"language": ["fr"], "license": "other", "multilinguality": "monolingual", "pretty_name": "ESSAI", "bigbio_language": ["French"], "bigbio_license_shortname": "DUA", "homepage": "https://clementdalloux.fr/?page_id=28", "bigbio_pubmed": false, "bigbio_public": false, "bigbio_tasks": ["TEXT_CLASSIFICATION"]} | 2022-12-22T15:44:35+00:00 | [] | [
"fr"
] | TAGS
#multilinguality-monolingual #language-French #license-other #region-us
|
# Dataset Card for ESSAI
## Dataset Description
- Homepage: URL
- Pubmed: False
- Public: False
- Tasks: TXTCLASS
We manually annotated two corpora from the biomedical field. The ESSAI corpus contains clinical trial protocols in French. They were mainly obtained from the National Cancer Institute The typical protocol consists of two parts: the summary of the trial, which indicates the purpose of the trial and the methods applied; and a detailed description of the trial with the inclusion and exclusion criteria. The CAS corpus contains clinical cases published in scientific literature and training material. They are published in different journals from French-speaking countries (France, Belgium, Switzerland, Canada, African countries, tropical countries) and are related to various medical specialties (cardiology, urology, oncology, obstetrics, pulmonology, gastro-enterology). The purpose of clinical cases is to describe clinical situations of patients. Hence, their content is close to the content of clinical narratives (description of diagnoses, treatments or procedures, evolution, family history, expected audience, etc.). In clinical cases, the negation is frequently used for describing the patient signs, symptoms, and diagnosis. Speculation is present as well but less frequently.
This version only contain the annotated ESSAI corpus
| [
"# Dataset Card for ESSAI",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: False\n- Tasks: TXTCLASS\n\n\nWe manually annotated two corpora from the biomedical field. The ESSAI corpus contains clinical trial protocols in French. They were mainly obtained from the National Cancer Institute The typical protocol consists of two parts: the summary of the trial, which indicates the purpose of the trial and the methods applied; and a detailed description of the trial with the inclusion and exclusion criteria. The CAS corpus contains clinical cases published in scientific literature and training material. They are published in different journals from French-speaking countries (France, Belgium, Switzerland, Canada, African countries, tropical countries) and are related to various medical specialties (cardiology, urology, oncology, obstetrics, pulmonology, gastro-enterology). The purpose of clinical cases is to describe clinical situations of patients. Hence, their content is close to the content of clinical narratives (description of diagnoses, treatments or procedures, evolution, family history, expected audience, etc.). In clinical cases, the negation is frequently used for describing the patient signs, symptoms, and diagnosis. Speculation is present as well but less frequently.\n\nThis version only contain the annotated ESSAI corpus"
] | [
"TAGS\n#multilinguality-monolingual #language-French #license-other #region-us \n",
"# Dataset Card for ESSAI",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: False\n- Tasks: TXTCLASS\n\n\nWe manually annotated two corpora from the biomedical field. The ESSAI corpus contains clinical trial protocols in French. They were mainly obtained from the National Cancer Institute The typical protocol consists of two parts: the summary of the trial, which indicates the purpose of the trial and the methods applied; and a detailed description of the trial with the inclusion and exclusion criteria. The CAS corpus contains clinical cases published in scientific literature and training material. They are published in different journals from French-speaking countries (France, Belgium, Switzerland, Canada, African countries, tropical countries) and are related to various medical specialties (cardiology, urology, oncology, obstetrics, pulmonology, gastro-enterology). The purpose of clinical cases is to describe clinical situations of patients. Hence, their content is close to the content of clinical narratives (description of diagnoses, treatments or procedures, evolution, family history, expected audience, etc.). In clinical cases, the negation is frequently used for describing the patient signs, symptoms, and diagnosis. Speculation is present as well but less frequently.\n\nThis version only contain the annotated ESSAI corpus"
] |
da142461b82afb73b7bad03695c61d61a412de4f |
# Dataset Card for EU-ADR
## Dataset Description
- **Homepage:** https://www.sciencedirect.com/science/article/pii/S1532046412000573
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,RE
Corpora with specific entities and relationships annotated are essential to train and evaluate text-mining systems that are developed to extract specific structured information from a large corpus. In this paper we describe an approach where a named-entity recognition system produces a first annotation and annotators revise this annotation using a web-based interface. The agreement figures achieved show that the inter-annotator agreement is much better than the agreement with the system provided annotations. The corpus has been annotated for drugs, disorders, genes and their inter-relationships. For each of the drug-disorder, drug-target, and target-disorder relations three experts have annotated a set of 100 abstracts. These annotated relationships will be used to train and evaluate text-mining software to capture these relationships in texts.
## Citation Information
```
@article{VANMULLIGEN2012879,
title = {The EU-ADR corpus: Annotated drugs, diseases, targets, and their relationships},
journal = {Journal of Biomedical Informatics},
volume = {45},
number = {5},
pages = {879-884},
year = {2012},
note = {Text Mining and Natural Language Processing in Pharmacogenomics},
issn = {1532-0464},
doi = {https://doi.org/10.1016/j.jbi.2012.04.004},
url = {https://www.sciencedirect.com/science/article/pii/S1532046412000573},
author = {Erik M. {van Mulligen} and Annie Fourrier-Reglat and David Gurwitz and Mariam Molokhia and Ainhoa Nieto and Gianluca Trifiro and Jan A. Kors and Laura I. Furlong},
keywords = {Text mining, Corpus development, Machine learning, Adverse drug reactions},
abstract = {Corpora with specific entities and relationships annotated are essential to train and evaluate text-mining systems that are developed to extract specific structured information from a large corpus. In this paper we describe an approach where a named-entity recognition system produces a first annotation and annotators revise this annotation using a web-based interface. The agreement figures achieved show that the inter-annotator agreement is much better than the agreement with the system provided annotations. The corpus has been annotated for drugs, disorders, genes and their inter-relationships. For each of the drug–disorder, drug–target, and target–disorder relations three experts have annotated a set of 100 abstracts. These annotated relationships will be used to train and evaluate text-mining software to capture these relationships in texts.}
}
```
| bigbio/euadr | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-11-13T22:08:25+00:00 | {"language": ["en"], "license": "unknown", "multilinguality": "monolingual", "pretty_name": "EU-ADR", "bigbio_language": ["English"], "bigbio_license_shortname": "UNKNOWN", "homepage": "https://www.sciencedirect.com/science/article/pii/S1532046412000573", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "RELATION_EXTRACTION"]} | 2022-12-22T15:44:36+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-unknown #region-us
|
# Dataset Card for EU-ADR
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NER,RE
Corpora with specific entities and relationships annotated are essential to train and evaluate text-mining systems that are developed to extract specific structured information from a large corpus. In this paper we describe an approach where a named-entity recognition system produces a first annotation and annotators revise this annotation using a web-based interface. The agreement figures achieved show that the inter-annotator agreement is much better than the agreement with the system provided annotations. The corpus has been annotated for drugs, disorders, genes and their inter-relationships. For each of the drug-disorder, drug-target, and target-disorder relations three experts have annotated a set of 100 abstracts. These annotated relationships will be used to train and evaluate text-mining software to capture these relationships in texts.
| [
"# Dataset Card for EU-ADR",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,RE\n\n\nCorpora with specific entities and relationships annotated are essential to train and evaluate text-mining systems that are developed to extract specific structured information from a large corpus. In this paper we describe an approach where a named-entity recognition system produces a first annotation and annotators revise this annotation using a web-based interface. The agreement figures achieved show that the inter-annotator agreement is much better than the agreement with the system provided annotations. The corpus has been annotated for drugs, disorders, genes and their inter-relationships. For each of the drug-disorder, drug-target, and target-disorder relations three experts have annotated a set of 100 abstracts. These annotated relationships will be used to train and evaluate text-mining software to capture these relationships in texts."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-unknown #region-us \n",
"# Dataset Card for EU-ADR",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,RE\n\n\nCorpora with specific entities and relationships annotated are essential to train and evaluate text-mining systems that are developed to extract specific structured information from a large corpus. In this paper we describe an approach where a named-entity recognition system produces a first annotation and annotators revise this annotation using a web-based interface. The agreement figures achieved show that the inter-annotator agreement is much better than the agreement with the system provided annotations. The corpus has been annotated for drugs, disorders, genes and their inter-relationships. For each of the drug-disorder, drug-target, and target-disorder relations three experts have annotated a set of 100 abstracts. These annotated relationships will be used to train and evaluate text-mining software to capture these relationships in texts."
] |
35dce6aba1b3eb9eb9af9bdc38ebeda73dad15b9 |
# Dataset Card for Evidence Inference 2.0
## Dataset Description
- **Homepage:** https://github.com/jayded/evidence-inference
- **Pubmed:** True
- **Public:** True
- **Tasks:** QA
The dataset consists of biomedical articles describing randomized control trials (RCTs) that compare multiple
treatments. Each of these articles will have multiple questions, or 'prompts' associated with them.
These prompts will ask about the relationship between an intervention and comparator with respect to an outcome,
as reported in the trial. For example, a prompt may ask about the reported effects of aspirin as compared
to placebo on the duration of headaches. For the sake of this task, we assume that a particular article
will report that the intervention of interest either significantly increased, significantly decreased
or had no significant effect on the outcome, relative to the comparator.
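In harmonized form this is exposed as a question-answering style task, where each prompt names the intervention, comparator, and outcome, and the answer is the reported significance class. The sketch below assumes a BigBio QA config and its field names; both are assumptions.
```python
from datasets import load_dataset

evinf = load_dataset("bigbio/evidence_inference", name="evidence_inference_bigbio_qa", split="train")

example = evinf[0]
print("Question:", example["question"])  # intervention vs. comparator with respect to an outcome
print("Choices:", example["choices"])    # e.g. significantly increased / decreased / no significant difference
print("Answer:", example["answer"])
```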
## Citation Information
```
@inproceedings{deyoung-etal-2020-evidence,
title = "Evidence Inference 2.0: More Data, Better Models",
author = "DeYoung, Jay and
Lehman, Eric and
Nye, Benjamin and
Marshall, Iain and
Wallace, Byron C.",
booktitle = "Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.bionlp-1.13",
pages = "123--132",
}
```
| bigbio/evidence_inference | [
"multilinguality:monolingual",
"language:en",
"license:mit",
"region:us"
] | 2022-11-13T22:08:29+00:00 | {"language": ["en"], "license": "mit", "multilinguality": "monolingual", "pretty_name": "Evidence Inference 2.0", "bigbio_language": ["English"], "bigbio_license_shortname": "MIT", "homepage": "https://github.com/jayded/evidence-inference", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["QUESTION_ANSWERING"]} | 2022-12-22T15:44:37+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-mit #region-us
|
# Dataset Card for Evidence Inference 2.0
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: QA
The dataset consists of biomedical articles describing randomized control trials (RCTs) that compare multiple
treatments. Each of these articles will have multiple questions, or 'prompts' associated with them.
These prompts will ask about the relationship between an intervention and comparator with respect to an outcome,
as reported in the trial. For example, a prompt may ask about the reported effects of aspirin as compared
to placebo on the duration of headaches. For the sake of this task, we assume that a particular article
will report that the intervention of interest either significantly increased, significantly decreased
or had significant effect on the outcome, relative to the comparator.
| [
"# Dataset Card for Evidence Inference 2.0",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: QA\n\n\nThe dataset consists of biomedical articles describing randomized control trials (RCTs) that compare multiple\ntreatments. Each of these articles will have multiple questions, or 'prompts' associated with them.\nThese prompts will ask about the relationship between an intervention and comparator with respect to an outcome,\nas reported in the trial. For example, a prompt may ask about the reported effects of aspirin as compared\nto placebo on the duration of headaches. For the sake of this task, we assume that a particular article\nwill report that the intervention of interest either significantly increased, significantly decreased\nor had significant effect on the outcome, relative to the comparator."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-mit #region-us \n",
"# Dataset Card for Evidence Inference 2.0",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: QA\n\n\nThe dataset consists of biomedical articles describing randomized control trials (RCTs) that compare multiple\ntreatments. Each of these articles will have multiple questions, or 'prompts' associated with them.\nThese prompts will ask about the relationship between an intervention and comparator with respect to an outcome,\nas reported in the trial. For example, a prompt may ask about the reported effects of aspirin as compared\nto placebo on the duration of headaches. For the sake of this task, we assume that a particular article\nwill report that the intervention of interest either significantly increased, significantly decreased\nor had significant effect on the outcome, relative to the comparator."
] |
664a5e5007c3a4e51d244c289f211013d529658d |
# Dataset Card for GENETAG
## Dataset Description
- **Homepage:** https://github.com/openbiocorpora/genetag
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER
Named entity recognition (NER) is an important first step for text mining the biomedical literature.
Evaluating the performance of biomedical NER systems is impossible without a standardized test corpus.
The annotation of such a corpus for gene/protein name NER is a difficult process due to the complexity
of gene/protein names. We describe the construction and annotation of GENETAG, a corpus of 20K MEDLINE®
sentences for gene/protein NER. 15K GENETAG sentences were used for the BioCreAtIvE Task 1A Competition.
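The sketch below prints the annotated gene/protein mentions of one sentence using character offsets; the config name is a guess, and offsets are assumed to index into the passage text.
```python
from datasets import load_dataset

# Config name is an assumption; inspect the loader for the exact variants offered.
genetag = load_dataset("bigbio/genetag", name="genetaggold_bigbio_kb", split="train")

doc = genetag[0]
sentence = doc["passages"][0]["text"][0]
for ent in doc["entities"]:
    start, end = ent["offsets"][0]  # assumed character offsets into the sentence
    print("gene/protein mention:", sentence[start:end])
```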
## Citation Information
```
@article{Tanabe2005,
author = {Lorraine Tanabe and Natalie Xie and Lynne H Thom and Wayne Matten and W John Wilbur},
title = {{GENETAG}: a tagged corpus for gene/protein named entity recognition},
journal = {{BMC} Bioinformatics},
volume = {6},
year = {2005},
url = {https://doi.org/10.1186/1471-2105-6-S1-S3},
doi = {10.1186/1471-2105-6-s1-s3},
biburl = {},
bibsource = {}
}
```
| bigbio/genetag | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-11-13T22:08:32+00:00 | {"language": ["en"], "license": "other", "multilinguality": "monolingual", "pretty_name": "GENETAG", "bigbio_language": ["English"], "bigbio_license_shortname": "NCBI_LICENSE", "homepage": "https://github.com/openbiocorpora/genetag", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION"]} | 2022-12-22T15:44:38+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-other #region-us
|
# Dataset Card for GENETAG
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NER
Named entity recognition (NER) is an important first step for text mining the biomedical literature.
Evaluating the performance of biomedical NER systems is impossible without a standardized test corpus.
The annotation of such a corpus for gene/protein name NER is a difficult process due to the complexity
of gene/protein names. We describe the construction and annotation of GENETAG, a corpus of 20K MEDLINE®
sentences for gene/protein NER. 15K GENETAG sentences were used for the BioCreAtIvE Task 1A Competition..
| [
"# Dataset Card for GENETAG",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER\n\n\nNamed entity recognition (NER) is an important first step for text mining the biomedical literature.\nEvaluating the performance of biomedical NER systems is impossible without a standardized test corpus.\nThe annotation of such a corpus for gene/protein name NER is a difficult process due to the complexity\nof gene/protein names. We describe the construction and annotation of GENETAG, a corpus of 20K MEDLINE®\nsentences for gene/protein NER. 15K GENETAG sentences were used for the BioCreAtIvE Task 1A Competition.."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-other #region-us \n",
"# Dataset Card for GENETAG",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER\n\n\nNamed entity recognition (NER) is an important first step for text mining the biomedical literature.\nEvaluating the performance of biomedical NER systems is impossible without a standardized test corpus.\nThe annotation of such a corpus for gene/protein name NER is a difficult process due to the complexity\nof gene/protein names. We describe the construction and annotation of GENETAG, a corpus of 20K MEDLINE®\nsentences for gene/protein NER. 15K GENETAG sentences were used for the BioCreAtIvE Task 1A Competition.."
] |
3e15094fc56ec0b1feced8aa228eff3c55c54740 |
# Dataset Card for PTM Events
## Dataset Description
- **Homepage:** http://www.geniaproject.org/other-corpora/ptm-event-corpus
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,COREF,EE
Post-translational-modifications (PTM), amino acid modifications of proteins after translation, are one of the posterior processes of protein biosynthesis for many proteins, and they are critical for determining protein function such as its activity state, localization, turnover and interactions with other biomolecules. While there have been many studies of information extraction targeting individual PTM types, there was until recently little effort to address extraction of multiple PTM types at once in a unified framework.
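A minimal sketch of reading the event annotations is given below, assuming the BigBio KB schema in which each event has a typed trigger span and a list of role-labelled arguments; config and field names are assumptions.
```python
from datasets import load_dataset

ptm = load_dataset("bigbio/genia_ptm_event_corpus", name="genia_ptm_event_corpus_bigbio_kb", split="train")

doc = ptm[0]
for event in doc["events"]:
    trigger = event["trigger"]["text"][0]
    arguments = [(arg["role"], arg["ref_id"]) for arg in event["arguments"]]
    print(event["type"], "|", trigger, "|", arguments)  # e.g. Phosphorylation with Theme/Site roles
```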
## Citation Information
```
@inproceedings{ohta-etal-2010-event,
title = "Event Extraction for Post-Translational Modifications",
author = "Ohta, Tomoko and
Pyysalo, Sampo and
Miwa, Makoto and
Kim, Jin-Dong and
Tsujii, Jun{'}ichi",
booktitle = "Proceedings of the 2010 Workshop on Biomedical Natural Language Processing",
month = jul,
year = "2010",
address = "Uppsala, Sweden",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W10-1903",
pages = "19--27",
}
```
| bigbio/genia_ptm_event_corpus | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-11-13T22:08:36+00:00 | {"language": ["en"], "license": "other", "multilinguality": "monolingual", "pretty_name": "PTM Events", "bigbio_language": ["English"], "bigbio_license_shortname": "GENIA_PROJECT_LICENSE", "homepage": "http://www.geniaproject.org/other-corpora/ptm-event-corpus", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "COREFERENCE_RESOLUTION", "EVENT_EXTRACTION"]} | 2022-12-22T15:44:39+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-other #region-us
|
# Dataset Card for PTM Events
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NER,COREF,EE
Post-translational-modifications (PTM), amino acid modifications of proteins after translation, are one of the posterior processes of protein biosynthesis for many proteins, and they are critical for determining protein function such as its activity state, localization, turnover and interactions with other biomolecules. While there have been many studies of information extraction targeting individual PTM types, there was until recently little effort to address extraction of multiple PTM types at once in a unified framework.
| [
"# Dataset Card for PTM Events",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,COREF,EE\n\n\nPost-translational-modifications (PTM), amino acid modifications of proteins after translation, are one of the posterior processes of protein biosynthesis for many proteins, and they are critical for determining protein function such as its activity state, localization, turnover and interactions with other biomolecules. While there have been many studies of information extraction targeting individual PTM types, there was until recently little effort to address extraction of multiple PTM types at once in a unified framework."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-other #region-us \n",
"# Dataset Card for PTM Events",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,COREF,EE\n\n\nPost-translational-modifications (PTM), amino acid modifications of proteins after translation, are one of the posterior processes of protein biosynthesis for many proteins, and they are critical for determining protein function such as its activity state, localization, turnover and interactions with other biomolecules. While there have been many studies of information extraction targeting individual PTM types, there was until recently little effort to address extraction of multiple PTM types at once in a unified framework."
] |
03ed6ae9c7ea27ae2b430f71ea7a74853bd2fb5e |
# Dataset Card for GENIA Relation Corpus
## Dataset Description
- **Homepage:** http://www.geniaproject.org/genia-corpus/relation-corpus
- **Pubmed:** True
- **Public:** True
- **Tasks:** RE
The extraction of various relations stated to hold between biomolecular entities is one of the most frequently
addressed information extraction tasks in domain studies. Typical relation extraction targets involve protein-protein
interactions or gene regulatory relations. However, in the GENIA corpus, such associations involving change in the
state or properties of biomolecules are captured in the event annotation.
The GENIA corpus relation annotation aims to complement the event annotation of the corpus by capturing (primarily)
static relations, relations such as part-of that hold between entities without (necessarily) involving change.
## Citation Information
```
@inproceedings{pyysalo-etal-2009-static,
title = "Static Relations: a Piece in the Biomedical Information Extraction Puzzle",
author = "Pyysalo, Sampo and
Ohta, Tomoko and
Kim, Jin-Dong and
Tsujii, Jun{'}ichi",
booktitle = "Proceedings of the {B}io{NLP} 2009 Workshop",
month = jun,
year = "2009",
address = "Boulder, Colorado",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/W09-1301",
pages = "1--9",
}
@article{article,
author = {Ohta, Tomoko and Pyysalo, Sampo and Kim, Jin-Dong and Tsujii, Jun'ichi},
year = {2010},
month = {10},
pages = {917-28},
title = {A reevaluation of biomedical named entity - term relations},
volume = {8},
journal = {Journal of bioinformatics and computational biology},
doi = {10.1142/S0219720010005014}
}
@MISC{Hoehndorf_applyingontology,
author = {Robert Hoehndorf and Axel-cyrille Ngonga Ngomo and Sampo Pyysalo and Tomoko Ohta and Anika Oellrich and
Dietrich Rebholz-schuhmann},
title = {Applying ontology design patterns to the implementation of relations in GENIA},
year = {}
}
```
| bigbio/genia_relation_corpus | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-11-13T22:08:39+00:00 | {"language": ["en"], "license": "other", "multilinguality": "monolingual", "pretty_name": "GENIA Relation Corpus", "bigbio_language": ["English"], "bigbio_license_shortname": "GENIA_PROJECT_LICENSE", "homepage": "http://www.geniaproject.org/genia-corpus/relation-corpus", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["RELATION_EXTRACTION"]} | 2022-12-22T15:44:40+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-other #region-us
|
# Dataset Card for GENIA Relation Corpus
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: RE
The extraction of various relations stated to hold between biomolecular entities is one of the most frequently
addressed information extraction tasks in domain studies. Typical relation extraction targets involve protein-protein
interactions or gene regulatory relations. However, in the GENIA corpus, such associations involving change in the
state or properties of biomolecules are captured in the event annotation.
The GENIA corpus relation annotation aims to complement the event annotation of the corpus by capturing (primarily)
static relations, relations such as part-of that hold between entities without (necessarily) involving change.
| [
"# Dataset Card for GENIA Relation Corpus",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: RE\n\n\nThe extraction of various relations stated to hold between biomolecular entities is one of the most frequently\naddressed information extraction tasks in domain studies. Typical relation extraction targets involve protein-protein\ninteractions or gene regulatory relations. However, in the GENIA corpus, such associations involving change in the\nstate or properties of biomolecules are captured in the event annotation.\n\nThe GENIA corpus relation annotation aims to complement the event annotation of the corpus by capturing (primarily)\nstatic relations, relations such as part-of that hold between entities without (necessarily) involving change."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-other #region-us \n",
"# Dataset Card for GENIA Relation Corpus",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: RE\n\n\nThe extraction of various relations stated to hold between biomolecular entities is one of the most frequently\naddressed information extraction tasks in domain studies. Typical relation extraction targets involve protein-protein\ninteractions or gene regulatory relations. However, in the GENIA corpus, such associations involving change in the\nstate or properties of biomolecules are captured in the event annotation.\n\nThe GENIA corpus relation annotation aims to complement the event annotation of the corpus by capturing (primarily)\nstatic relations, relations such as part-of that hold between entities without (necessarily) involving change."
] |
c556529b5f8b5e4ffe2c10f23209e70c4390f3e1 |
# Dataset Card for GENIA Term Corpus
## Dataset Description
- **Homepage:** http://www.geniaproject.org/genia-corpus/term-corpus
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER
The identification of linguistic expressions referring to entities of interest in molecular biology such as proteins,
genes and cells is a fundamental task in biomolecular text mining. The GENIA technical term annotation covers the
identification of physical biological entities as well as other important terms. The corpus annotation covers the full
1,999 abstracts of the primary GENIA corpus.
## Citation Information
```
@inproceedings{10.5555/1289189.1289260,
author = {Ohta, Tomoko and Tateisi, Yuka and Kim, Jin-Dong},
title = {The GENIA Corpus: An Annotated Research Abstract Corpus in Molecular Biology Domain},
year = {2002},
publisher = {Morgan Kaufmann Publishers Inc.},
address = {San Francisco, CA, USA},
booktitle = {Proceedings of the Second International Conference on Human Language Technology Research},
pages = {82–86},
numpages = {5},
location = {San Diego, California},
series = {HLT '02}
}
@article{Kim2003GENIAC,
title={GENIA corpus - a semantically annotated corpus for bio-textmining},
author={Jin-Dong Kim and Tomoko Ohta and Yuka Tateisi and Junichi Tsujii},
journal={Bioinformatics},
year={2003},
volume={19 Suppl 1},
pages={i180-2}
}
@inproceedings{10.5555/1567594.1567610,
author = {Kim, Jin-Dong and Ohta, Tomoko and Tsuruoka, Yoshimasa and Tateisi, Yuka and Collier, Nigel},
title = {Introduction to the Bio-Entity Recognition Task at JNLPBA},
year = {2004},
publisher = {Association for Computational Linguistics},
address = {USA},
booktitle = {Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and Its
Applications},
pages = {70–75},
numpages = {6},
location = {Geneva, Switzerland},
series = {JNLPBA '04}
}
```
| bigbio/genia_term_corpus | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
] | 2022-11-13T22:08:43+00:00 | {"language": ["en"], "license": "other", "multilinguality": "monolingual", "pretty_name": "GENIA Term Corpus", "bigbio_language": ["English"], "bigbio_license_shortname": "GENIA_PROJECT_LICENSE", "homepage": "http://www.geniaproject.org/genia-corpus/term-corpus", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION"]} | 2022-12-22T15:44:41+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-other #region-us
|
# Dataset Card for GENIA Term Corpus
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NER
The identification of linguistic expressions referring to entities of interest in molecular biology such as proteins,
genes and cells is a fundamental task in biomolecular text mining. The GENIA technical term annotation covers the
identification of physical biological entities as well as other important terms. The corpus annotation covers the full
1,999 abstracts of the primary GENIA corpus.
| [
"# Dataset Card for GENIA Term Corpus",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER\n\n\nThe identification of linguistic expressions referring to entities of interest in molecular biology such as proteins,\ngenes and cells is a fundamental task in biomolecular text mining. The GENIA technical term annotation covers the\nidentification of physical biological entities as well as other important terms. The corpus annotation covers the full\n1,999 abstracts of the primary GENIA corpus."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-other #region-us \n",
"# Dataset Card for GENIA Term Corpus",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER\n\n\nThe identification of linguistic expressions referring to entities of interest in molecular biology such as proteins,\ngenes and cells is a fundamental task in biomolecular text mining. The GENIA technical term annotation covers the\nidentification of physical biological entities as well as other important terms. The corpus annotation covers the full\n1,999 abstracts of the primary GENIA corpus."
] |
5256724fced919bd69ec2f9d6f2e7822a59b5f81 |
# Dataset Card for GEOKhoj v1
## Dataset Description
- **Homepage:** https://github.com/ElucidataInc/GEOKhoj-datasets/tree/main/geokhoj_v1
- **Pubmed:** False
- **Public:** True
- **Tasks:** TXTCLASS
GEOKhoj v1 is an annotated corpus of control/perturbation labels for 30,000 samples
from Microarray, Transcriptomics and Single cell experiments which are available on
the GEO (Gene Expression Omnibus) database.
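As a rough usage sketch, the corpus can be pulled in through the Hugging Face `datasets` library. The configuration and split names below follow the usual BigBIO conventions (`<name>_bigbio_text`) and are assumptions for illustration, not guarantees made by this card:
```python
# Hedged example: the config name follows the BigBIO pattern "<name>_bigbio_text"
# and the "train" split is assumed; check the loader for the exact names.
from collections import Counter
from datasets import load_dataset

ds = load_dataset("bigbio/geokhoj_v1", name="geokhoj_v1_bigbio_text")

# In the BigBIO text-classification schema each example carries a "text"
# field and a list of "labels" (control / perturbation for this corpus).
label_counts = Counter(
    label for example in ds["train"] for label in example["labels"]
)
print(label_counts)
```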
## Citation Information
```
@misc{geokhoj_v1,
author = {Elucidata, Inc.},
title = {GEOKhoj v1},
howpublished = {\url{https://github.com/ElucidataInc/GEOKhoj-datasets/tree/main/geokhoj_v1}},
}
```
| bigbio/geokhoj_v1 | [
"multilinguality:monolingual",
"language:en",
"license:cc-by-nc-4.0",
"region:us"
] | 2022-11-13T22:08:46+00:00 | {"language": ["en"], "license": "cc-by-nc-4.0", "multilinguality": "monolingual", "pretty_name": "GEOKhoj v1", "bigbio_language": ["English"], "bigbio_license_shortname": "CC_BY_NC_4p0", "homepage": "https://github.com/ElucidataInc/GEOKhoj-datasets/tree/main/geokhoj_v1", "bigbio_pubmed": false, "bigbio_public": true, "bigbio_tasks": ["TEXT_CLASSIFICATION"]} | 2022-12-22T15:44:42+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-cc-by-nc-4.0 #region-us
|
# Dataset Card for GEOKhoj v1
## Dataset Description
- Homepage: URL
- Pubmed: False
- Public: True
- Tasks: TXTCLASS
GEOKhoj v1 is an annotated corpus of control/perturbation labels for 30,000 samples
from Microarray, Transcriptomics and Single cell experiments which are available on
the GEO (Gene Expression Omnibus) database.
| [
"# Dataset Card for GEOKhoj v1",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: TXTCLASS\n\n\nGEOKhoj v1 is a annotated corpus of control/perturbation labels for 30,000 samples\nfrom Microarray, Transcriptomics and Single cell experiments which are available on\nthe GEO (Gene Expression Omnibus) database"
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-cc-by-nc-4.0 #region-us \n",
"# Dataset Card for GEOKhoj v1",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: False\n- Public: True\n- Tasks: TXTCLASS\n\n\nGEOKhoj v1 is a annotated corpus of control/perturbation labels for 30,000 samples\nfrom Microarray, Transcriptomics and Single cell experiments which are available on\nthe GEO (Gene Expression Omnibus) database"
] |
0be64eff331d4951c8d04347c711bca08c715f39 |
# Dataset Card for GNormPlus
## Dataset Description
- **Homepage:** https://www.ncbi.nlm.nih.gov/research/bionlp/Tools/gnormplus/
- **Pubmed:** True
- **Public:** True
- **Tasks:** NER,NED
We re-annotated two existing gene corpora. The BioCreative II GN corpus is a widely used data set for benchmarking GN
tools and includes document-level annotations for a total of 543 articles (281 in its training set; and 262 in test).
The Citation GIA Test Collection was recently created for gene indexing at the NLM and includes 151 PubMed abstracts
with both mention-level and document-level annotations. They are selected because both have a focus on human genes.
For both corpora, we added annotations of gene families and protein domains. For the BioCreative GN corpus, we also
added mention-level gene annotations. As a result, in our new corpus, there are a total of 694 PubMed articles.
PubTator was used as our annotation tool along with BioC formats.
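For orientation, PubTator-formatted releases pair each title/abstract with tab-separated mention lines. The parser below is a minimal, illustrative sketch: the file name is hypothetical and the column layout is the common PubTator convention rather than anything guaranteed by this card:
```python
# Illustrative PubTator-style parser. Lines look like:
#   PMID|t|Title text
#   PMID|a|Abstract text
#   PMID <TAB> start <TAB> end <TAB> mention <TAB> type <TAB> normalized id
from collections import defaultdict

def parse_pubtator(path):
    docs = defaultdict(lambda: {"title": "", "abstract": "", "mentions": []})
    with open(path, encoding="utf-8") as handle:
        for raw in handle:
            line = raw.rstrip("\n")
            if not line:
                continue
            if "|t|" in line:
                pmid, _, title = line.split("|", 2)
                docs[pmid]["title"] = title
            elif "|a|" in line:
                pmid, _, abstract = line.split("|", 2)
                docs[pmid]["abstract"] = abstract
            else:
                parts = line.split("\t")
                pmid, start, end, mention, mention_type = parts[:5]
                docs[pmid]["mentions"].append({
                    "start": int(start), "end": int(end),
                    "text": mention, "type": mention_type,
                    "id": parts[5] if len(parts) > 5 else None,
                })
    return dict(docs)

# corpus = parse_pubtator("GNormPlusCorpus.PubTator.txt")  # hypothetical file name
```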
## Citation Information
```
@Article{Wei2015,
author={Wei, Chih-Hsuan and Kao, Hung-Yu and Lu, Zhiyong},
title={GNormPlus: An Integrative Approach for Tagging Genes, Gene Families, and Protein Domains},
journal={BioMed Research International},
year={2015},
month={Aug},
day={25},
publisher={Hindawi Publishing Corporation},
volume={2015},
pages={918710},
issn={2314-6133},
doi={10.1155/2015/918710},
url={https://doi.org/10.1155/2015/918710}
}
```
| bigbio/gnormplus | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-11-13T22:08:50+00:00 | {"language": ["en"], "license": "unknown", "multilinguality": "monolingual", "pretty_name": "GNormPlus", "bigbio_language": ["English"], "bigbio_license_shortname": "UNKNOWN", "homepage": "https://www.ncbi.nlm.nih.gov/research/bionlp/Tools/gnormplus/", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["NAMED_ENTITY_RECOGNITION", "NAMED_ENTITY_DISAMBIGUATION"]} | 2023-02-17T14:55:04+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-unknown #region-us
|
# Dataset Card for GNormPlus
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: NER,NED
We re-annotated two existing gene corpora. The BioCreative II GN corpus is a widely used data set for benchmarking GN
tools and includes document-level annotations for a total of 543 articles (281 in its training set; and 262 in test).
The Citation GIA Test Collection was recently created for gene indexing at the NLM and includes 151 PubMed abstracts
with both mention-level and document-level annotations. They are selected because both have a focus on human genes.
For both corpora, we added annotations of gene families and protein domains. For the BioCreative GN corpus, we also
added mention-level gene annotations. As a result, in our new corpus, there are a total of 694 PubMed articles.
PubTator was used as our annotation tool along with BioC formats.
| [
"# Dataset Card for GNormPlus",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,NED\n\n\nWe re-annotated two existing gene corpora. The BioCreative II GN corpus is a widely used data set for benchmarking GN\ntools and includes document-level annotations for a total of 543 articles (281 in its training set; and 262 in test).\nThe Citation GIA Test Collection was recently created for gene indexing at the NLM and includes 151 PubMed abstracts\nwith both mention-level and document-level annotations. They are selected because both have a focus on human genes.\nFor both corpora, we added annotations of gene families and protein domains. For the BioCreative GN corpus, we also\nadded mention-level gene annotations. As a result, in our new corpus, there are a total of 694 PubMed articles.\nPubTator was used as our annotation tool along with BioC formats."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-unknown #region-us \n",
"# Dataset Card for GNormPlus",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: NER,NED\n\n\nWe re-annotated two existing gene corpora. The BioCreative II GN corpus is a widely used data set for benchmarking GN\ntools and includes document-level annotations for a total of 543 articles (281 in its training set; and 262 in test).\nThe Citation GIA Test Collection was recently created for gene indexing at the NLM and includes 151 PubMed abstracts\nwith both mention-level and document-level annotations. They are selected because both have a focus on human genes.\nFor both corpora, we added annotations of gene families and protein domains. For the BioCreative GN corpus, we also\nadded mention-level gene annotations. As a result, in our new corpus, there are a total of 694 PubMed articles.\nPubTator was used as our annotation tool along with BioC formats."
] |
5177d3fb0681f27af37431f46617fea31d50bdc3 |
# Dataset Card for Hallmarks of Cancer
## Dataset Description
- **Homepage:** https://github.com/sb895/Hallmarks-of-Cancer
- **Pubmed:** True
- **Public:** True
- **Tasks:** TXTCLASS
The Hallmarks of Cancer (HOC) Corpus consists of 1852 PubMed publication
abstracts manually annotated by experts according to a taxonomy. The taxonomy
consists of 37 classes in a hierarchy. Zero or more class labels are assigned
to each sentence in the corpus. The labels are found under the "labels"
directory, while the tokenized text can be found under the "text" directory.
The filenames are the corresponding PubMed IDs (PMID).
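Given that layout, one plausible way to pair sentences with their labels is to walk the two directories by shared PMID file name. The sketch below assumes plain-text files with a `.txt` extension and one sentence (with one matching label line) per line; the exact on-disk label encoding is not specified in this card:
```python
# Minimal sketch: pair the "text" and "labels" directories by PMID file name.
from pathlib import Path

def load_hoc(corpus_root):
    corpus_root = Path(corpus_root)
    examples = []
    for text_file in sorted((corpus_root / "text").glob("*.txt")):
        pmid = text_file.stem
        label_file = corpus_root / "labels" / text_file.name
        sentences = text_file.read_text(encoding="utf-8").splitlines()
        label_lines = label_file.read_text(encoding="utf-8").splitlines()
        for sentence, label_line in zip(sentences, label_lines):
            examples.append({
                "pmid": pmid,
                "sentence": sentence,
                # Zero or more taxonomy labels per sentence; the line may be
                # empty, and its internal delimiter depends on the release.
                "raw_labels": label_line.strip(),
            })
    return examples
```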
## Citation Information
```
@article{DBLP:journals/bioinformatics/BakerSGAHSK16,
author = {Simon Baker and
Ilona Silins and
Yufan Guo and
Imran Ali and
Johan H{\"{o}}gberg and
Ulla Stenius and
Anna Korhonen},
title = {Automatic semantic classification of scientific literature
according to the hallmarks of cancer},
journal = {Bioinform.},
volume = {32},
number = {3},
pages = {432--440},
year = {2016},
url = {https://doi.org/10.1093/bioinformatics/btv585},
doi = {10.1093/bioinformatics/btv585},
timestamp = {Thu, 14 Oct 2021 08:57:44 +0200},
biburl = {https://dblp.org/rec/journals/bioinformatics/BakerSGAHSK16.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
| bigbio/hallmarks_of_cancer | [
"multilinguality:monolingual",
"language:en",
"license:gpl-3.0",
"region:us"
] | 2022-11-13T22:08:53+00:00 | {"language": ["en"], "license": "gpl-3.0", "multilinguality": "monolingual", "pretty_name": "Hallmarks of Cancer", "bigbio_language": ["English"], "bigbio_license_shortname": "GPL_3p0", "homepage": "https://github.com/sb895/Hallmarks-of-Cancer", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["TEXT_CLASSIFICATION"]} | 2022-12-22T15:44:44+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-gpl-3.0 #region-us
|
# Dataset Card for Hallmarks of Cancer
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: TXTCLASS
The Hallmarks of Cancer (HOC) Corpus consists of 1852 PubMed publication
abstracts manually annotated by experts according to a taxonomy. The taxonomy
consists of 37 classes in a hierarchy. Zero or more class labels are assigned
to each sentence in the corpus. The labels are found under the "labels"
directory, while the tokenized text can be found under the "text" directory.
The filenames are the corresponding PubMed IDs (PMID).
| [
"# Dataset Card for Hallmarks of Cancer",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: TXTCLASS\n\n\nThe Hallmarks of Cancer (HOC) Corpus consists of 1852 PubMed publication\nabstracts manually annotated by experts according to a taxonomy. The taxonomy\nconsists of 37 classes in a hierarchy. Zero or more class labels are assigned\nto each sentence in the corpus. The labels are found under the \"labels\"\ndirectory, while the tokenized text can be found under \"text\" directory.\nThe filenames are the corresponding PubMed IDs (PMID)."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-gpl-3.0 #region-us \n",
"# Dataset Card for Hallmarks of Cancer",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: TXTCLASS\n\n\nThe Hallmarks of Cancer (HOC) Corpus consists of 1852 PubMed publication\nabstracts manually annotated by experts according to a taxonomy. The taxonomy\nconsists of 37 classes in a hierarchy. Zero or more class labels are assigned\nto each sentence in the corpus. The labels are found under the \"labels\"\ndirectory, while the tokenized text can be found under \"text\" directory.\nThe filenames are the corresponding PubMed IDs (PMID)."
] |
12192d76a4d1cf1fbad39f119df56135bd206e5b |
# Dataset Card for HPRD50
## Dataset Description
- **Homepage:**
- **Pubmed:** True
- **Public:** True
- **Tasks:** RE,NER
HPRD50 is a dataset of randomly selected, hand-annotated abstracts of biomedical papers
referenced by the Human Protein Reference Database (HPRD). It is parsed in XML format,
splitting each abstract into sentences, and in each sentence there may be entities and
interactions between those entities. In this particular dataset, entities are all
proteins and interactions are thus protein-protein interactions.
Moreover, all entities are normalized to the HPRD database. These normalized terms are
stored in each entity's 'type' attribute in the source XML. This means the dataset can
determine e.g. that "Janus kinase 2" and "Jak2" are referencing the same normalized
entity.
Because the dataset contains entities and relations, it is suitable for Named Entity
Recognition and Relation Extraction.
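Because the source is sentence-segmented XML with entity and interaction elements, a rough parsing sketch could look like the following; the tag and attribute names are assumptions for illustration and should be checked against the actual files:
```python
# Illustrative parse of an HPRD50-style XML file. Element/attribute names
# ("document", "sentence", "entity", "interaction", "e1", "e2", "type")
# are assumed for the sketch, not taken from this card.
import xml.etree.ElementTree as ET

def read_hprd50(path):
    root = ET.parse(path).getroot()
    records = []
    for document in root.iter("document"):
        for sentence in document.iter("sentence"):
            entities = {
                ent.get("id"): {
                    "text": ent.get("text"),
                    # per the card, the normalized HPRD term is stored in 'type'
                    "normalized": ent.get("type"),
                }
                for ent in sentence.iter("entity")
            }
            interactions = [
                (rel.get("e1"), rel.get("e2")) for rel in sentence.iter("interaction")
            ]
            records.append({
                "sentence": sentence.get("text"),
                "entities": entities,
                "interactions": interactions,
            })
    return records
```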
## Citation Information
```
@article{fundel2007relex,
title={RelEx—Relation extraction using dependency parse trees},
author={Fundel, Katrin and K{\"u}ffner, Robert and Zimmer, Ralf},
journal={Bioinformatics},
volume={23},
number={3},
pages={365--371},
year={2007},
publisher={Oxford University Press}
}
```
| bigbio/hprd50 | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-11-13T22:08:57+00:00 | {"language": ["en"], "license": "unknown", "multilinguality": "monolingual", "pretty_name": "HPRD50", "bigbio_language": ["English"], "bigbio_license_shortname": "UNKNOWN", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["RELATION_EXTRACTION", "NAMED_ENTITY_RECOGNITION"]} | 2022-12-22T15:44:46+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-unknown #region-us
|
# Dataset Card for HPRD50
## Dataset Description
- Homepage:
- Pubmed: True
- Public: True
- Tasks: RE,NER
HPRD50 is a dataset of randomly selected, hand-annotated abstracts of biomedical papers
referenced by the Human Protein Reference Database (HPRD). It is parsed in XML format,
splitting each abstract into sentences, and in each sentence there may be entities and
interactions between those entities. In this particular dataset, entities are all
proteins and interactions are thus protein-protein interactions.
Moreover, all entities are normalized to the HPRD database. These normalized terms are
stored in each entity's 'type' attribute in the source XML. This means the dataset can
determine e.g. that "Janus kinase 2" and "Jak2" are referencing the same normalized
entity.
Because the dataset contains entities and relations, it is suitable for Named Entity
Recognition and Relation Extraction.
| [
"# Dataset Card for HPRD50",
"## Dataset Description\n\n- Homepage: \n- Pubmed: True\n- Public: True\n- Tasks: RE,NER\n\n\nHPRD50 is a dataset of randomly selected, hand-annotated abstracts of biomedical papers\nreferenced by the Human Protein Reference Database (HPRD). It is parsed in XML format,\nsplitting each abstract into sentences, and in each sentence there may be entities and\ninteractions between those entities. In this particular dataset, entities are all\nproteins and interactions are thus protein-protein interactions.\n\nMoreover, all entities are normalized to the HPRD database. These normalized terms are\nstored in each entity's 'type' attribute in the source XML. This means the dataset can\ndetermine e.g. that \"Janus kinase 2\" and \"Jak2\" are referencing the same normalized\nentity.\n\nBecause the dataset contains entities and relations, it is suitable for Named Entity\nRecognition and Relation Extraction."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-unknown #region-us \n",
"# Dataset Card for HPRD50",
"## Dataset Description\n\n- Homepage: \n- Pubmed: True\n- Public: True\n- Tasks: RE,NER\n\n\nHPRD50 is a dataset of randomly selected, hand-annotated abstracts of biomedical papers\nreferenced by the Human Protein Reference Database (HPRD). It is parsed in XML format,\nsplitting each abstract into sentences, and in each sentence there may be entities and\ninteractions between those entities. In this particular dataset, entities are all\nproteins and interactions are thus protein-protein interactions.\n\nMoreover, all entities are normalized to the HPRD database. These normalized terms are\nstored in each entity's 'type' attribute in the source XML. This means the dataset can\ndetermine e.g. that \"Janus kinase 2\" and \"Jak2\" are referencing the same normalized\nentity.\n\nBecause the dataset contains entities and relations, it is suitable for Named Entity\nRecognition and Relation Extraction."
] |
26c3374dd37305fd8ad410ded80fb9cd0db7fde8 |
# Dataset Card for IEPA
## Dataset Description
- **Homepage:** http://psb.stanford.edu/psb-online/proceedings/psb02/abstracts/p326.html
- **Pubmed:** True
- **Public:** True
- **Tasks:** RE
The IEPA benchmark PPI corpus is designed for relation extraction. It was created from 303 PubMed abstracts, each of which contains a specific pair of co-occurring chemicals.
## Citation Information
```
@ARTICLE{ding2001mining,
title = "Mining {MEDLINE}: abstracts, sentences, or phrases?",
author = "Ding, J and Berleant, D and Nettleton, D and Wurtele, E",
journal = "Pac Symp Biocomput",
pages = "326--337",
year = 2002,
address = "United States",
language = "en"
}
```
| bigbio/iepa | [
"multilinguality:monolingual",
"language:en",
"license:unknown",
"region:us"
] | 2022-11-13T22:09:00+00:00 | {"language": ["en"], "license": "unknown", "multilinguality": "monolingual", "pretty_name": "IEPA", "bigbio_language": ["English"], "bigbio_license_shortname": "UNKNOWN", "homepage": "http://psb.stanford.edu/psb-online/proceedings/psb02/abstracts/p326.html", "bigbio_pubmed": true, "bigbio_public": true, "bigbio_tasks": ["RELATION_EXTRACTION"]} | 2022-12-22T15:44:47+00:00 | [] | [
"en"
] | TAGS
#multilinguality-monolingual #language-English #license-unknown #region-us
|
# Dataset Card for IEPA
## Dataset Description
- Homepage: URL
- Pubmed: True
- Public: True
- Tasks: RE
The IEPA benchmark PPI corpus is designed for relation extraction. It was created from 303 PubMed abstracts, each of which contains a specific pair of co-occurring chemicals.
| [
"# Dataset Card for IEPA",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: RE\n\n\nThe IEPA benchmark PPI corpus is designed for relation extraction. It was created from 303 PubMed abstracts, each of which contains a specific pair of co-occurring chemicals."
] | [
"TAGS\n#multilinguality-monolingual #language-English #license-unknown #region-us \n",
"# Dataset Card for IEPA",
"## Dataset Description\n\n- Homepage: URL\n- Pubmed: True\n- Public: True\n- Tasks: RE\n\n\nThe IEPA benchmark PPI corpus is designed for relation extraction. It was created from 303 PubMed abstracts, each of which contains a specific pair of co-occurring chemicals."
] |